#ceph IRC Log

Index

IRC Log for 2012-01-17

Timestamps are in GMT/BST.

[0:09] * fronlius (~fronlius@f054106203.adsl.alicedsl.de) Quit (Quit: fronlius)
[1:11] * JefferyT (~jefft@pool-72-71-225-27.cncdnh.east.myfairpoint.net) has joined #ceph
[1:11] <JefferyT> Check out this viral video http://bit.ly/x5JHCf
[1:11] * JefferyT (~jefft@pool-72-71-225-27.cncdnh.east.myfairpoint.net) has left #ceph
[1:16] <nhm> heh
[2:31] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[2:38] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[2:39] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) has joined #ceph
[2:43] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit ()
[2:49] * The_Bishop (~bishop@cable-89-16-138-109.cust.telecolumbus.net) has joined #ceph
[2:50] * jojy (~jvarghese@108.60.121.114) Quit (Quit: jojy)
[2:50] * zphj1987 (~zphj1987@113.106.102.5) has joined #ceph
[2:52] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[2:58] * zphj1987 (~zphj1987@113.106.102.5) has left #ceph
[4:54] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) has joined #ceph
[5:00] * The_Bishop (~bishop@cable-89-16-138-109.cust.telecolumbus.net) Quit (Quit: Who the hell is this peer? If I catch him I'll reset his connection!)
[5:23] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[5:53] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:58] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:00] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) has joined #ceph
[6:10] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[6:20] * The_Bishop (~bishop@e179015184.adsl.alicedsl.de) has joined #ceph
[7:02] * prometheanfire (~promethea@rrcs-24-173-105-84.sw.biz.rr.com) has joined #ceph
[7:03] <prometheanfire> does ceph experience split brain scenarios and if so, what is the recovery methodology?
[7:06] <prometheanfire> ah, logs :D http://irclogs.ceph.widodh.nl/index.php?date=2011-03-17
[7:13] <prometheanfire> but it still does not go over what happens when more nodes than the redundancy level go offline, does the FS simply become unavailable?
[7:13] <prometheanfire> If it does become unavailable, does it behave like NFS (badly)?
[7:13] * prometheanfire (~promethea@rrcs-24-173-105-84.sw.biz.rr.com) Quit (synthon.oftc.net graviton.oftc.net)
[7:13] * svenx_ (92744@diamant.ifi.uio.no) Quit (synthon.oftc.net graviton.oftc.net)
[7:19] * prometheanfire (~promethea@rrcs-24-173-105-84.sw.biz.rr.com) has joined #ceph
[7:19] * svenx_ (92744@diamant.ifi.uio.no) has joined #ceph
[7:28] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Remote host closed the connection)
[7:30] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[7:31] * Kioob (~kioob@luuna.daevel.fr) Quit (Quit: Leaving.)
[7:53] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[8:12] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) has joined #ceph
[8:12] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit ()
[8:23] * Meths (rift@2.25.193.184) Quit (Read error: Connection reset by peer)
[8:24] * Meths (rift@2.25.193.184) has joined #ceph
[8:25] * Meths (rift@2.25.193.184) Quit (Read error: Connection reset by peer)
[8:29] * Meths (rift@2.25.193.184) has joined #ceph
[8:54] * BManojlovic (~steki@93-87-148-183.dynamic.isp.telekom.rs) has joined #ceph
[9:04] * bchrisman1 (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[9:04] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[9:11] * Lo-lan-do (~roland@mirenboite.placard.fr.eu.org) has joined #ceph
[9:12] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:14] <chaos_> sagewk, is there any manual for "ceph --admin-socket /path/to/sock command" ?
[9:15] * The_Bishop (~bishop@e179015184.adsl.alicedsl.de) Quit (Quit: Who the hell is this peer? If I catch him I'll reset his connection!)
[9:52] * fronlius (~fronlius@testing78.jimdo-server.com) has joined #ceph
[9:53] * Meths (rift@2.25.193.184) Quit (Read error: Connection reset by peer)
[10:41] * BManojlovic (~steki@93-87-148-183.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[11:04] * Kioob`Taff1 (~plug-oliv@89-156-116-126.rev.numericable.fr) has joined #ceph
[11:20] * Meths (rift@2.25.214.71) has joined #ceph
[11:24] * BManojlovic (~steki@93-87-148-183.dynamic.isp.telekom.rs) has joined #ceph
[12:15] * gregorg_taf (~Greg@78.155.152.6) Quit (Ping timeout: 480 seconds)
[12:40] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[13:41] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:45] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:03] <wonko_be> can you "add" a rbd device on the same host as a monitor?
[14:03] <wonko_be> or would you advise against it?
[14:16] * mtk (~mtk@ool-44c35967.dyn.optonline.net) has joined #ceph
[15:47] <iggy> prometheanfire: there are probably only a few people that can answer that (in theory) and probably fewer have actually tried it
[15:48] <iggy> wonko_be: I asked about this recently... if you are using userspace access to rbd (i.e. qemu w/ librbd) vs a kernel client, it should be okay
[15:49] <prometheanfire> iggy: how are you?
[15:49] <prometheanfire> didn't know you used ceph
[15:50] <iggy> prometheanfire: I haven't actually played with it in a few months, but yeah the eventual goal is kvm+ceph=win
[15:51] <prometheanfire> well, ceph + kvm + rbd
[15:51] <prometheanfire> it's been about a year since I first tried it
[15:51] <prometheanfire> you gonna be watching the redhat thing tomorrow?
[15:51] <iggy> I'm thinking about it, need to check I don't have any lame ass meetings to go to here
[15:52] <prometheanfire> I dunno if I want to, seems like something that is not a tech demo
[15:53] <iggy> ehh, still might be able to glean some details about where things are going, etc.
[15:54] <prometheanfire> ya, but more than a quick news article would give?
[15:54] <iggy> plus I'm trying to sell the people here on rhev
[15:54] <iggy> so I'll get some good marketing garbage I can regurgitate on command
[15:55] <prometheanfire> heh, I'm thinking about ganeti
[15:57] <iggy> well, they are already a big RH shop here, so it just makes sense
[15:58] <prometheanfire> ya, I think gentoo has kept up to date too
[15:59] <prometheanfire> although we only have 38
[15:59] <prometheanfire> but I can fix that :D
[16:25] * bchrisman1 (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[16:36] <nhm> prometheanfire: Some of the folks here are running Ganeti successfully with Debian. I haven't played with it at all yet, but they seem to like it.
[16:37] * elder (~elder@c-71-193-71-178.hsd1.mn.comcast.net) has joined #ceph
[16:40] <prometheanfire> I'd imagine using file based storage?
[16:40] <prometheanfire> the dream is to use rbd :D
[16:42] <nhm> prometheanfire: yeah, they've got a netapp and a beefy ZFS server they use for everything.
[16:42] <iggy> ganeti uses libvirt in the background?
[16:42] <prometheanfire> not that I know
[16:42] <prometheanfire> nhm: why use ceph on centralized storage (netapp)?
[16:42] <iggy> oh, it just makes god awful command lines too I guess
[16:43] <nhm> prometheanfire: They don't, I thought you were talking about Ganeti.
[16:44] <nhm> prometheanfire: we're primarily a netapp/lustre shop for the moment.
[16:44] <prometheanfire> ah :D
[16:51] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) has joined #ceph
[17:06] * BManojlovic (~steki@93-87-148-183.dynamic.isp.telekom.rs) Quit (Quit: I'm off, you do what you want...)
[17:07] <prometheanfire> but on oftc #grsecurity #pax on freenode #gentoo-hardened #gentoo-voip #mangler (hi econnell) #icinga-dev ##/r/sysadmin #gentoo-dev #ceph
[17:07] <prometheanfire> BOOO
[17:30] * elder (~elder@c-71-193-71-178.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[17:39] * Tv (~Tv|work@aon.hq.newdream.net) has joined #ceph
[17:50] * elder (~elder@c-71-193-71-178.hsd1.mn.comcast.net) has joined #ceph
[17:55] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[18:00] <gregaf1> Kioob`Taff1: there are often ways to debug things; can you give us more info?
[18:00] <gregaf1> lxo: I think Sage hacked in cluster_snap, but be careful; it's not tested at all and I don't think there's any way to actually make use of it right now
[18:03] <gregaf1> prometheanfire: no split brain; the OSDs are very careful about it
[18:03] <gregaf1> if the nodes hosting all copies of some data go offline, that data becomes unavailable, but the rest of the FS should remain operational
[18:03] <gregaf1> (assuming that you didn't lose metadata)
[18:04] * jojy (~jvarghese@108.60.121.114) has joined #ceph
[18:04] <prometheanfire> ok, does it fail like nfs, (badly)?
[18:04] <gregaf1> I have blessedly not used NFS much, so I don't know a lot about its practical failure modes
[18:04] <gregaf1> ;)
[18:05] <gregaf1> in theory, no; in practice, sometimes
[18:05] <prometheanfire> ok, as long as split brain is avoided
[18:05] <gregaf1> yeah; it definitely is
[18:07] <Tv> INFO:teuthology.task.ceph.osd.0.err:daemon-helper: command crashed with signal 6
[18:07] <Tv> :(
[18:07] <prometheanfire> trying to decide between http://goo.gl/4jbTU for personal use
[18:07] * prometheanfire wants to set up a cluster at home
[18:09] <Lo-lan-do> Now that this channel seems to be a bit more alive than when I initially asked… any comments on http://roland.entierement.nu/blog/2012/01/15/looking-for-the-ultimate-distributed-filesystem.html ?
[18:09] <Tv> "filestore(/tmp/cephtest/data/osd.0.data) ENOSPC on setxattr on 9.6_head/2012-01-17-09-0-aaa.." :(
[18:10] <gregaf1> I'm just looking at it now, Lo-lan-do…we'll have some updates for you soon!
[18:10] <Tv> Lo-lan-do: 1) author seems to think ceph doesn't do repairs & rebalancing at all, that's a bit weird 2) ceph is not suitable for a WAN use case
[18:11] <nhm> prometheanfire: they are nearly identical...
[18:11] <prometheanfire> ya, only real diff is the PS
[18:11] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[18:11] <Lo-lan-do> Tv: Author is me, and I got this impression from reading the web site. I may be confused.
[18:11] <prometheanfire> didn't know how much redundancy I should build into home
[18:11] <Lo-lan-do> gregaf1: Great :-)
[18:12] <prometheanfire> if the failure caused split brains then I would get the one with dual PS
[18:12] <nhm> prometheanfire: how many nodes?
[18:12] <Tv> Lo-lan-do: all data stored on OSDs is replicated as configured, and they autonomously try to reach that replication level after node losses etc
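Tv's point — OSDs autonomously working back to the configured replication level after a node loss — boils down to a loop like this toy sketch (hypothetical code, not Ceph's; real recovery is driven by CRUSH placement and peering):

```python
def recover(placement, down_osds, all_osds, repl=2):
    """Toy re-replication: for each object, drop dead replicas and
    pick new live OSDs until the configured level is restored."""
    down = set(down_osds)
    live = [o for o in all_osds if o not in down]
    healed = {}
    for obj, osds in placement.items():
        copies = [o for o in osds if o not in down]   # surviving replicas
        for o in live:                                # top back up to repl
            if len(copies) >= repl:
                break
            if o not in copies:
                copies.append(o)
        healed[obj] = copies
    return healed

placement = {"x": [0, 1], "y": [1, 2]}
print(recover(placement, down_osds=[1], all_osds=[0, 1, 2]))
# -> {'x': [0, 2], 'y': [2, 0]}: both objects are back at 2 copies
```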
[18:12] <prometheanfire> 4-5 I think
[18:12] <prometheanfire> 3 to start if I can
[18:12] <prometheanfire> this is just for home
[18:12] <Lo-lan-do> Tv: Ah, good. Thanks for the heads-up about WAN, too.
[18:13] <prometheanfire> at 5 I will want double redundancy I think
[18:13] <nhm> prometheanfire: I was actually thinking of building a home cluster for testing using old Dell PE840s off ebay for $80 a pop.
[18:13] <prometheanfire> this personal project is about 7-8 months down the road at least
[18:14] <prometheanfire> they will also host VMs using ganeti, so it has to have a virt capable proc. I will also be migrating my data from my current setup (8-9 TB).
[18:15] <prometheanfire> probably not worth it to get an ssd for caching (for my purposes)
[18:19] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[18:25] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[18:30] <Kioob`Taff1> (18:00:06) gregaf1: Kioob`Taff1: there are often ways to debug things; can you give us more info? <== well I start by upgrading from 0.38 to 0.40, and I didn't have the problem anymore. So I suppose it was already fixed.
[18:30] <Kioob`Taff1> but one thing : is there a way to consult each "copy" of one file ?
[18:40] <gregaf1> Kioob`Taff1: not through the filesystem interface, no
[18:40] <Kioob`Taff1> and through the "ceph" tool ?
[18:41] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[18:41] <gregaf1> we don't have an interface set up for it because we haven't seen a need (although I guess maybe for checking state it would be nice)
[18:41] * lx0 is now known as lxo
[18:41] <gregaf1> so no, not the ceph tool either :(
[18:42] <Kioob`Taff1> ok, thanks :)
[18:42] <gregaf1> if it's a small file you could find out what object it's stored in and pull that out of the OSD's storage space, if you actually need to look at them...
[18:42] <gregaf1> but in general it's not supposed to be interesting
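gregaf1's suggestion of locating the backing object can be sketched. Under the default layout, a CephFS file's data maps to RADOS objects named "<inode hex>.<object number as 8 hex digits>", one per 4 MB chunk — treat both the naming convention and the 4 MB default as assumptions to verify against your version:

```python
def cephfs_object_names(inode, file_size, object_size=4 << 20):
    """Object names for a CephFS file under simple (non-striped) layout:
    '<inode-hex>.<object-number as 8 hex digits>', one per chunk."""
    nobjects = max(1, -(-file_size // object_size))  # ceiling division
    return ["%x.%08x" % (inode, i) for i in range(nobjects)]

# A 10 MB file at inode 0x10000000000 spans three 4 MB objects:
print(cephfs_object_names(0x10000000000, 10 * 1024 * 1024))
# -> ['10000000000.00000000', '10000000000.00000001', '10000000000.00000002']
```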
[18:42] <Kioob`Taff1> yes, I understand
[18:42] <lxo> so, I've just tried to rollback to an old snapshot by restarting all active osds with an “osd rollback to cluster snap = <name>” in ceph.conf; no luck, it located the snapshot, said it was rolling back to it, then failed trying to convert the repository (doh)
[18:43] <Kioob`Taff1> in case of data corruption, I would like to see if the problem was from ceph or not
[18:43] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[18:43] <lxo> then I retried removing all existing snap_* and current, and it failed to find current. then I btrfs snapshotted clustersnap_<name> to current, and it went fine. then I commented out the line from ceph.conf. is that how one's supposed to rollback to snapshots for now?
[18:44] <gregaf1> lxo: like I said, cluster snapshot is *NOT TESTED BEEP BEEP BEEP ALERT*
[18:44] <gregaf1> hell, I don't think it's documented; sagewk?
[18:45] <lxo> err, I don't recall seeing you say that. I guess I missed that in one of the many network failures I've experienced lately
[18:45] <lxo> anyway... it does take the snapshots fine, and rolling back to them works, if manually, so I'm a happy camper ;-)
[18:45] <gregaf1> about 45 minutes ago, sorry
[18:46] <lxo> aah, I can see it now. thanks
[18:46] <lxo> I had missed it all right, but not because of the network ;-)
[18:49] <lxo> anyway, since I got your attention ;-) any tips on where to look to figure out why restarting the mds after changing the timestamp of a directory in which I'd just created a snapshot changes the timestamp that was recorded in the snapshot as well? it seems to also mispropagate size info, e.g., if I create a large file in that subtree and restart the mds, the snapshot will display the increased size too
[18:49] * fronlius (~fronlius@testing78.jimdo-server.com) Quit (Quit: fronlius)
[18:50] <gregaf1> oh right, I was going to ask about that stuff from you
[18:50] <lxo> I don't know for sure whether it's the action of restarting the mds that causes this, or if it's already wrong on disk, but only gets brought into the mds state when the mds is restarted
[18:51] <gregaf1> so yeah, probably there's an error in the way the "snaptree" is propagating and inheriting statistics; the more info you can provide on reproducing this the easier it will be to track down
[18:51] <gregaf1> but we're really focused on other parts of the system right now and snapshot times are sufficiently niche that I don't think we're going to be able to devote resources to it right now (unless sagewk or Mark say otherwise) :/
[18:52] <lxo> yeah, I don't mind looking into it myself, but I could use some pointers into the code to speed that up
[18:52] <lxo> in fact, I *do* want to look into it myself, but I can file a bug report with a very simple reproducer if that helps
[18:53] <gregaf1> well, without looking into it more, everything involving SnapTree manipulation
[18:53] <lxo> okiedokie, here we go ;-) tks
[18:54] * fronlius (~fronlius@testing78.jimdo-server.com) has joined #ceph
[18:55] * fronlius (~fronlius@testing78.jimdo-server.com) Quit ()
[18:55] <lxo> one thing that might help me nail it down: how can I find the timestamp/inode info of a directory and its snapshots on disk?
[18:56] <lxo> if I can tell whether it's being mangled when the dir timestamp is changed, or when the mds replays its log, that will cut down on debugging already
[18:57] <sagewk> lxo: one interesting data point would be whether it is the mds restart or client reconnect that causes the miscalculation.. try unmounting the client (or doing a sync) and then restarting the mds and see if it happens then, too.
[18:57] <lxo> say, I have the dir inode, and I located the file corresponding to that inode in osd*/current/1.*/**
[18:58] <lxo> sagewk, that I've already done, it's not the client
[18:58] <sagewk> lxo: it lives in the journal for a while before getting flushed to the directory object, so it's hard to nail down on disk. you'll have better luck inspecting the log than the on-disk data.
[18:58] <sagewk> hmm!
[18:59] * aliguori (~anthony@32.97.110.59) has joined #ceph
[18:59] <lxo> sagewk, I can wait for the dir object to be updated, but yeah, I guess I can turn up mds logging and see what that tells me
[19:01] <sagewk> lxo: does it "lose" the snapped dir stat update, or does it get shifted to the head?
[19:01] <lxo> oh, BTW, one thing I noticed about ceph-fuse mounts is that, even after umount, a ceph-fuse process is left behind, apparently flushing data to mds and osds in background. I'm concerned this could lead to data loss on shutdown/reboot. shouldn't the umount only complete when info is stable on disk, i.e., without leaving this ceph-fuse behind?
[19:01] <lxo> it gets shifted to the head
[19:02] * jojy (~jvarghese@108.60.121.114) Quit (Quit: jojy)
[19:02] <sagewk> lxo: I think that's a function of how the fusermount interacts via the fuse stuff.. not sure we can control it without changing fuse itself. may be wrong tho..
[19:03] <lxo> I tried creating files in the head, to see if that would set it apart (wild guess), but that made no difference
[19:04] <lxo> hey, I wonder if that's caused by a bind-automount! that could be it!
[19:04] <lxo> (the left-over ceph-fuse)
[19:05] <lxo> I think I've seen bind mounts of ceph-fuse mounts fork off additional processes
[19:08] * jojy (~jvarghese@108.60.121.114) has joined #ceph
[19:09] <sagewk> hmm, looking over the code it looks like old_inodes isn't journaled properly (or at all) right now. that would explain it
[19:10] <sagewk> EMetaBlob::fullbit needs an old_inodes member, and add_primary_dentry and add_root need to pass it in to the constructor
[19:11] <sagewk> w/ proper versioning and all that
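The bug sagewk is describing — state that never makes it into the journal cannot survive a replay — can be shown with a toy journal (the Journal class here is illustrative, not a Ceph type):

```python
# Toy model: an MDS that recovers its state purely by replaying the
# journal forgets anything that was never journaled.

class Journal:
    def __init__(self):
        self.entries = []

    def record(self, key, value):
        self.entries.append((key, value))

    def replay(self):
        """Rebuild state from journaled entries only."""
        state = {}
        for key, value in self.entries:
            state[key] = value
        return state

j = Journal()
j.record("head_inode.mtime", "2012-01-17")     # journaled: survives replay
old_inodes = {"snap1.mtime": "2012-01-01"}      # never journaled

recovered = j.replay()
print("head_inode.mtime" in recovered)   # True
print("snap1.mtime" in recovered)        # False: the snapshot metadata is
                                         # gone, so the head version stands
                                         # in for the snapshotted one
```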
[19:12] <lxo> ok, it will take me a while to as much as understand what you're saying, but I'll try ;-)
[19:12] <sagewk> i can help you out :)
[19:12] * Kioob`Taff1 (~plug-oliv@89-156-116-126.rev.numericable.fr) Quit (Quit: Leaving.)
[19:13] <lxo> thanks
[19:13] <lxo> I gotta do some Real Work (TM) now, but I'll get back to it probably tonight
[19:13] <lxo> since you won't be around then, I'll try to make sense of it now
[19:21] <lxo> oh, I forgot I have patches for the .spec file. will post momentarily
[19:35] <lxo> sagewk, sanity check: why should you need to journal snapshotted inodes? they're not supposed to change!
[19:46] <sagewk> by design any and all metadata changes go into the journal, and are only lazily written to the per-directory objects. it's mainly about making the mds io profile efficient
[19:50] <lxo> right. so, again, why do they need journaling, if they're *not* to change?
[19:50] <lxo> do you mean they need journaling at the time of the snap creation?
[19:53] <gregaf1> lxo: at snap creation, yeah — metadata snapshots are handled differently from data (RADOS object) snapshots
[19:54] * fronlius (~fronlius@d217032.adsl.hansenet.de) has joined #ceph
[19:54] <gregaf1> RADOS objects snapshot the actual object on disk (with btrfs, if using that, otherwise we hack our own COW over top), but the metadata snapshots are extra references which are stored in the regular metadata on-disk object since they need to actually interface with the rest of the metadata
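The COW idea gregaf1 mentions can be sketched with a toy snapshot store (illustrative only; as the comment notes, a real COW store shares unmodified data instead of copying eagerly at snapshot time):

```python
class CowStore:
    """Toy copy-on-write object store: a snapshot pins the versions that
    existed at snapshot time; later writes only change the head."""
    def __init__(self):
        self.head = {}
        self.snaps = {}

    def write(self, name, data):
        self.head[name] = data

    def snapshot(self, snapname):
        # Eager copy for simplicity; a real COW store would share data
        # and only copy an object when it is next written.
        self.snaps[snapname] = dict(self.head)

    def read(self, name, snapname=None):
        src = self.snaps[snapname] if snapname else self.head
        return src.get(name)

s = CowStore()
s.write("obj", b"v1")
s.snapshot("snap1")
s.write("obj", b"v2")
print(s.read("obj"))            # b'v2' (head moved on)
print(s.read("obj", "snap1"))   # b'v1' (snapshot still sees the old data)
```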
[19:56] <sagewk> someone want to take a quick look at wip-osd-dump-journal?
[19:56] <sagewk> the wip-encoding branch dumps it in json.. this can probably switch over to that eventually. it's still useful as is, though.
[19:57] <lxo> hmm... so the theory is that the problem comes about when the mds replays the snapshot creation operation from mds journal to osd objects, using already-modified directory state?
[19:58] <gregaf1> well, the mds journal is composed of osd objects…but assuming you mean the replay from journal to actual directory objects, yes, somewhere in that path
[19:59] <lxo> yeah, I meant directory objects, sorry
[20:00] * cp (~cp@75.103.61.58) has joined #ceph
[20:00] * cp (~cp@75.103.61.58) Quit ()
[20:04] * The_Bishop (~bishop@cable-89-16-138-109.cust.telecolumbus.net) has joined #ceph
[20:04] <gregaf1> sagewk: probably want sjust to check it but my naive impression is it's fine
[20:04] <gregaf1> but if we have a good implementation already done I'm not sure it's appropriate to put in a dumb implementation
[20:05] <sagewk> lxo: sort of. the mds is recovering all of its state from the journal, but since old_inodes isn't journaled it's effectively forgetting everything about it, and treating the head inode as the snapshotted version.
[20:05] <joshd> sagewk: I agree with gregaf1, although I'd like to see ceph_osd's main split into more than one function in the future
[20:05] <sagewk> k
[20:10] * sagewk (~sage@aon.hq.newdream.net) Quit (Read error: Connection reset by peer)
[20:15] * sagewk (~sage@aon.hq.newdream.net) has joined #ceph
[20:20] <lxo> ok, thanks, I think I have enough of the picture to make sense of it now
[20:20] <lxo> FWIW, I've just opened #1946 for this issue
[20:21] <lxo> and forgot the <pre></pre> markers to avoid comments becoming numbered bullets :-(
[20:35] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[21:42] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:52] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:58] * Lo-lan-do (~roland@mirenboite.placard.fr.eu.org) Quit (Ping timeout: 480 seconds)
[22:05] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[22:24] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[23:48] <Tv> oh wow, a 220MHz embedded processor
[23:48] <Tv> haven't seen one that slow in a while ;)
[23:54] <nhm> Tv: There must still be 6500s floating around
[23:55] <nhm> sorry, 6502
[23:55] <iggy> I think my toaster is faster than that

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.