#ceph IRC Log

IRC Log for 2010-11-15

Timestamps are in GMT/BST.

[0:10] * allsystemsarego (~allsystem@188.26.32.123) Quit (Quit: Leaving)
[0:22] * alexxy[home] (~alexxy@79.173.81.171) has joined #ceph
[0:22] * alexxy (~alexxy@79.173.81.171) Quit (Read error: Connection reset by peer)
[3:24] * xilei (~xilei@61.135.165.172) has joined #ceph
[5:32] * terang (~me@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[6:45] * sentinel_e86 (~sentinel_@188.226.51.71) Quit (Remote host closed the connection)
[6:52] * sentinel_e86 (~sentinel_@188.226.51.71) has joined #ceph
[7:18] * atg (~atg@please.dont.hacktheinter.net) Quit (Read error: Connection reset by peer)
[7:19] * atg (~atg@please.dont.hacktheinter.net) has joined #ceph
[7:19] * atg (~atg@please.dont.hacktheinter.net) Quit (Remote host closed the connection)
[7:19] * atg (~atg@please.dont.hacktheinter.net) has joined #ceph
[7:42] * atg (~atg@please.dont.hacktheinter.net) Quit (Remote host closed the connection)
[7:48] * f4m8_ is now known as f4m8
[7:55] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[8:14] * atg (~atg@please.dont.hacktheinter.net) has joined #ceph
[8:18] * andret (~andre@pcandre.nine.ch) Quit (Remote host closed the connection)
[8:21] * andret (~andre@pcandre.nine.ch) has joined #ceph
[8:22] * atg (~atg@please.dont.hacktheinter.net) Quit (Ping timeout: 480 seconds)
[8:33] * Jiaju (~jjzhang@222.126.194.154) Quit (Ping timeout: 480 seconds)
[8:34] * Jiaju (~jjzhang@222.126.194.154) has joined #ceph
[8:47] * Jiaju (~jjzhang@222.126.194.154) Quit (Ping timeout: 480 seconds)
[8:54] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)
[8:59] * allsystemsarego (~allsystem@188.26.32.123) has joined #ceph
[9:04] * Jiaju (~jjzhang@222.126.194.154) has joined #ceph
[10:01] * gregorg (~Greg@78.155.152.6) has joined #ceph
[10:20] * Yoric (~David@213.144.210.93) has joined #ceph
[10:51] * vituko (~vituko@76.180.18.95.dynamic.jazztel.es) has joined #ceph
[11:32] <vituko> Hi, has anyone tried to deploy this software as a community infrastructure over long distances? I've read in the FAQ that Ceph shouldn't be chosen in a low-bandwidth/high-latency environment, but these are relative measurements, and so is acceptable performance. I mean, what about a distributed wireless network, with some connections over ADSL or optical fiber? I came across this project while searching because I had a need;
[11:32] <vituko> the idea of striping is maybe the only choice when the connection is a relative bottleneck, in terms of bandwidth of course; the latency could be addressed with permanent connections... Other desired features (besides availability and reliability) would be privacy/security: the first point being that it should be impossible to recover data from just one node, because physically some of the bits are not there, possibly requiring some minimum number of nodes to recover the data.
[11:32] <vituko> To be honest, I began to think of mdadm over drbd as a starting point. Any ideas on this topic?
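The "minimum number of nodes to recover data" idea vituko raises is not something Ceph provides out of the box; it is essentially secret sharing / erasure coding. A toy Python sketch of the all-shares-required variant could look like the following (the function names and the n-of-n XOR scheme are illustrative only, not part of Ceph):

    import os
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        """XOR two equal-length byte strings."""
        return bytes(x ^ y for x, y in zip(a, b))

    def split(data: bytes, n: int) -> list:
        """Split data into n shares; all n are required to reconstruct it."""
        shares = [os.urandom(len(data)) for _ in range(n - 1)]
        shares.append(reduce(xor_bytes, shares, data))  # data XOR every random share
        return shares

    def reconstruct(shares) -> bytes:
        """XOR all shares back together to recover the original data."""
        return reduce(xor_bytes, shares)

    if __name__ == "__main__":
        parts = split(b"some community data", 3)
        assert reconstruct(parts) == b"some community data"

With an XOR split like this, any n-1 shares look like random data, which is the property vituko describes; schemes such as Shamir secret sharing or Reed-Solomon erasure codes generalize this to k-of-n recovery.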
[11:42] * atg (~atg@please.dont.hacktheinter.net) has joined #ceph
[13:53] * xilei (~xilei@61.135.165.172) Quit (Quit: Leaving)
[16:00] * f4m8 is now known as f4m8_
[17:33] <wido> vituko: don't try it
[17:33] <wido> Ceph is designed to work within the same DC/LAN, or at least over low-latency links
[17:34] <wido> RADOS might work over long distances, but the Ceph filesystem will not perform
[17:34] <wido> technically it would work, but don't expect anything of it
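wido's distinction here is between the POSIX filesystem layer and RADOS, the underlying object store. If only object access over a wide-area link were needed, talking to RADOS directly could be sketched with the Python librados bindings roughly as below; the config path and the pool name "data" are assumptions, and this says nothing about how well it would perform over a high-latency link:

    import rados

    # The conffile path and the pool name "data" are assumptions for this sketch.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    ioctx = cluster.open_ioctx("data")
    try:
        # Store and read back a single object, bypassing the POSIX filesystem layer.
        ioctx.write_full("hello-object", b"stored via librados")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
        cluster.shutdown()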
[17:38] * atg (~atg@please.dont.hacktheinter.net) Quit (Ping timeout: 480 seconds)
[17:52] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:04] * atg (~atg@please.dont.hacktheinter.net) has joined #ceph
[18:31] <vituko> wido: have you already experimented with this scenario?
[18:38] <gregaf> vituko: Ceph's architecture assumes a low-latency environment
[18:39] <gregaf> you could probably get it to run over the internet but there's just no way it will be a pleasant experience
[18:39] <vituko> ok
[18:39] <gregaf> from what you're describing, with sharding and wide-area distribution, you might want to look into another project, a la TahoeFS?
[18:52] <sagewk> wido: how frequently are you seeing that btrfs warning?
[18:55] * Yoric (~David@213.144.210.93) Quit (Quit: Yoric)
[18:56] * sjust (~sam@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:57] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[19:50] * asllkj (~lbz@fw1.aspsys.com) Quit (Quit: Leaving)
[19:52] <wido> sagewk: I'm not sure, let me check my cluster
[19:52] <wido> vituko: No, but that's what the devs are saying
[19:53] <vituko> understood, a design question
[19:53] <wido> yes, indeed
[19:54] <vituko> I'll continue searching more possibilities
[19:54] <sagewk> vituko: check xtreemfs if you haven't already
[19:54] <wido> sagewk: right now I'm seeing it on one other node
[19:54] <vituko> I'll do, thanks
[19:56] <sagewk> wido: if it's reproducible, maybe you can try with http://fpaste.org/UjSV/ applied and see if it goes away?
[19:58] <wido> sagewk: I'll try, but I'm not sure; I don't see it every day, so it will take some time
[19:58] <sagewk> ok
[19:58] <wido> I'll rebuild the current unstable and let you know
[19:59] <sagewk> k thanks
[20:00] <wido> sagewk: are you sure the patch is OK? simply overriding "r"?
[20:00] <sagewk> yeah
[20:00] <sagewk> i'm just wondering if it's due to the async snap creation ioctl
[20:01] <wido> ok, building right now
[20:01] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[20:17] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[20:40] <wido> sagewk: I'm seeing that "cluster staying in a degraded state" again
[20:40] <wido> 2/1410387 degraded (0.000%)
[20:41] <sagewk> ok
[20:41] <wido> I have to go afk
[20:41] <sagewk> k
[20:41] <wido> but if you want to hunt it, go ahead
[20:41] <sagewk> will look now
[20:41] <sagewk> thanks
[20:41] <wido> ok, i'm afk!
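For reference, the "2/1410387 degraded (0.000%)" line above is the count of objects that currently have fewer copies than desired. A small watcher that polls the cluster and flags a persistently degraded state might look like this sketch; it only assumes the standard `ceph health` CLI call, and matching the word "degraded" in its output is an assumption about the output format, which varies between versions:

    import subprocess
    import time

    def cluster_health() -> str:
        # 'ceph health' is the standard status call; the output wording varies by version.
        out = subprocess.run(["ceph", "health"], capture_output=True, text=True)
        return out.stdout.strip()

    def watch_degraded(interval: int = 60, patience: int = 10) -> None:
        """Warn once the cluster has reported 'degraded' for `patience` polls in a row."""
        streak = 0
        while True:
            status = cluster_health()
            streak = streak + 1 if "degraded" in status else 0
            if streak >= patience:
                print("still degraded after %ds: %s" % (streak * interval, status))
            time.sleep(interval)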
[20:43] <wido> oh, sagewk, one thing I forgot: I've got 11 OSDs up
[20:43] <wido> one is down due to a btrfs bug I'm debugging with the btrfs devs
[20:43] <sagewk> 8?
[20:43] <wido> yes
[20:43] <sagewk> ok. what's the bug btw?
[20:44] <wido> not sure yet, a simple "touch foo; sync; rm foo" kernel panics the box
[20:44] <wido> hardware is OK, but it seems to be a corner case with the machine config
[20:44] <sagewk> ah ok
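The crash recipe wido quotes is just the shell one-liner "touch foo; sync; rm foo" run on the affected btrfs volume. An equivalent sketch in Python, handy for scripting repeated attempts, might be (the mount point is hypothetical):

    import os

    MOUNTPOINT = "/mnt/btrfs-test"  # hypothetical mount point of the affected volume
    path = os.path.join(MOUNTPOINT, "foo")

    # touch foo
    with open(path, "w"):
        pass

    # sync -- flush all filesystem buffers, as in the shell repro
    os.sync()

    # rm foo
    os.remove(path)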
[20:44] <wido> there seem to be some other OSDs bouncing/crashing now
[20:45] <wido> but I really need to go, tnx!
[20:45] <sagewk> yeah
[21:18] * cmccabe1 (~cmccabe@dsl081-243-128.sfo1.dsl.speakeasy.net) has joined #ceph
[21:46] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)
[22:01] * allsystemsarego (~allsystem@188.26.32.123) Quit (Quit: Leaving)
[22:33] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[22:37] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit ()

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.