#ceph IRC Log

IRC Log for 2010-12-26

Timestamps are in GMT/BST.

[0:06] * MarkN (~nathan@59.167.240.178) has joined #ceph
[3:58] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[4:06] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) has joined #ceph
[4:20] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[4:26] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) has joined #ceph
[4:48] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:19] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) has joined #ceph
[6:26] * ijuz__ (~ijuz@p4FFF5612.dip.t-dialin.net) has joined #ceph
[6:34] * ijuz_ (~ijuz@p4FFF7767.dip.t-dialin.net) Quit (Ping timeout: 480 seconds)
[9:01] * Meths_ (rift@91.106.201.165) has joined #ceph
[9:04] * Meths (rift@91.106.159.179) Quit (Read error: Operation timed out)
[9:48] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[10:49] * allsystemsarego (~allsystem@188.25.129.139) has joined #ceph
[13:56] * Meths_ is now known as Meths
[16:22] * monrad-51468 (~mmk@domitian.tdx.dk) Quit (Quit: bla)
[16:23] * monrad-51468 (~mmk@domitian.tdx.dk) has joined #ceph
[16:26] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) has joined #ceph
[16:27] * DeHackEd (~dehacked@dhe.execulink.com) Quit (Ping timeout: 480 seconds)
[18:44] * allsystemsarego (~allsystem@188.25.129.139) Quit (Quit: Leaving)
[19:25] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[19:42] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) has joined #ceph
[20:06] * DeHackEd (~dehacked@dhe.execulink.com) has joined #ceph
[20:07] <DeHackEd> so, is it stable yet?
[20:07] <DeHackEd> :)
[20:35] * metabaronen (~metabaron@h21n8-m-rg-gr100.ias.bredband.telia.com) has joined #ceph
[20:41] <metabaronen> Hi all, I'm trying to get a feeling for whether Ceph is the right distributed filesystem for us. How well suited is Ceph to a multisite WAN network connected with high bandwidth but high latency?
[20:48] <metabaronen> we have over 300 ms to some sites
[20:50] <metabaronen> so it's important that it can choose the closest replica but also be able to effectively transfer data over a high-latency link
[20:51] <ijuz__> as far as i know ceph has atm no way to choose the object nodes where the actual data should be stored (to have copies closer to the consumers)
[20:51] <ijuz__> is there some new article about ceph today or something?
[21:32] <metabaronen> not as far as I know... it was just a hope.
[21:34] <metabaronen> so ceph is more in the same sphere as lustre, but with more clever redundancy I guess
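
For context on the exchange above: data placement in Ceph is governed by the CRUSH map, which can spread replicas across sites with static rules, but placement is deterministic per object rather than driven by where the reading client sits, which is consistent with ijuz__'s point. A hypothetical fragment in the crushtool text format (the site names, weights, and bucket IDs below are invented purely for illustration) might look like this:

    # devices (hypothetical four-OSD cluster)
    device 0 osd.0
    device 1 osd.1
    device 2 osd.2
    device 3 osd.3

    # hierarchy types
    type 0 osd
    type 1 host
    type 2 site
    type 3 root

    # buckets: two invented sites, two OSDs each
    site site-a {
        id -2
        alg straw
        hash 0          # rjenkins1
        item osd.0 weight 1.000
        item osd.1 weight 1.000
    }
    site site-b {
        id -3
        alg straw
        hash 0
        item osd.2 weight 1.000
        item osd.3 weight 1.000
    }
    root default {
        id -1
        alg straw
        hash 0
        item site-a weight 2.000
        item site-b weight 2.000
    }

    # rule: place one replica in each site
    rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type site
        step emit
    }

A rule like this would keep one replica per site, but clients still read from the primary OSD for an object regardless of proximity, so it does not by itself give the latency-aware reads metabaronen was asking about.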
[21:36] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[21:41] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) has joined #ceph
[23:29] * metabaronen (~metabaron@h21n8-m-rg-gr100.ias.bredband.telia.com) Quit (Quit: Ex-Chat)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.