#ceph IRC Log


IRC Log for 2010-08-03

Timestamps are in GMT/BST.

[1:19] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[1:19] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[1:48] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[4:39] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) has joined #ceph
[4:45] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) Quit (Quit: akhurana)
[4:45] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[5:07] * revstray (~rev@blue-labs.us) has joined #ceph
[5:10] <revstray> hello! I am going to be deploying Ceph in a test/dev environment; any tips?
[6:47] * Osso (osso@AMontsouris-755-1-10-232.w90-46.abo.wanadoo.fr) Quit (Quit: Osso)
[6:53] * f4m8_ is now known as f4m8
[7:46] * Jiaju (~jjzhang@222.126.194.154) Quit (Remote host closed the connection)
[7:49] * Jiaju (~jjzhang@222.126.194.154) has joined #ceph
[8:26] * allsystemsarego (~allsystem@188.26.32.97) has joined #ceph
[8:58] <iggy> revstray: stick around here and follow the mailing list at the very least
[13:54] * ghaskins_mobile (~ghaskins_@66-189-114-103.dhcp.oxfr.ma.charter.com) has joined #ceph
[14:01] * sassyn (~sassyn@62.219.154.151) has joined #ceph
[14:01] <sassyn> hi all
[15:23] * fzylogic (~fzylogic@dsl081-243-128.sfo1.dsl.speakeasy.net) Quit (Quit: fzylogic)
[15:32] * Osso (osso@AMontsouris-755-1-10-232.w90-46.abo.wanadoo.fr) has joined #ceph
[15:50] <wido> hi sassyn
[15:50] <wido> jantje: ?
[16:35] <jantje> wido: yes?
[16:43] <sassyn> hi wido
[16:43] <sassyn> can you please drop a comment on how stable ceph is?
[16:43] <sassyn> I want to use it with btrfs or ZFS
[16:45] <darkfader> btrfs and ceph are both experimental
[16:47] <monrad-65532> but you don't get from experimental to stable if nobody uses them :)
[17:40] * [1]sassyn (~sassyn@62.219.154.151) has joined #ceph
[17:44] * sassyn (~sassyn@62.219.154.151) Quit (Ping timeout: 480 seconds)
[17:44] * [1]sassyn is now known as sassyn
[17:45] <jantje> well, it would be nice to know how big the chances are of losing your data :-)
[17:59] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) has joined #ceph
[18:19] * akhurana (~ak2@c-98-232-30-233.hsd1.wa.comcast.net) Quit (Quit: akhurana)
[19:04] * fzylogic (~fzylogic@dsl081-243-128.sfo1.dsl.speakeasy.net) has joined #ceph
[19:05] * fzylogic (~fzylogic@dsl081-243-128.sfo1.dsl.speakeasy.net) Quit ()
[19:06] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[19:06] * fzylogic (~fzylogic@dsl081-243-128.sfo1.dsl.speakeasy.net) has joined #ceph
[19:11] * ajnelson (~ajnelson@host-240-22.pubnet.pdx.edu) has joined #ceph
[19:14] * ajnelson (~ajnelson@host-240-22.pubnet.pdx.edu) Quit ()
[20:14] * fred_ (~fred@212-235.1-85.cust.bluewin.ch) has joined #ceph
[20:14] <fred_> hi
[20:14] <fred_> yehudasa, you there ?
[20:15] <yehudasa> fred_: yep!
[20:16] <fred_> yehudasa, great, I just wanted to let you know that everything is fine now with qemu+rbd
[20:16] <yehudasa> great!
[20:17] <fred_> so have a nice day, bye
[20:17] * fred_ (~fred@212-235.1-85.cust.bluewin.ch) Quit ()
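For context, a minimal sketch of the "qemu+rbd" setup fred_ reports working: qemu's rbd block driver addresses images as rbd:pool/image. The pool name 'rbd', image name 'vm1', and the sizes below are assumptions for illustration, not details from the log.

    # create a 10 GB image in the (assumed) 'rbd' pool
    qemu-img create -f rbd rbd:rbd/vm1 10G

    # boot a guest directly from the RBD image
    qemu -m 1024 -drive format=rbd,file=rbd:rbd/vm1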
[21:27] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (Quit: Ex-Chat)
[21:27] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[22:32] <jantje> Hmm
[22:34] <jantje> Can Ceph be mapped onto parallel NFS somehow? I guess I could run a Ceph client on each OSD and export that. Anyone tried that before?
[22:34] <jantje> I need some kind of gateway for clients to access the ceph cluster
[22:35] <jantje> Right now I'm just thinking of ways to do it; I have no clue if it's even possible.
[22:49] <sagewk> jantje: it would be possible to create a mds that talks pnfs, but you would lose many of the scalability benefits of the current mds architecture
[22:50] <jantje> yea, I want to have nfs on top of ceph's architecture
[22:50] <jantje> just to provide (slower) access to the storage cluster
[22:53] <sagewk> you can also re-export a ceph mount via nfs, but it can suffer from ESTALE in some cases
[22:54] <jantje> why's that?
[22:54] <jantje> because the nfs mds caches the location while ceph moved it?
[23:03] <sagewk> ceph metadata partitioning/replication is dynamic; pnfs partitioning is (mostly?) static and non-replicated. with pnfs, individual dirs have a single mds; with ceph they can be fragmented/hashed across multiple nodes
[23:03] <sagewk> the ceph mds/client protocol has fine-grained coherent leasing/locking; (p)nfs has coarse delegations and/or weak consistency and timeouts
[23:08] <jantje> ok, thanks
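A minimal sketch of the re-export approach sagewk describes: mount the filesystem with the Ceph kernel client on a gateway host, then export that mount through the kernel NFS server. The monitor address, mount point, and export options are assumptions for illustration; an explicit fsid is needed because NFS cannot derive a stable one for the re-exported filesystem.

    # on the gateway host: mount cephfs with the kernel client
    # (monitor address is assumed)
    mount -t ceph 192.168.0.10:/ /mnt/ceph

    # /etc/exports -- export the mount to NFS clients
    /mnt/ceph  *(rw,fsid=1,no_subtree_check)

    # reload the export table
    exportfs -ra

As sagewk notes above, clients of such a gateway can still see ESTALE in some cases when ceph moves metadata underneath the NFS layer.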
[23:10] <jantje> sagewk: I'm wondering if there is any milestone for v1.0, because then I could 'sell' it to my boss :-)
[23:10] <jantje> (by milestone I mean a date)
[23:31] <jantje> nite!
[23:32] <gregaf> night

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.