#ceph IRC Log

IRC Log for 2011-09-05

Timestamps are in GMT/BST.

[1:51] * greglap (~Adium@166.205.142.3) has joined #ceph
[2:00] * yoshi (~yoshi@p10166-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:53] * greglap (~Adium@166.205.142.3) Quit (Ping timeout: 480 seconds)
[3:57] * lxo (~aoliva@9KCAAAVZY.tor-irc.dnsbl.oftc.net) has joined #ceph
[6:10] * lxo (~aoliva@9KCAAAVZY.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[6:24] * lxo (~aoliva@9YYAABBMX.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:48] * yoshi_ (~yoshi@p10166-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[8:48] * yoshi (~yoshi@p10166-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit (Read error: Connection reset by peer)
[9:03] * darktim (~andre@ticket1.nine.ch) has joined #ceph
[9:58] * lxo (~aoliva@9YYAABBMX.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[10:05] * yoshi (~yoshi@p5039-ipngn601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[10:07] * lxo (~aoliva@19NAADKN5.tor-irc.dnsbl.oftc.net) has joined #ceph
[10:11] * yoshi_ (~yoshi@p10166-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit (Ping timeout: 480 seconds)
[10:42] * votz (~votz@pool-72-94-171-89.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[11:48] * yoshi (~yoshi@p5039-ipngn601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:49] * yoshi (~yoshi@p5039-ipngn601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[15:38] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[16:13] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[16:21] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[16:24] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[19:20] * RupS (~rups@panoramix.m0z.net) has joined #ceph
[19:26] <RupS> hello all
[19:26] <RupS> getting excited reading the wiki ...
[19:38] <RupS> got a question though... you mount the monitor, but actual transfer of data does go from the OSD's?
[19:41] <ajm> correct, directly from client -> relevant osd
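
A minimal sketch of the data path ajm describes, using the python-rados bindings; the conf path and the pool name 'data' are assumptions. Connecting talks to a monitor only to fetch the cluster maps; the write itself then goes straight from the client to the responsible OSD.

    import rados

    # monitor addresses and keys come from the conf file (path is an assumption)
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()                             # handshake with a monitor, fetch maps
    ioctx = cluster.open_ioctx('data')            # 'data' pool is an assumption
    ioctx.write_full('hello-object', b'payload')  # client -> OSD, no monitor in the path
    ioctx.close()
    cluster.shutdown()
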
[19:44] <RupS> k, we're very interested, also in a large deployment, but the sw not being production ready is of course a problem :)
[19:46] <jantje_> RupS: I've done some experimenting with ceph a while ago, and the only advice I can give you: try it, emulate your environment, try to break it, and submit a bug report
[19:47] <jantje_> if you have a large setup, then ceph will probably hit its first stable release when you're done testing ;-)
[19:49] <RupS> I'm looking for the guy to talk to I guess... I'd like to give that a try, but perhaps in a bit more formal setting
[19:50] <RupS> I'm looking at Isilon, which is nice, but expensive
[19:50] <RupS> Looking at building something ourselves, but that takes too much time
[19:51] <RupS> I can survive for 6 months, for example, with a FreeNAS setup and some replication, but the goal is to have some sort of scale-out storage environment, and to scale out, it must be stable :)
[20:23] * greglap (~Adium@166.205.142.3) has joined #ceph
[20:32] <RupS> one of the disadvantages of solutions like Gluster or Ibrix (HP x9000) is that they all use a file-based approach (for re-distribution of data, for example), not a block-based one. How does ceph do this?
[20:39] <greglap> RupS: not sure why you think file-based is a weakness, but Ceph splits up files into 4MB chunks and distributes those
[20:39] <greglap> (btw, there will be more people around tomorrow - Labor Day today!)
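
A minimal sketch of the striping arithmetic greglap describes: with the default 4 MiB object size, a byte offset in a file maps to a chunk index and an offset within that chunk by simple division, so no per-chunk lookup table is needed. The <ino>.<chunkno> object naming below is illustrative.

    OBJECT_SIZE = 4 * 1024 * 1024  # default object/stripe size: 4 MiB

    def chunk_for_offset(inode, offset):
        index = offset // OBJECT_SIZE       # which 4 MiB chunk holds this byte
        within = offset % OBJECT_SIZE       # offset inside that chunk
        name = "%x.%08x" % (inode, index)   # illustrative <ino>.<chunkno> name
        return name, within

    print(chunk_for_offset(0x10000000000, 13 * 1024 * 1024))
    # ('10000000000.00000003', 1048576): byte 13 MiB falls in the 4th chunk
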
[20:40] <RupS> greglap: in environments with petabytes of storage and billions of files, metadata becomes an issue? although that's my experience in more traditional fs's :)
[20:41] <RupS> I might be wrong and this is no issue in these solutions, but I have zero experience there :)
[20:41] <greglap> RupS: ah, depends on whether you have to store metadata for each block - Ceph doesn't; I'm not sure about the others
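
A toy sketch of why Ceph keeps no per-chunk location metadata: an object's placement is computed from its name rather than stored in a table. Real Ceph hashes object names into placement groups and runs CRUSH over the cluster map; the plain MD5 hash and round-robin OSD pick here are stand-ins for that.

    import hashlib

    def place(object_name, pg_num, osds, replicas=2):
        h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
        pg = h % pg_num  # placement group, derived from the name alone
        # stand-in for CRUSH: pick `replicas` distinct OSDs deterministically
        return pg, [osds[(pg + i) % len(osds)] for i in range(replicas)]

    print(place('10000000000.00000003', 128, ['osd.0', 'osd.1', 'osd.2', 'osd.3']))
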
[20:41] <RupS> greglap: ah, ok, I'll wait till tomorrow then :)
[20:51] <RupS> greglap: do you know how ceph sustains a node failure? is it somewhat comparable to network raid 5 or something?
[20:51] <greglap> RupS: no, it just does straight-up replication of data chunks
[20:51] <RupS> ow, nm, I'll start reading the btrfs docs first :)
[20:52] <RupS> ok, so redundancy is 2N?
[20:52] <greglap> well, you can set the replication level where you'd like it - generally 2 or 3 is a good level, though
[20:55] <RupS> was trying to do some calculations; to be on the safe side I need 100% extra on top of the net capacity, I guess...
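
RupS's estimate holds for straight replication: raw capacity scales linearly with the per-pool replication factor (settable per pool, e.g. with "ceph osd pool set data size 2"), so size 2 means exactly 100% overhead over net capacity, with no parity scheme reducing it. A minimal sketch of the arithmetic:

    def raw_needed(net_tb, replicas):
        return net_tb * replicas  # straight replication, no parity

    for size in (2, 3):
        print('size=%d: %d TB raw for 100 TB net (%d%% overhead)'
              % (size, raw_needed(100, size), (size - 1) * 100))
    # size=2: 200 TB raw for 100 TB net (100% overhead)
    # size=3: 300 TB raw for 100 TB net (200% overhead)
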
[21:01] * greglap (~Adium@166.205.142.3) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.