#ceph IRC Log


IRC Log for 2011-07-18

Timestamps are in GMT/BST.

[0:07] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[0:45] * verwilst (~verwilst@dD576F220.access.telenet.be) Quit (Quit: Ex-Chat)
[2:17] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[2:24] * pruby (~tim@leibniz.catalyst.net.nz) Quit (Ping timeout: 480 seconds)
[2:29] * pruby (~tim@leibniz.catalyst.net.nz) has joined #ceph
[2:58] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[3:20] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) has joined #ceph
[3:20] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) Quit (Remote host closed the connection)
[5:28] * DanielFriesen (~dantman@S0106001731dfdb56.vs.shawcable.net) has joined #ceph
[5:35] * Dantman (~dantman@S0106001731dfdb56.vs.shawcable.net) Quit (Ping timeout: 480 seconds)
[7:37] * lx0 (~aoliva@186.214.52.99) has joined #ceph
[7:37] * lxo (~aoliva@09GAAFHKK.tor-irc.dnsbl.oftc.net) Quit (Read error: Connection reset by peer)
[8:15] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[8:55] * MK_FG (~MK_FG@188.226.51.71) Quit (Quit: o//)
[8:56] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[9:04] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[9:09] * foxhunt (~richard@109.109.115.145) has joined #ceph
[9:50] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[12:59] * Ameshk (~jorix72Tk@115-64-27-246.static.tpgi.com.au) has joined #ceph
[12:59] * Ameshk (~jorix72Tk@115-64-27-246.static.tpgi.com.au) Quit (Remote host closed the connection)
[14:33] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[14:40] * lx0 is now known as lxo
[14:43] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[14:43] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[14:49] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) has left #ceph
[14:55] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[15:55] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) Quit (Quit: Leaving)
[15:55] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[16:43] * phil_ (~quassel@chello080109010223.16.14.vie.surfer.at) has joined #ceph
[16:43] <phil_> hiho, is there a release date for a production-stable v1.0 yet?
[16:52] * greglap (~Adium@166.205.142.172) has joined #ceph
[17:35] <phil_> hello there, i'm currently planning a new storage server system and ceph looks like a very good candidate. It is for production, and as i see it, ceph is declared unstable. On the website, version 1.0, declared production-stable, is said to be released in 34 days - is that true?
[17:36] <greglap> phil_: 1.0 doesn't have a real release date; it's just been a while since it got pushed to "the future"
[17:36] <greglap> sorry
[17:37] <phil_> alright, thanks for the info
[17:38] <phil_> ah, one or two questions perhaps
[17:39] <greglap> I have to run, I'll be back in 20 minutes
[17:39] * greglap (~Adium@166.205.142.172) Quit (Quit: Leaving.)
[17:58] * rsharpe (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[17:58] <gregaf> back
[17:59] <phil_> wb
[18:00] * rsharpe (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[18:01] <phil_> is there an educated guess for a stable release date? and are you providing migration for the versions in between?
[18:01] * Tv (~Tv|work@ip-64-111-111-107.dreamhost.com) has joined #ceph
[18:02] <gregaf> migration Just Works, or should; right now the daemons aren't necessarily wire-compatible (although they usually are), so you need to shut them all down together, but they are disk-compatible
[18:02] <gregaf> the stable release will be When It's Done ;)
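
A minimal sketch of the shut-everything-down upgrade gregaf describes, assuming the stock /etc/init.d/ceph init script of that era, whose -a flag acts on every host named in ceph.conf:

    # stop all daemons (mon, mds, osd) on all hosts at once
    /etc/init.d/ceph -a stop
    # upgrade the ceph packages on every node, then restart everything together
    /etc/init.d/ceph -a start
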
[18:03] <gregaf> although when it's actually stable for you depends on what you're doing; some stuff is already pretty solid (just using RADOS) while other pieces like multiple MDSes aren't anywhere near ready
[18:03] <gregaf> actually, they're probably close to ready but there's no points for close
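
For the "just using RADOS" case gregaf calls solid, a short sketch with the rados CLI that ships with Ceph; the pool name data is one of the default pools, while myobject and the file names are only illustrative:

    rados lspools                          # list pools (data, metadata, rbd by default)
    rados -p data put myobject ./somefile  # store a local file as a RADOS object
    rados -p data get myobject ./copy      # read the object back into a file
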
[18:07] <phil_> well, i'm not that deep into it yet; i don't even know what MDS and RADOS are exactly. the final goal is a linux-based distributed parallel storage system available to win and mac clients
[18:07] <phil_> for roughly 30 TB of data, high throughput
[18:08] <gregaf> well you'd need something else to share the filesystem with Windows and Mac clients right now; there's an NFS gateway and some other folks are building a Samba front end
[18:08] <gregaf> but the native clients are only for Linux
[18:10] <phil_> i've read about the nfsreexport feature, yes
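
The NFS re-export phil_ mentions boils down to mounting Ceph with the Linux kernel client on one gateway box and exporting that mount over NFS; a sketch, where the monitor address, mount point, and keyring path are all assumptions:

    # on the gateway: mount the Ceph filesystem with the kernel client
    mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
    # /etc/exports: fsid= is required because the export is not backed by a block device
    /mnt/ceph  *(rw,no_subtree_check,fsid=100)
    # reload the export table
    exportfs -ra
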
[18:12] <phil_> the thing is, is "pretty solid" something a company can rely on?
[18:13] <phil_> so if version 1.0 is going to be there in, say, a couple of months, it is doable
[18:13] <gregaf> well we're launching a product based on RADOS and RGW Real Soon Now (we have hardware and are going through QA)
[18:13] <phil_> with additional backup
[18:14] <Tv> phil_: you are going to want to talk to bchrisman at some point ;)
[18:14] <gregaf> but the POSIX filesystem layer isn't as stable as that
[18:15] <phil_> not stable meaning corrupted data?
[18:15] <phil_> thx Tv
[18:16] <gregaf> usually it just crashes, but if you use some of the more advanced features you can lose data
[18:16] <gregaf> usually not beyond the point of a decent fsck tool to fix, but we haven't build one of those yet since they're complicated on a distributed fs
[18:18] <phil_> build one of what?
[18:18] <gregaf> *built
[18:18] <phil_> fsck tool?
[18:18] <gregaf> an fsck tool
[18:19] <phil_> so the option of fixing a possible failure is not there, ok, that's a KO criterion
[18:19] <gregaf> when we do get data loss reports it's generally because something happened to the metadata, not the data itself
[18:21] <phil_> well, a hard disk failure should also be coped with, so ceph is not a reliable option yet
[18:21] <gregaf> well, hard drive failures are dealt with, that's the whole point
[18:22] <gregaf> but yes, it sounds like Ceph is not mature enough yet for your needs
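
"Dealt with" here means RADOS keeps every object replicated across OSDs, so a dead disk is handled by marking its OSD out and letting the cluster re-replicate; a sketch with the ceph CLI, where OSD id 3 stands in for the failed disk:

    ceph osd out 3   # mark the failed disk's OSD out of the data distribution
    ceph -w          # watch the cluster re-replicate and return to a clean state
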
[18:23] <phil_> the metadata can be restored easily?
[18:23] <gregaf> not right now, it's theoretically simple but we've been focusing more on not having any failures
[18:24] <phil_> i see
[18:24] <phil_> well then, keep on, good luck
[18:24] <gregaf> you too
[18:33] * joshd (~joshd@ip-64-111-111-107.dreamhost.com) has joined #ceph
[18:41] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:44] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[18:46] * cmccabe (~cmccabe@69.170.166.146) has joined #ceph
[19:04] * sjust (~sam@ip-64-111-111-107.dreamhost.com) has joined #ceph
[20:43] * phil__ (~quassel@chello080109010223.16.14.vie.surfer.at) has joined #ceph
[20:48] * phil_ (~quassel@chello080109010223.16.14.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[21:14] * DanielFriesen (~dantman@S0106001731dfdb56.vs.shawcable.net) Quit (Remote host closed the connection)
[23:26] * Juul (~Juul@port80.ds1-vo.adsl.cybercity.dk) has joined #ceph
[23:57] * yoshi (~yoshi@KD027091032046.ppp-bb.dion.ne.jp) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.