#ceph IRC Log

IRC Log for 2012-05-09

Timestamps are in GMT/BST.

[0:42] * S0NiC (~john@p54A307DE.dip0.t-ipconnect.de) has joined #ceph
[0:49] * S0NiC_ (~john@p54A3067A.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[0:57] * sjustlaptop (~sam@m9d2736d0.tmodns.net) has joined #ceph
[1:10] * BManojlovic (~steki@212.200.243.232) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:31] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[1:48] * sjustlaptop (~sam@m9d2736d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[1:49] * stass (stas@ssh.deglitch.com) has joined #ceph
[1:52] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[1:53] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) Quit ()
[2:25] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[2:34] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[2:35] * yoshi (~yoshi@p3167-ipngn3601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:42] * lofejndif (~lsqavnbok@9YYAAFYCP.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:57] * aliguori (~anthony@cpe-70-123-136-102.austin.res.rr.com) Quit (Read error: Operation timed out)
[3:19] * lofejndif (~lsqavnbok@9YYAAFYCP.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[3:20] * aa (~aa@r186-52-162-16.dialup.adsl.anteldata.net.uy) has joined #ceph
[3:43] * aa (~aa@r186-52-162-16.dialup.adsl.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[3:47] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[5:14] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[5:35] * Ryan_Lane (~Adium@216.38.130.166) Quit (Quit: Leaving.)
[6:07] * chutzpah (~chutz@216.174.109.254) Quit (Quit: Leaving)
[6:26] * cattelan is now known as cattelan_away
[6:53] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[6:57] * Theuni (~Theuni@46.253.59.219) has joined #ceph
[7:11] * Theuni (~Theuni@46.253.59.219) Quit (Quit: Leaving.)
[7:13] * aa (~aa@r186-52-172-104.dialup.adsl.anteldata.net.uy) has joined #ceph
[7:41] * aa (~aa@r186-52-172-104.dialup.adsl.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[8:12] * Theuni (~Theuni@195.62.106.100) has joined #ceph
[8:12] * Theuni (~Theuni@195.62.106.100) has left #ceph
[8:14] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[9:01] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:36] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:55] * jdsm (~Adium@bl14-43-231.dsl.telepac.pt) Quit (Quit: Leaving.)
[9:58] * sjustlaptop (~sam@m9d2736d0.tmodns.net) has joined #ceph
[9:58] * sjustlaptop (~sam@m9d2736d0.tmodns.net) Quit ()
[10:03] <S0NiC> hi
[10:03] <S0NiC> is someone here who is responsible for http://ceph.com/?
[10:12] <S0NiC> last week there was different documentation on the website, is it still possible to get that version?
[10:15] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:40] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[10:48] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[10:52] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[10:52] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[11:26] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[11:28] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[11:39] * alexxy (~alexxy@79.173.81.171) Quit (Ping timeout: 480 seconds)
[11:39] * yoshi (~yoshi@p3167-ipngn3601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:03] * Qten (~Q@ppp59-167-157-24.static.internode.on.net) has joined #ceph
[12:24] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[12:37] * joshd (~jdurgin@wsip-70-165-201-203.lv.lv.cox.net) has joined #ceph
[12:37] * joshd (~jdurgin@wsip-70-165-201-203.lv.lv.cox.net) has left #ceph
[12:40] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[14:45] * Azrael (~azrael@terra.negativeblue.com) has left #ceph
[15:04] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[15:10] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[15:23] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[15:23] * UnixDev (~user@c-98-242-186-177.hsd1.fl.comcast.net) Quit (Read error: Connection reset by peer)
[15:23] * benny (~benny@81-64-199-82.rev.numericable.fr) has joined #ceph
[15:23] <benny> Hi all
[15:23] <benny> has ceph got a stable version?
[15:23] <benny> or is it still in dev?
[15:24] <benny> hmm
[15:24] <benny> i just found your blog
[15:24] <benny> forget about my question :)
[15:24] * benny (~benny@81-64-199-82.rev.numericable.fr) Quit ()
[15:30] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) Quit (Ping timeout: 480 seconds)
[15:34] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) has joined #ceph
[15:43] * lofejndif (~lsqavnbok@09GAAFOE9.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:52] * lofejndif (~lsqavnbok@09GAAFOE9.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[16:36] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[16:56] * dennisj (~chatzilla@p5DCF6BD9.dip.t-dialin.net) has joined #ceph
[17:02] <dennisj> i just had a pretty severe hang that required me to reset the machine
[17:02] <dennisj> i did a mkfs.ext4 on an rbd image
[17:04] <dennisj> unfortunately the cluster itself is running in two vm's on the same machine so it's not clear whether it was a client-side or server-side problem
[17:04] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[17:04] <dennisj> the vm's were frozen and I couldn't even kill them anymore
[17:05] <dennisj> "ps ax" on the client -> hang
[17:09] <dennisj> i'm wondering if this has something to do with the fact that the client had an outdated ceph.conf that was missing two of the four osd's of the cluster
[17:37] <filoo_absynth> yes
[17:37] <filoo_absynth> it does
[17:37] <filoo_absynth> which version?
[17:41] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:41] <dennisj> 0.46
[17:45] <filoo_absynth> sounds like an issue we are seeing too. VMs hang if an OSD is down
[17:50] <dennisj> yeah, i'm not sure exactly what is going on in that situation, but my client was kind of usable in a shell while most other apps seemed to freeze the moment i tried to work with them (e.g. firefox)
[17:50] * steki-BLAH (~steki@212.200.243.232) has joined #ceph
[17:50] <dennisj> as did the "ps ax"
[17:50] <dennisj> looked like some kind of fairly central lock that was blocked
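
A minimal sketch, assuming a kernel rbd client and the default 'rbd' pool, of the sequence dennisj describes above; the image name and device path are illustrative:

    # create a 1 GB test image in the default 'rbd' pool
    rbd create hangtest --size 1024
    # map it through the kernel rbd client; the device node may differ
    rbd map hangtest
    # make a filesystem on the mapped device (the step that reportedly hung)
    mkfs.ext4 /dev/rbd0
    # clean up afterwards
    rbd unmap /dev/rbd0
    rbd rm hangtest
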
[18:04] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[18:12] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[18:29] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[18:39] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) has joined #ceph
[18:46] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[18:59] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[19:08] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[19:09] * Ryan_Lane (~Adium@216.38.130.166) has joined #ceph
[19:13] <nhm> good morning all
[19:22] * Oliver1 (~oliver1@ip-78-94-238-206.unitymediagroup.de) has joined #ceph
[19:28] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) Quit (Quit: LarsFronius)
[19:34] * joshd (~jdurgin@wsip-70-165-201-203.lv.lv.cox.net) has joined #ceph
[19:49] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[19:58] * Ryan_Lane1 (~Adium@216.38.130.166) has joined #ceph
[19:58] * Ryan_Lane (~Adium@216.38.130.166) Quit (Read error: Connection reset by peer)
[20:09] * joshd (~jdurgin@wsip-70-165-201-203.lv.lv.cox.net) Quit (Quit: Leaving.)
[20:16] * elder (~elder@aon.hq.newdream.net) has joined #ceph
[20:17] * cattelan_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Quit: Terminated with extreme prejudice - dircproxy 1.2.0)
[20:30] * Oliver1 (~oliver1@ip-78-94-238-206.unitymediagroup.de) Quit (Quit: Leaving.)
[20:32] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) has joined #ceph
[20:37] * jlogan_ (~chatzilla@2600:c00:3010:1:3401:eed2:6946:d8ee) has joined #ceph
[20:38] * jlogan_ is now known as jlogan
[20:45] <darkfader> dennisj: for the record... i know hanging ps from nfs hardmounts. the linux ps is too "feature-heavy" and walks the file handles in /proc/pid something and all those will block on kernel io
[20:45] <darkfader> it's an idiocy
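
As a rough illustration of the workaround darkfader alludes to: when ps hangs because some process is stuck on a dead mount, one can read only /proc/<pid>/stat, which is generated from kernel task state rather than from the process's open files. A hedged shell sketch:

    # print pid, command name and state (D = uninterruptible sleep)
    # without asking ps to inspect every process's files
    awk '{ print $1, $2, $3 }' /proc/[0-9]*/stat 2>/dev/null
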
[21:11] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[21:13] * lxo (~aoliva@83TAAFOGG.tor-irc.dnsbl.oftc.net) Quit (Read error: Connection reset by peer)
[21:29] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:37] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[21:39] <jlogan> Looking for distro suggestions.
[21:39] <jlogan> I have debian sid, but cannot get openstack to start instances
[21:40] <jlogan> I would like to have ceph and openstack together. Any suggestions?
[21:43] <darkfader> yeah, use opennebula instead of openstack *hides*
[21:43] * lofejndif (~lsqavnbok@1RDAABODN.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:45] <jlogan> I won't turn down the idea
[21:45] <jlogan> why do you like that project better?
[21:46] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Quit: Konversation terminated!)
[21:46] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[21:46] <darkfader> jlogan: it's simpler and it has a larger user base. from a chat in here i know it's somewhere in what orionvm.com.au uses
[21:47] <darkfader> and they happen to run the fastest (iowise) iaas cloud on the planet
[21:47] <darkfader> me personally, no real world experience in running prod clouds
[21:47] <darkfader> but i trust them
[21:48] <darkfader> let me try to rephrase
[21:49] <darkfader> "it's been useable / working for a longer period and i consider it more a "stable" kind of software. one has to write storage "drivers" for it to use something like ceph, but that process is documented and has been done a few times for other (but less capable that ceph) distributed fs
[21:49] <darkfader> there's plenty of docs and a active user base that is (main difference goes here!) really running clouds on it
[21:50] <darkfader> openstack (imo) is mostly about throwing all big players marketing money in one basket
[21:51] <darkfader> it would be cool if some openstack user speaks up against that now and then i'll gladly revise my opinion hehe
[21:52] * Ryan_Lane1 shrugs
[21:52] * Ryan_Lane1 is now known as Ryan_Lane
[21:52] <darkfader> Ryan_Lane: do you run something on openstack?
[21:52] <darkfader> or just dont care for either?
[21:52] <Ryan_Lane> https://labsconsole.wikimedia.org/wiki/Main_Page
[21:52] <Ryan_Lane> I found opennebula to kind of suck
[21:52] <Ryan_Lane> if you want something really stable go for ganeti
[21:53] <Ryan_Lane> I'm using openstack and like it, but it's still a really young project
[21:53] <darkfader> Ryan_Lane: what points did suck for you?
[21:53] <Ryan_Lane> it didn't reliably start instances
[21:53] <darkfader> and do you like ganeti better or openstack (stability aside?)
[21:53] <Ryan_Lane> it scheduled them poorly
[21:53] <Ryan_Lane> I prefer openstack
[21:53] <darkfader> interesting
[21:53] <Ryan_Lane> my coworker prefers ganeti ;)
[21:54] <darkfader> you're the first i've talked to who has live experience with all of them
[21:54] <Ryan_Lane> I like openstack because of the multi-tenancy
[21:54] <Ryan_Lane> and the api
[21:54] <darkfader> orionvm had ditched all others for one
[21:54] <darkfader> One even
[21:55] <jlogan> What I'm looking for is some management framework to sit between my hypervisor and Ceph. Initially it's for a group of admins (my group), but eventually I would like the dev and qa teams to be able to manage their own instances.
[21:56] <Ryan_Lane> well, with openstack you can just set it to use rbd, I'd imagine the others have similar solutions
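
As an illustration of the rbd route Ryan_Lane mentions, an rbd-backed virtio disk in a libvirt guest definition looks roughly like the sketch below; the pool/image name and monitor address are made-up examples, and with cephx authentication enabled an <auth> element referencing a libvirt secret would also be needed:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/instance-disk'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
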
[21:56] <darkfader> i think most can do that, and One is also multi-tenant these days (virtual dcs), but consider that just an update. the scheduling issues he mentioned would be suckish
[21:56] * Ryan_Lane goes away
[21:57] <darkfader> Ryan_Lane: you could write a rados "driver", i know there is one for moosefs (please: never use for real data!)
[21:57] <darkfader> write as in replace the right lines
[21:57] <jlogan> @Ryan, what distro are you running openstack on?
[21:57] <darkfader> Ryan_Lane: how long did it take you to bring up openstack?
[22:03] * danieagle (~Daniel@177.43.213.15) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[22:28] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Quit: Konversation terminated!)
[22:29] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[22:29] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[22:30] <Ryan_Lane> jlogan: ubuntu
[22:30] <Ryan_Lane> darkfader: I've been in closed beta since the cactus release
[22:30] <jlogan> 12.04?
[22:31] <Ryan_Lane> I was in alpha on the bexar release
[22:31] <Ryan_Lane> jlogan: will be at some point
[22:31] <Ryan_Lane> I'm on lucid, using the diablo release of nova
[22:31] <jlogan> If I want to set up something today, which version would you use?
[22:44] * joao (~JL@aon.hq.newdream.net) has joined #ceph
[22:54] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Read error: Connection reset by peer)
[23:04] * trollboy (~matt@wsip-98-173-208-26.sb.sd.cox.net) has joined #ceph
[23:07] <trollboy> so I'm looking for some more info on ceph, how it scales, redundancy, what happens when I lose a server, etc... any resources on that?
[23:09] <joao> we have docs on the site, but I believe there's still heavy work in progress on that
[23:10] <joao> http://ceph.com/docs/master/
[23:12] <trollboy> yeah under resources the first link was to his thesis, and that 404'd
[23:12] <trollboy> hmmm
[23:12] <trollboy> How about I explain what I'm wanting to do, and you guys can give me your thoughts on whether or not ceph would be a good fit?
[23:12] <joao> can you point me to the 404'd url?
[23:12] <trollboy> sure one sec
[23:13] <joao> trollboy, most of the team is still at interop, or in transit to the office, so it might be hard to get answers today
[23:13] <joao> I will help in any way I can though
[23:13] <trollboy> That's best actually
[23:13] <trollboy> I'm looking for a good oss solution well supported by a community
[23:14] <trollboy> so the opinions of those with @inktank.com addresses or paychecks... (no offense) wouldn't carry as much weight
[23:14] <trollboy> bear in mind I would still have to sell this to TheBoss(tm)
[23:15] <trollboy> http://ceph.com/resources/ <--- top link to "Ceph: Reliable, Scalable, and High-Performance Distributed Storage" under publications
[23:15] <trollboy> that's the 404
[23:16] <joao> trollboy, you can actually find it here: http://ceph.newdream.net/papers/weil-thesis.pdf
[23:17] <joao> but thanks for the catch
[23:17] <trollboy> so.. we're looking to build a distributed cdn-esque fileserver system... where webservers and external clients can communicate with it via api calls, and it in turn manages files, permissions, etc
[23:17] <trollboy> so storage and access between nodes becomes an issue I hope to resolve with ceph
[23:18] <elder> trollboy, I don't understand why Inktank people's opinions wouldn't carry weight in your mind.
[23:19] <trollboy> elder: they would, but not as much.. it's kinda like asking a mcdonald's employee where to get the best cars
[23:19] <trollboy> my major concern is what happens if fs2 (from fs1 - fs10) dies, and what happens if fs1-5 are in NYC and 6-10 are in LA
[23:19] <trollboy> elder: I'm just looking for a good, mature, community supported solution ;-)
[23:19] <elder> OK, I get that. But we won't try to sell you on ceph if it's not the right answer for what you're doing.
[23:20] <elder> We are actively trying to cultivate a solid development community for ceph (outside Inktank).
[23:20] <elder> At the moment, though, the majority of those doing the development are employees.
[23:20] <trollboy> and like I said I still have to sell the concept to TheBoss(tm) so "3 employees told me it's awesome" doesn't sound as convincing as "3 people with nothing to gain told me it's awesome"
[23:21] <trollboy> so, that said... that's what we're trying to build out
[23:21] <trollboy> thoughts if any?
[23:21] <elder> I don't believe that Ceph is very well supported (at this point at least) for wide replication, if that's what you mean.
[23:22] <trollboy> by wide replication you mean across datacenters I take it?
[23:22] <elder> But there are others not online right now that could tell you the story on that more authoritatively.
[23:22] <elder> Yes.
[23:23] <elder> Are you saying that fs..10 are basically mirrors of each other?
[23:23] <trollboy> yes
[23:23] <elder> But geographically distributed?
[23:23] <trollboy> no
[23:23] <trollboy> we'll have groups of 5(ish) or so fs servers per datacenter
[23:23] <jlogan> When I was looking at geographically distributed filesystems I only saw XtreemFS supporting multiple sites.
[23:24] <trollboy> on the fs servers we'll have http running behind a load balancer listening for rest/soap/etc
[23:24] <jlogan> Ceph, from my reading, is 1 DC only.
[23:24] <trollboy> so I would have to handle cross DC synching myself
[23:24] <trollboy> good to know
[23:26] <elder> So if I understand your description trollboy (and I'm not sure I do):
[23:27] <elder> - you will have several geographically distributed data centers
[23:27] <elder> - each will serve on the order of 5 distinct filesystems
[23:27] <elder> - the file systems hosted at each data center are basically all replicas of those at the other centers
[23:28] <elder> - you want to provide a file-like interface to outside clients (with permissions, ownership, etc.)
[23:28] <trollboy> first thing: yes
[23:28] <trollboy> second thing: that will vary, we're looking for a solution where we can add more boxes as needed
[23:29] <elder> - you need to somehow reconcile file access between data centers for consistency or something?
[23:29] <trollboy> third thing: maybe, maybe not.. pure replicas (mirrors) seems like wasted space to me
[23:29] <elder> - you would like an open source solution with good community support
[23:30] <elder> I don't quite understand exactly what problem you're hoping ceph might solve with respect to "storage and access between nodes"
[23:30] <trollboy> the "file-like" interface aspect will be handled by the api.. the cephfs (if we go with that) will never be world readable...
[23:32] <trollboy> well I'd be open to mirroring NYC & LA
[23:32] <trollboy> so long as I could pop FS1-FSx in each DC and store files
[23:34] <elder> So you will provide a "file-like" API. Behind that API you want Ceph (for the sake of discussion) to provide the file storage backing? And does Ceph implement the access control also?
[23:35] <trollboy> nope access control is in the api
[23:35] <elder> So the only thing you'd need from the backing store is an object store, basically. Named objects of arbitrary size.
[23:35] <trollboy> I just need to ensure that FS1 - FSx can each open/update/delete the same file in a timely manner
[23:35] <trollboy> Like I said earlier, the project is cdn-esque
[23:36] <elder> But no need for a particular directory hierarchy, nor ownership, nor permission enforcement.
[23:36] <trollboy> correct
[23:36] <elder> Are FS1-FSx then distinct filesystems, or copies of the same data?
[23:36] <trollboy> as all read/writes will be coming from the api
[23:37] <trollboy> FS1-FSx == the cluster of file servers
[23:37] <elder> Oh. And when you say that, is a "cluster" co-located in the same data center, or are you referring to nodes implementing this service via different data centers?
[23:37] <trollboy> initially the plan was to tag files to a db that referenced which physical box they were on.. w/ 1 day cache for local (to the dc) files, and 5 day cache for external (to the dc) files
[23:38] <jeffp> hi, i just noticed an error in the librados docs. http://ceph.com/docs/master/api/librados/ says that rados_getxattr returns 0 on success but it actually returns the length of the xattr
[23:38] <trollboy> but then that seemed complex, so we started looking at the option of a distributed filesystem to avoid that syncing madness
[23:39] <trollboy> elder: my definition of "cluster" so far as your question is concerned, is open...
[23:39] <Ryan_Lane> jlogan: if you needed to set one up today, use precise with the essex release (it's packaged and in the ubuntu repo)
[23:39] <Ryan_Lane> jlogan: (and maintained by canonical)
[23:39] <jlogan> Thanks. I think I'll give that a try.
[23:39] <elder> jeffp I just opened a bug with your comment. Please update it if you have any additional helpful information. http://tracker.newdream.net/issues/2391
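
To illustrate two points raised above, elder's note that RADOS gives you named objects of arbitrary size and jeffp's observation that rados_getxattr() returns the xattr's length rather than 0 on success, here is a minimal, hedged C sketch against the librados C API; the pool name, object name, xattr and ceph.conf path are illustrative:

    #include <stdio.h>
    #include <string.h>
    #include <rados/librados.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        char buf[64];
        int ret;

        /* connect as client.admin using the local ceph.conf */
        rados_create(&cluster, NULL);
        rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
        if (rados_connect(cluster) < 0) {
            fprintf(stderr, "could not connect to cluster\n");
            return 1;
        }

        /* named objects of arbitrary size live in a pool */
        rados_ioctx_create(cluster, "data", &io);
        rados_write(io, "myobject", "hello rados", strlen("hello rados"), 0);

        /* attach an xattr, then read it back */
        rados_setxattr(io, "myobject", "owner", "api", strlen("api"));
        ret = rados_getxattr(io, "myobject", "owner", buf, sizeof(buf));
        if (ret >= 0)
            printf("owner = %.*s (length %d)\n", ret, buf, ret); /* ret is the value's length, not 0 */

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }
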
[23:40] <trollboy> if we can make something awesome that just works on all nodes in all dc's, then w00t.. if we need to mirror two datacenters manually via rsync or some such, then that's ok too
[23:40] <trollboy> to clarify: in the first case, a cluster would be all fileservers; in the second, 1 cluster per dc w/ rsync or some such voodoo
[23:42] <elder> OK, well I think I have a clearer picture of what you're talking about now... That being said, I'm getting a little out of my depth. I'm not a very good person to suggest what's right to do in a ceph deployment sense. However...
[23:43] <elder> Ceph could allow you to maintain multiple servers serving the same file system data. The inter-data center stuff would (for now anyway) be better done using a separate mechanism.
[23:43] <trollboy> I do love a good "However...", that's how I got a date to the prom
[23:43] <trollboy> danke
[23:44] <trollboy> You may see me again!
[23:44] <elder> There might be other configurations (using RADOS as a back end, but not necessarily the ceph filesystem)
[23:44] <trollboy> thanks very much for all your help!
[23:44] * trollboy (~matt@wsip-98-173-208-26.sb.sd.cox.net) Quit (Quit: Leaving)
[23:47] * dennisj_ (~chatzilla@p5DCF6885.dip.t-dialin.net) has joined #ceph
[23:52] * steki-BLAH (~steki@212.200.243.232) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:53] * dennisj (~chatzilla@p5DCF6BD9.dip.t-dialin.net) Quit (Ping timeout: 480 seconds)
[23:56] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) Quit (Read error: Operation timed out)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.