#ceph IRC Log

IRC Log for 2011-04-09

Timestamps are in GMT/BST.

[1:06] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Read error: Operation timed out)
[1:10] * Administrator__ (~samsung@113.106.102.19) Quit (Ping timeout: 480 seconds)
[1:22] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[1:36] * gregaf1 (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[1:43] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[1:53] * greglap (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[2:05] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:05] * Administrator__ (~samsung@113.106.102.19) has joined #ceph
[2:15] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) Quit (Remote host closed the connection)
[2:36] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[3:00] * Psi-Jack_ (~psi-jack@71.43.83.180) has joined #ceph
[3:03] * Psi-Jack (~psi-jack@yggdrasil.hostdruids.com) Quit (Ping timeout: 480 seconds)
[3:09] * hijacker (~hijacker@213.91.163.5) Quit (Read error: Connection reset by peer)
[3:09] * Psi-Jack_ (~psi-jack@71.43.83.180) Quit (Ping timeout: 480 seconds)
[3:09] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[3:16] * cmccabe (~cmccabe@208.80.64.121) has left #ceph
[3:16] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[3:26] * hubertchang (~hubertcha@220.181.133.22) has joined #ceph
[3:51] * hubertchang (~hubertcha@220.181.133.22) Quit (Ping timeout: 480 seconds)
[4:39] * sjustlaptop (~sam@adsl-76-208-183-201.dsl.lsan03.sbcglobal.net) has joined #ceph
[4:54] * sjustlaptop (~sam@adsl-76-208-183-201.dsl.lsan03.sbcglobal.net) Quit (Quit: Leaving.)
[4:57] * sjustlaptop (~sam@adsl-76-208-183-201.dsl.lsan03.sbcglobal.net) has joined #ceph
[5:01] * sjustlaptop (~sam@adsl-76-208-183-201.dsl.lsan03.sbcglobal.net) has left #ceph
[5:26] * Psi-Jack (~psi-jack@71.43.83.180) has joined #ceph
[5:36] * hubertchang (~hubertcha@221.218.170.174) has joined #ceph
[5:36] <hubertchang> I want to use ceph filesystem as the enterprise git central repository. Does it make sense?
[5:42] * Administrator_ (~samsung@113.106.102.19) has joined #ceph
[5:43] <hubertchang> I want to use ceph filesystem as the enterprise git central repository. Does it make sense?
[5:48] * Administrator__ (~samsung@113.106.102.19) Quit (Ping timeout: 480 seconds)
[5:52] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[5:54] * hutchint (~hutchint@c-75-71-83-44.hsd1.co.comcast.net) has joined #ceph
[6:26] * hutchint (~hutchint@c-75-71-83-44.hsd1.co.comcast.net) Quit (Quit: Leaving)
[6:48] * Psi-Jack_ (~psi-jack@71.43.83.180) has joined #ceph
[6:51] * Psi-Jack (~psi-jack@71.43.83.180) Quit (Read error: Connection reset by peer)
[6:52] * Psi-Jack- (~psi-jack@71.43.83.180) has joined #ceph
[6:53] * Psi-Jack_ (~psi-jack@71.43.83.180) Quit (Read error: Connection reset by peer)
[6:54] * Psi-Jack_ (~psi-jack@71.43.83.180) has joined #ceph
[6:55] * Psi-Jack- (~psi-jack@71.43.83.180) Quit (Read error: Connection reset by peer)
[6:55] * Psi-Jack_ (~psi-jack@71.43.83.180) Quit (Read error: No route to host)
[7:01] * Psi-Jack_ (~psi-jack@71.43.83.180) has joined #ceph
[7:07] * MK_FG (~MK_FG@188.226.51.71) Quit (Quit: o//)
[7:10] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[7:13] * darkfader (~floh@188.40.175.2) Quit (Remote host closed the connection)
[7:13] * darkfader (~floh@188.40.175.2) has joined #ceph
[7:19] * Psi-Jack_ (~psi-jack@71.43.83.180) Quit (Ping timeout: 480 seconds)
[7:43] * Psi-Jack (~psi-jack@71.43.83.180) has joined #ceph
[7:58] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) has joined #ceph
[8:33] * allsystemsarego (~allsystem@188.25.132.41) has joined #ceph
[8:47] * Psi-Jack (~psi-jack@71.43.83.180) Quit (Read error: Connection reset by peer)
[8:48] * Psi-Jack (~psi-jack@71.43.83.180) has joined #ceph
[9:17] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) Quit (Quit: neurodrone)
[9:59] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[10:11] * Psi-Jack_ (~psi-jack@mwg-w03.infosec.fedex.com) has joined #ceph
[10:11] <Psi-Jack_> I'm curious.
[10:12] <Psi-Jack_> I have 4 hypervisor servers and two dedicated storage servers, and I'm wondering if it's reasonably possible to run ceph on the dedicated storage servers, with chunk servers also housed on the virtual servers' host OS, providing spanned access to all of the combined storage as one distributed system.
[10:41] <hubertchang> WE CAN NOT GET ANY HELP IN THIS IRC CHANNEL.
[10:42] * hubertchang (~hubertcha@221.218.170.174) has left #ceph
[10:44] <darkfader> wow
[10:46] <darkfader> when will people finally understand that the thing with defined sla and response time is not called irc
[10:47] <darkfader> Psi-Jack_: look at the qemu rbd module
[10:47] <darkfader> that will say technically possible
[10:48] * Psi-Jack_ nods.
[10:48] <darkfader> and then look at the word experimental. again. again. and again. :)
[10:48] <Psi-Jack_> Heh yeah.
[10:48] <Psi-Jack_> I just tried out sheepdog, which isn't quiiiiite production ready either.
[10:48] <Psi-Jack_> But its big problem is, ALL DATA GONE, just because one node went down.
[10:49] <darkfader> oh, ok
[10:49] <darkfader> then definitely go with ceph instead
[10:50] <darkfader> why the layer with the chunk servers?
[10:51] <Psi-Jack_> Well, live migration to any server, versus the limitation of just 2.
[10:51] <Psi-Jack_> 2 being handled by LV's being DRBD replicated.
[10:52] <darkfader> i still dont (yet) follow.
[10:52] <darkfader> you wanna have dedicated storage servers
[10:53] <darkfader> the vm hosts can put the vms directly on them
[10:53] <Psi-Jack_> I want to have distributed storage. ;)
[10:53] <darkfader> yeah so you'd have 5 storage servers and 3 vm hosts for example
[10:53] <darkfader> all data with like 2 copies
[10:54] <Psi-Jack_> In my case, it's more like, 6 storage servers, 4 of which also run virtual machines with rbd
[10:54] <darkfader> but when i get it right you also wanna run a "chunk server" vm or multiple of them?
[10:54] <Psi-Jack_> So, yeah, mixed.
[10:56] <darkfader> does the idea look anything like this:
[10:56] <darkfader> http://deranfangvomende.files.wordpress.com/2011/03/lab.png?w=467&h=209
[10:56] <darkfader> (which we never built because gluster is too chaotic to rely on imo)
[10:56] <Psi-Jack_> Yes, actually, very similar.
[10:57] <Psi-Jack_> Yeah, I don't and will not use gluster.
[10:57] <Psi-Jack_> Such a pain in the ass.
[10:57] <darkfader> hrhr nevermind, but at least i now somewhat understand
[10:57] * Psi-Jack_ nods.
[10:58] <Psi-Jack_> I don't mind, right now, setting up an experimental rbd-ceph set of vm's to test out how well, or not so well, this concept could work out with ceph
[10:58] <Psi-Jack_> I just made a huge mistake relying on sheepdog for my virtualized firewalls, and it all fell apart when one server fell off the cluster due to an APC.
[10:58] <darkfader> in general i think running an extra layer of VMs that use "local disks" and then make rbd that is used at times by their own hosts for other vms is ugly
[10:58] <darkfader> it looks elegant but it has ETOOMANYLAYERS
[10:58] <Psi-Jack_> Why? It's local storage, so wouldn't use up AS MUCH bandwidth. ;)
[10:59] <Psi-Jack_> Done right, it could be allocated really well. ;)
[10:59] <darkfader> Psi-Jack_: because it would be much saner to run ceph in the vm hosts instead
[10:59] <Psi-Jack_> I'm talking more setting up ceph to run the rootfs OF these vm's.
[10:59] <Psi-Jack_> So they can live migrate to any of the 4 primary servers in the virtual cluster.
[11:00] <Psi-Jack_> Based on utilization, availability, node attributes, etc.
[11:02] <darkfader> so: 1. dedicated storage box has a fs /xx which is part of a ceph fs, right?
[11:03] <darkfader> 2. vm boots directly off that using ceph kernel client (fs for / being ceph) or using qemu-rbd (fs for / being i.e. ext3)
[11:03] <Psi-Jack_> Here's my idea: 2 dedicated storage servers, using RAID10 on both. 4 servers running additional chunk servers along with virtual machines, further expanding the available space.
[11:03] <Psi-Jack_> Right, qemu-rbd. ;)
[11:04] <Psi-Jack_> Allowing ext3/4 etc to be used with it, presumably automagically incrementing the ceph data for that, as a block device.
[11:04] <darkfader> well ok i think i got it
[11:04] <Psi-Jack_> So, partition may be 12 GB, but actual usage being 1, maybe 2 GB, growing up to 12GB. ;)
[11:04] <darkfader> not as complicated as i was afraid
[11:04] <Psi-Jack_> heh
[11:04] <Psi-Jack_> No, not really.
[11:05] <Psi-Jack_> It's just distribution of storage, including storage on the vm hosts themselves, that's all.
[11:05] <darkfader> yeah that will work
[11:05] <Psi-Jack_> The HDD's of the vm servers aren't even being utilized much.
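
The mixed layout discussed above (OSDs on both the dedicated storage boxes and on the VM hosts themselves) comes down to listing all the daemons in ceph.conf. A minimal sketch in the 2011-era mkcephfs style, with every hostname, address and path invented purely for illustration:

    [mon.a]
            host = store1                   ; hypothetical dedicated storage server
            mon addr = 192.168.0.10:6789
    [mds.a]
            host = store1
    [osd]
            osd data = /data/osd$id         ; hypothetical data path on each box
            osd journal = /data/osd$id/journal
    [osd.0]
            host = store1                   ; dedicated storage server
    [osd.1]
            host = store2                   ; dedicated storage server
    [osd.2]
            host = vmhost1                  ; VM host contributing its local disk
    [osd.3]
            host = vmhost2

The config itself does not care whether an OSD lives on a storage box or a VM host; the layering concern darkfader raises is operational, not a configuration limit.
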
[11:06] <darkfader> wait till one of the gurus is awake and ask them to recommend a version thats quite stable
[11:06] <Psi-Jack_> I realize ceph is experimental. I'm curious though. Are there fsck-like tools for it?
[11:06] * Psi-Jack_ nods.
[11:06] <darkfader> do test and you'll see
[11:06] <Psi-Jack_> Probably some time when I get home. ;)
[11:06] <stefanha> Have any of you tried forcing data placement with ceph? For example, if you have a VM running on host 1 and data currently isn't replicated there, you want to move it there.
[11:06] <Psi-Jack_> Problems I've had with ceph so far is that compiling it for opensuse is a pain in the arse.
[11:07] <darkfader> Psi-Jack_: i use ubuntu vms for testing it i think
[11:07] <darkfader> you need a very new kernel anyway
[11:07] <Psi-Jack_> Umm. No thanks. I won't use ubuntu.
[11:07] <darkfader> hehe
[11:07] <Psi-Jack_> OpenSUSE has 2.6.37
[11:07] <darkfader> oh ok :)
[11:07] <Psi-Jack_> And qemu 0.14.0, which has ceph support built-in for rbd.
[11:08] <Psi-Jack_> Hence why I was curious to test it out.
[11:08] <Psi-Jack_> Support is there, it's just ceph packages are not. ;)
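
With qemu 0.14 the rbd driver is built in as a block format, so the test Psi-Jack_ has in mind needs no guest-side changes. A rough sketch of the usage, assuming a pool named "rbd" and an image name "fw01" chosen only for illustration (exact binary name and flags vary by qemu build):

    # create a thin-provisioned 12 GB image inside the cluster
    qemu-img create -f rbd rbd:rbd/fw01 12G

    # boot a guest straight off it
    qemu-kvm -m 1024 -drive format=rbd,file=rbd:rbd/fw01,if=virtio

The image only consumes space as the guest writes to it, which is the behaviour described above: a 12 GB partition with only 1-2 GB actually stored.
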
[11:08] <darkfader> stefanha: i think you'd have to change your crushmap for it. i tried and failed getting it right :)
[11:09] <darkfader> but it is possible
[11:09] <stefanha> darkfader: :) yeah, I was thinking in that direction too and wanted to see if it worked for you
[11:11] <darkfader> that's one for the people who do more stuff with ceph regularly, i think
[11:12] <darkfader> i'm too lazy right now
[11:12] <darkfader> right now == 6 months
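
For reference, the crushmap round trip darkfader is describing is a decompile/edit/recompile cycle; the commands below are the standard ones, though writing rules that actually pin replicas onto a particular host (stefanha's case) is the part that takes experimenting:

    # dump the current map and turn it into editable text
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt

    # edit crush.txt: adjust buckets, weights, or add a rule targeting the host

    # compile it back and inject it into the cluster
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new
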
[11:18] * darkfaded (~floh@188.40.175.2) has joined #ceph
[11:18] * darkfaded (~floh@188.40.175.2) Quit ()
[12:45] * Administrator_ is now known as huangsan
[12:51] * Dantman (~dantman@S0106001eec4a8147.vs.shawcable.net) Quit (Remote host closed the connection)
[13:08] * Psi-Jack_ (~psi-jack@mwg-w03.infosec.fedex.com) Quit (Quit: Leaving)
[13:15] * Dantman (~dantman@S0106001eec4a8147.vs.shawcable.net) has joined #ceph
[14:43] * alexxy (~alexxy@79.173.81.171) Quit (Ping timeout: 480 seconds)
[14:43] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[15:11] * alexxy[home] (~alexxy@79.173.81.171) has joined #ceph
[15:14] * alexxy (~alexxy@79.173.81.171) Quit (Ping timeout: 480 seconds)
[15:21] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[15:24] * alexxy[home] (~alexxy@79.173.81.171) Quit (Ping timeout: 480 seconds)
[15:26] * Psi-Jack_ (~psi-jack@71.43.83.180) has joined #ceph
[15:33] * Psi-Jack (~psi-jack@71.43.83.180) Quit (Ping timeout: 480 seconds)
[15:47] * sakib (~sakib@95.158.0.249) has joined #ceph
[16:21] * huangsan (~samsung@113.106.102.19) Quit (Quit: Leaving)
[17:05] * julienhuang (~julienhua@82.67.204.235) has joined #ceph
[17:13] * neurodrone (~neurodron@cpe-76-180-162-12.buffalo.res.rr.com) has joined #ceph
[17:20] * votz (~votz@dhcp0020.grt.resnet.group.UPENN.EDU) Quit (Quit: Leaving)
[19:45] * sakib (~sakib@95.158.0.249) Quit (Quit: leaving)
[21:55] * julienhuang (~julienhua@82.67.204.235) Quit (Ping timeout: 480 seconds)
[22:51] * allsystemsarego (~allsystem@188.25.132.41) Quit (Quit: Leaving)
[23:45] * lxo (~aoliva@201.82.32.113) Quit (Ping timeout: 480 seconds)
[23:45] * lxo (~aoliva@201.82.32.113) has joined #ceph
[23:46] * verwilst (~verwilst@dD576FAAE.access.telenet.be) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.