#ceph IRC Log

IRC Log for 2012-08-12

Timestamps are in GMT/BST.

[0:17] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[0:55] * sagelap (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[0:57] * Cube (~Adium@c-38-80-203-198.rw.zetabroadband.com) has joined #ceph
[1:00] * loicd (~loic@brln-4d0cc179.pool.mediaWays.net) Quit (Ping timeout: 480 seconds)
[1:03] * danieagle (~Daniel@177.43.213.15) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[1:10] * ferai (~quassel@quassel.jefferai.org) has joined #ceph
[1:11] * jefferai (~quassel@quassel.jefferai.org) Quit (Read error: Operation timed out)
[1:12] * al (d@niel.cx) Quit (Ping timeout: 480 seconds)
[1:14] * al (d@fourrooms.bandsal.at) has joined #ceph
[1:18] * JJ (~JJ@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[1:24] * Cube (~Adium@c-38-80-203-198.rw.zetabroadband.com) Quit (Quit: Leaving.)
[1:54] * steki-BLAH (~steki@bojanka.net) Quit (Quit: Ja odoh a vi sta 'ocete...)
[2:00] * chuanyu_ (chuanyu@linux3.cs.nctu.edu.tw) Quit (Remote host closed the connection)
[2:09] * stally (~stally@83TAAH1W0.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:10] * sagelap (~sage@cpe-76-94-40-34.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:13] * stally (~stally@83TAAH1W0.tor-irc.dnsbl.oftc.net) has left #ceph
[2:18] * sagelap (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[2:29] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[2:36] * stally (~stally@83TAAH1W0.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:36] * Anticime1 (anticimex@netforce.csbnet.se) has joined #ceph
[2:37] * Anticimex (anticimex@netforce.csbnet.se) Quit (Read error: Connection reset by peer)
[2:37] * stally (~stally@83TAAH1W0.tor-irc.dnsbl.oftc.net) has left #ceph
[2:39] * stally (~stally@83TAAH1W0.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:39] <- *stally* ok
[2:39] * stally (~stally@83TAAH1W0.tor-irc.dnsbl.oftc.net) has left #ceph
[3:05] * Cube (~Adium@c-67-182-188-153.hsd1.ca.comcast.net) has joined #ceph
[3:18] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:20] * Cube (~Adium@c-67-182-188-153.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[3:57] * lofejndif (~lsqavnbok@28IAAGQH9.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[4:22] * Cube (~Adium@c-67-182-188-153.hsd1.ca.comcast.net) has joined #ceph
[4:29] * Cube (~Adium@c-67-182-188-153.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[4:42] * sagelap (~sage@cpe-76-94-40-34.socal.res.rr.com) Quit (Read error: Operation timed out)
[4:51] * CristianDM (~CristianD@186.153.252.64) has joined #ceph
[4:51] <CristianDM> Hi
[4:52] <CristianDM> I have an issue with "ceph status"
[4:52] <CristianDM> This returns "unrecognized subsystem"
[4:52] <CristianDM> Same as http://tracker.newdream.net/issues/2721
[4:52] <CristianDM> But I can't fix it
[4:59] * sagelap (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[5:14] * CristianDM (~CristianD@186.153.252.64) Quit ()
[5:25] * JJ (~JJ@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[5:25] * JJ (~JJ@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[5:28] * JJ (~JJ@cpe-76-175-17-226.socal.res.rr.com) Quit ()
[5:37] * deepsa (~deepsa@117.203.19.216) Quit (Ping timeout: 480 seconds)
[5:44] * exec (~v@109.232.144.194) has joined #ceph
[6:04] * deepsa (~deepsa@115.240.93.128) has joined #ceph
[6:52] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[6:53] * danieagle (~Daniel@177.43.213.15) Quit ()
[8:22] * deepsa_ (~deepsa@117.203.16.162) has joined #ceph
[8:24] * deepsa (~deepsa@115.240.93.128) Quit (Ping timeout: 480 seconds)
[8:24] * deepsa_ is now known as deepsa
[9:25] * loicd (~loic@brln-4d0cc179.pool.mediaWays.net) has joined #ceph
[10:03] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:50] * EmilienM (~EmilienM@ede67-1-81-56-23-241.fbx.proxad.net) has joined #ceph
[12:27] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[13:44] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[13:46] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[13:47] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[13:47] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[14:33] * kibbu (claudio@owned.ethz.ch) has joined #ceph
[15:28] * mech422 (~mech422@65.19.151.114) has joined #ceph
[15:29] <mech422> Morning all ..
[15:29] <mech422> I finally have a lil time to dig into ceph, and I had some basic newbie questions...
[15:30] <mech422> Will there be an 'in-place' upgrade path from 'argonaut' to 'v1.0' when it's released ?
[15:30] <mech422> anyone tried Argonaut on debian Squeeze/Wheezy ?
[15:31] <mech422> and finally, anyone tried rados in Argonaut with Xen 4 (in debian preferably...) ?
[15:32] <mech422> oh - and one last one - can I install all the ceph components on 1 box, and then add boxes (for replication/HA/capacity) once I've got my configs right ?
[15:54] * BManojlovic (~steki@212.200.240.248) has joined #ceph
[16:16] <Deuns> mech422: I use Argonaut on Debian
[16:16] <mech422> oh cool :-) Is it working well ?
[16:16] <Deuns> pretty well so far :)
[16:16] <Deuns> I'm still in a test environment though
[16:17] <mech422> Great :-) Do you happen to know if there will be 'in place' migration from Argonaut to 'v1.0' ?
[16:17] <Deuns> no idea
[16:17] <Deuns> I hope so :p
[16:17] <mech422> Heh..I am as well...
[16:18] <Deuns> I also installed everything on one box and then added a second one with osd+mds+mon
[16:18] <Deuns> adding a server is very easy
[16:18] <mech422> ahh - great, so that is possible ?
[16:18] <mech422> I wasn't sure if the quorum thing would mess it up with 1 box
[16:18] <mech422> have you happened to try RADOS with Xen ?
[16:18] <Deuns> I don't think it is the preferred setup but it works
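For that kind of single-box start (one monitor plus OSD and MDS on the same host), a quorum of one monitor is valid, so the cluster will form on a single machine. A quick user-space sanity check is possible through the librados Python bindings; the sketch below is only illustrative and assumes the python-ceph package is installed and that /etc/ceph/ceph.conf plus the client.admin keyring are readable.

    # Minimal reachability check for a freshly built single-node cluster.
    # Assumes the librados Python bindings ("python-ceph" on Debian) and a
    # readable /etc/ceph/ceph.conf plus client.admin keyring.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()   # fails if the lone monitor is unreachable

    print("fsid:  %s" % cluster.get_fsid())
    print("pools: %s" % ', '.join(cluster.list_pools()))

    stats = cluster.get_cluster_stats()   # kb, kb_used, kb_avail, num_objects
    print("used:  %d KB of %d KB" % (stats['kb_used'], stats['kb']))

    cluster.shutdown()

The quorum concern is only about monitor counts: one monitor works, two is actually more fragile than one, and three or more (an odd number) is the usual recommendation once more boxes are added.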
[16:19] <Deuns> nope, I'm using cephfs for now
[16:19] <mech422> oh ? I'd read that was still sorta flaky ? is it giving you any problems ?
[16:19] <mech422> ( I need to replace a moosefs cluster eventually...)
[16:20] <Deuns> I read that too but so far, so good :-)
[16:20] <mech422> Are you using btrfs as the backing store ?
[16:20] <Deuns> yep
[16:21] <mech422> Hehe..sounds like I should just copy your configs :-P You're doing pretty much everything I want to :-P
[16:21] <Deuns> unfortunately, I get some traceback from btrfs from time to time
[16:21] <mech422> dang
[16:22] <Deuns> I couldn't tell if it breaks anything though :-/
[16:22] <mech422> One Squeeze ? or Wheezy ?
[16:22] <Deuns> my ceph cluster is still up and running
[16:22] <mech422> err.. s/One/On/
[16:22] <Deuns> wheezy
[16:22] <Deuns> both boxes are wheezy
[16:23] <Deuns> the only "real" problem I have with ceph is the lack of up-to-date documentation
[16:24] <mech422> hehe
[16:25] <mech422> I have a hard time telling what's still supported...
[16:25] <mech422> like eucalyptus integration (v2 only?) or xen driver
[16:26] <mech422> seems like the kvm stuff is still current, but I kinda got the feeling the xen bit was orphaned
[16:27] <Deuns> it looks like kvm is hotter these days than xen
[16:28] <mech422> yeah - it's been the darling for a while now... hopefully Xen will get more love in the near future
[16:28] <mech422> ubuntu seems to have picked it up again, and debian is pushing 'cluster computing' as a main use case
[16:46] * EmilienM (~EmilienM@ede67-1-81-56-23-241.fbx.proxad.net) Quit (Quit: Leaving...)
[17:08] * gretchen (ccae6223@ircip4.mibbit.com) has joined #ceph
[17:09] * gretchen (ccae6223@ircip4.mibbit.com) Quit ()
[17:25] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[17:27] * loicd1 (~loic@brln-4dbab807.pool.mediaWays.net) has joined #ceph
[17:32] * loicd (~loic@brln-4d0cc179.pool.mediaWays.net) Quit (Ping timeout: 480 seconds)
[18:38] <iggy> mech422: a few things... Don't try to use cephfs or rbd kernel driver on the same box that has OSDs (can't tell if that's something you were planning or not)... don't run 2 MDSes (1 or 3 or more)... MDSes aren't required for rados/rbd (only for the FS)... There are a few things that can't be changed after things are setup, so you'll want to set things up like you are planning a bigger deployment from the start (PGs in a pool, etc.)
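On the "PGs in a pool" point: a pool's pg_num was fixed at creation time in that era, so it is worth sizing it for the cluster you expect to grow into rather than the one box you start with. A commonly cited rule of thumb is on the order of 100 placement groups per OSD, divided by the replica count and rounded up to a power of two; the helper below is just a sketch of that arithmetic, and the numbers in the example call are made up.

    # Rough pg_num sizing using the common "~100 PGs per OSD / replica
    # count, rounded up to a power of two" rule of thumb. The numbers in
    # the example call are illustrative only.

    def suggested_pg_num(num_osds, replicas, pgs_per_osd=100):
        """Return a power-of-two pg_num for a new pool."""
        raw = num_osds * pgs_per_osd / float(replicas)
        pg_num = 1
        while pg_num < raw:
            pg_num *= 2
        return pg_num

    # Planning for, say, 6 OSDs with 2x replication even though only one
    # box exists today:
    print(suggested_pg_num(num_osds=6, replicas=2))   # -> 512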
[18:39] <mech422> oh - the rbd/osd thing would be a problem - thx!
[18:39] <iggy> mech422: that's just the kernel driver
[18:40] <iggy> I don't know how Xen has it integrated, but for kvm, there's librbd support built into qemu/kvm... and that is legal
[18:40] <mech422> yeah - I'm not sure the status of the xen backend...there was something on the wiki...let me look
[18:42] <iggy> afaik, xen is working on upstreaming into qemu and using that, so you'd get qemu's built in librbd support for free (at least for hvm guests... dunno about pv guests)
[18:42] <mech422> http://ceph.com/wiki/Xen-rbd
[18:43] <iggy> yeah, that's a no go
[18:43] <mech422> blah
[18:43] <mech422> Hmm...guess I can throw up a couple of VMs just for playing with
[18:44] <iggy> heh, that's the only kind of ceph deployment I've done so far
[18:44] <mech422> though for production, I wanted to replace our MooseFS setup - we're using it with the xen 'file:' driver
[18:44] <mech422> so I'm gonna need dedicated storage nodes as well as the vm hosts...
[18:44] <iggy> btw, the rbd-on-osd problem is a general one of kernel space processes talking to user space processes
[18:45] <mech422> oh?
[18:46] <iggy> yeah, in low memory situations, the kernel sends a request to the osd, but there isn't enough mem left for the osd to allocate buffers to reply back -> deadlock
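By contrast, the librbd path that qemu/kvm uses stays entirely in user space, so it avoids the kernel-client deadlock described above and is fine on a box that also runs OSDs. The same user-space path is exposed through the Python rbd bindings; a rough sketch follows, with a placeholder image name and assuming the default 'rbd' pool and the python-ceph bindings installed.

    # Create and touch an RBD image purely through librados/librbd (the
    # same user-space path qemu/kvm uses); no kernel rbd driver involved.
    # The image name and pool are placeholders.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')                    # default rbd pool

    rbd.RBD().create(ioctx, 'testimage', 1024 * 1024 * 1024)   # 1 GB image

    image = rbd.Image(ioctx, 'testimage')
    image.write(b'hello from user space', 0)             # write at offset 0
    print("image size: %d bytes" % image.stat()['size'])
    image.close()

    ioctx.close()
    cluster.shutdown()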
[18:47] <tnt> running the OSD inside a DomU should work though.
[18:47] <Tobarja> perhaps that's why i keep toasting my test boxes
[18:47] <mech422> Hmm..
[18:47] <iggy> it would almost certainly be hit in osd recovery situations because the OSDs use a fair amount of memory then
[18:48] <mech422> Thats a possibility - putting the osd's in a domU
[18:48] <mech422> err..no, wait...
[18:48] <mech422> that would be pointless, as I'd have to use 'file://' images to back the domU, right ?
[18:49] <tnt> or phy:// ... yes.
[18:49] <mech422> yeah..Hmm...
[18:49] <iggy> it's not insurmountable (i.e. samba and some other projects that have been around for a while have solved it in various dirty ways), but it's not an easy fix and even harder to test to make sure you get all the corner cases
[18:50] <mech422> ehh - prolly be easier just to get a couple of atom boxes or something for the osd's
[18:50] <tnt> Is there a way to estimate how much memory is needed during recovery ? and what happens if there is just not enough ?
[18:50] <mech422> we have a very small setup
[18:51] <iggy> I believe the devs have said atom boxes won't work for OSDs (could be wrong)
[18:51] <iggy> tnt: I don't know, but it seems like most people are doing 1G of ram per disk in an osd
[18:53] <mech422> iggy: really? any idea why ?
[18:54] <mech422> (we don't need blazing throughput or anything...)
[18:54] <iggy> I don't know actually
[19:03] <Tobarja> is there a samba plugin to export cephfs volumes?
[19:03] <iggy> no
[19:06] * BManojlovic (~steki@212.200.240.248) Quit (Read error: Operation timed out)
[19:06] * BManojlovic (~steki@bojanka.net) has joined #ceph
[19:58] * steki-BLAH (~steki@212.200.240.248) has joined #ceph
[20:02] * BManojlovic (~steki@bojanka.net) Quit (Ping timeout: 480 seconds)
[20:04] <darkfader> someone recently brought up the fraunhofer fs, I read a lot more about it (again) today. my pain points were that it is not really open source and that it lacks a lot of HA features. it's more like a (well-deserved) lustre replacement
[20:06] <iggy> we've tested it at work
[20:07] <iggy> it's more geared toward performance than availability
[20:08] <darkfader> iggy: I liked that it has ib->ethernet failover and such
[20:08] <darkfader> but it's useless for me
[20:08] <darkfader> last time someone mentioned it I had just forgotten what I found problematic, so I wanted to report back for the archive :)
[20:14] <Deuns> do you know how ceph compares to gluster ?
[20:16] * mech422 (~mech422@65.19.151.114) has left #ceph
[20:19] * dabeowulf (dabeowulf@free.blinkenshell.org) has joined #ceph
[20:27] * fiddyspence (~fiddyspen@94-192-234-112.zone6.bethere.co.uk) has joined #ceph
[20:29] * nhm (~nh@184-97-251-210.mpls.qwest.net) Quit (Read error: No route to host)
[21:46] * kingleecher (~kingleech@204-174-98-35.dhcp470.dsl.ucc-net.ca) has joined #ceph
[21:50] * joshd (~jdurgin@2602:306:c5db:310:1e6f:65ff:feaa:beb7) has joined #ceph
[21:52] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[21:55] * fiddyspence (~fiddyspen@94-192-234-112.zone6.bethere.co.uk) Quit (Quit: Leaving.)
[22:39] * danieagle (~Daniel@177.43.213.15) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[23:00] <joshd> iggy: Tobarja: afaik this was the latest samba plugin: http://samba.2283325.n4.nabble.com/An-updated-Samba-VFS-for-Ceph-using-the-libceph-user-space-interface-td3756101.html
[23:19] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:39] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[23:50] <iggy> interesting, last I heard the accepted method was just re-exporting a ceph mount via samba
[23:59] * s[X] (~sX]@ppp59-167-154-113.static.internode.on.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.