#ceph IRC Log


IRC Log for 2013-01-22

Timestamps are in GMT/BST.

[0:14] * mattbenjamin (~matt@aa2.linuxbox.com) Quit (Quit: Leaving.)
[0:19] * aliguori (~anthony@cpe-70-112-157-151.austin.res.rr.com) Quit (Remote host closed the connection)
[0:21] * jlogan2 (~Thunderbi@72.5.59.176) Quit (Ping timeout: 480 seconds)
[0:23] * aliguori (~anthony@cpe-70-112-157-151.austin.res.rr.com) has joined #ceph
[0:50] * PerlStalker (~PerlStalk@72.166.192.70) Quit (Quit: ...)
[0:53] * vata (~vata@2607:fad8:4:6:9002:b4b0:f356:6335) Quit (Quit: Leaving.)
[0:56] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:57] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[1:04] * ninkotech_ (~duplo@89.177.137.236) Quit (Read error: Connection reset by peer)
[1:04] * cdblack (86868b4c@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[1:04] * tnt (~tnt@204.203-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:05] * ninkotech (~duplo@89.177.137.236) Quit (Read error: Connection reset by peer)
[1:05] * ninkotech (~duplo@89.177.137.236) has joined #ceph
[1:05] * ninkotech_ (~duplo@89.177.137.236) has joined #ceph
[1:08] * BManojlovic (~steki@85.222.185.201) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:13] * mattbenjamin (~matt@75.45.228.196) has joined #ceph
[1:16] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[1:16] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[1:19] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[1:20] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[1:20] * loicd (~loic@2a01:e35:2eba:db10:d451:536e:aa1f:4c2a) has joined #ceph
[1:21] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Operation timed out)
[1:22] * winston-d (~zhiteng@pgdmzpr01-ext.png.intel.com) has joined #ceph
[1:23] * Cube (~Cube@184-231-224-123.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[1:25] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[1:27] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[1:31] * aliguori (~anthony@cpe-70-112-157-151.austin.res.rr.com) Quit (Remote host closed the connection)
[1:49] * ScOut3R (~ScOut3R@dsl51B61EED.pool.t-online.hu) Quit (Ping timeout: 480 seconds)
[1:51] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[2:08] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Read error: Connection reset by peer)
[2:08] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[2:11] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:20] * LeaChim (~LeaChim@b0faf18a.bb.sky.com) Quit (Ping timeout: 480 seconds)
[2:22] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[2:26] * gaveen (~gaveen@112.135.148.237) Quit (Remote host closed the connection)
[2:31] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[2:37] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[2:37] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[3:08] * winston-d (~zhiteng@pgdmzpr01-ext.png.intel.com) Quit (Remote host closed the connection)
[3:15] * mattbenjamin (~matt@75.45.228.196) Quit (Quit: Leaving.)
[3:19] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[3:21] * winston-d (~zhiteng@pgdmzpr01-ext.png.intel.com) has joined #ceph
[3:33] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[4:44] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[4:55] * thelan_ (~thelan@paris.servme.fr) has joined #ceph
[4:55] * thelan (~thelan@paris.servme.fr) Quit (Read error: Connection reset by peer)
[5:43] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:52] * winston-d (~zhiteng@pgdmzpr01-ext.png.intel.com) Quit (Remote host closed the connection)
[5:57] * dosaboy (~gizmo@12.231.120.253) Quit (Quit: Leaving.)
[6:00] * yoshi (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[6:07] * winston-d (~zhiteng@pgdmzpr01-ext.png.intel.com) has joined #ceph
[7:07] * loicd (~loic@2a01:e35:2eba:db10:d451:536e:aa1f:4c2a) Quit (Quit: Leaving.)
[7:07] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:15] * Cube (~Cube@70-12-106-38.pools.spcsdns.net) has joined #ceph
[7:49] * dewan (721f0482@ircip2.mibbit.com) has joined #ceph
[7:50] * dewan (721f0482@ircip2.mibbit.com) Quit ()
[7:51] * tnt (~tnt@204.203-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[8:01] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) has joined #ceph
[8:02] * sleinen1 (~Adium@2001:620:0:26:7965:67e0:5210:dc49) has joined #ceph
[8:07] * dewan (721f0482@ircip3.mibbit.com) has joined #ceph
[8:08] * dewan (721f0482@ircip3.mibbit.com) Quit ()
[8:09] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:10] * loicd1 (~loic@2a01:e35:2eba:db10:d451:536e:aa1f:4c2a) has joined #ceph
[8:10] * loicd (~loic@magenta.dachary.org) Quit (Read error: Connection reset by peer)
[8:11] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has joined #ceph
[8:16] * xdeller (~xdeller@broadband-77-37-224-84.nationalcablenetworks.ru) has joined #ceph
[8:17] * low (~low@188.165.111.2) has joined #ceph
[8:17] * sleinen1 (~Adium@2001:620:0:26:7965:67e0:5210:dc49) Quit (Quit: Leaving.)
[8:18] * Cube (~Cube@70-12-106-38.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[8:19] * loicd1 (~loic@2a01:e35:2eba:db10:d451:536e:aa1f:4c2a) Quit (Quit: Leaving.)
[8:34] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[8:35] * schlitzer|work (~schlitzer@109.75.189.45) Quit (Quit: Leaving)
[8:41] * schlitzer|work (~schlitzer@109.75.189.45) has joined #ceph
[8:54] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[8:54] * wogri (~wolf@nix.wogri.at) has joined #ceph
[8:55] <wogri> hello. i'm new to this channel. will i be flamed if i ask a ceph-crushmap-question? it's like support, not development...
[8:58] <madkiss> w00t
[8:58] <madkiss> an austrian Ceph user
[8:58] <wogri> indeed. and a promoter.
[8:58] <madkiss> welcome to the club then
[8:58] <wogri> thx
[9:02] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[9:09] * sleinen (~Adium@130.59.94.54) has joined #ceph
[9:10] * sleinen1 (~Adium@2001:620:0:25:591e:4591:a589:525e) has joined #ceph
[9:12] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[9:12] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:15] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has joined #ceph
[9:17] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:17] * sleinen (~Adium@130.59.94.54) Quit (Ping timeout: 480 seconds)
[9:28] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[9:30] * tnt (~tnt@204.203-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[9:31] <absynth_47215> morning
[9:37] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[9:38] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) Quit (Quit: Say What?)
[9:39] * loicd1 (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[9:40] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:42] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[9:45] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Ping timeout: 480 seconds)
[9:46] * leseb (~leseb@193.172.124.196) has joined #ceph
[9:56] * winston-d (~zhiteng@pgdmzpr01-ext.png.intel.com) Quit (Quit: Leaving)
[10:04] * xiaoxi (~xiaoxiche@134.134.137.71) Quit (Ping timeout: 480 seconds)
[10:05] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: This computer has gone to sleep)
[10:18] * Morg (d4438402@ircip3.mibbit.com) has joined #ceph
[10:20] * LeaChim (~LeaChim@b0faf18a.bb.sky.com) has joined #ceph
[10:22] * loicd1 (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Quit: Leaving.)
[10:22] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[10:50] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[10:55] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[10:56] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has joined #ceph
[11:04] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[11:06] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[11:14] <jtangwk> cool bobtail is out
[11:21] <ScOut3R> really?
[11:21] <ScOut3R> hm, really, cool :)
[11:24] * xiaoxi (~xiaoxiche@134.134.137.71) has joined #ceph
[11:25] <jtangwk> well the downloads links are there
[11:26] <jtangwk> been out of the office for the past week, so i didnt pay attention
[11:32] * yoshi (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:45] * yoshi (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[12:09] * yoshi (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:10] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:18] * phillipp (~phil@p5B3AEB0E.dip.t-dialin.net) has joined #ceph
[12:18] <phillipp> hi
[12:18] <phillipp> i have a question regarding my hardware setup
[12:21] <phillipp> i would like to build a "mixed" cluster where some clients need performance and others need cheap storage. we have systems with 12-36 slow drives having 2 or 3 TB each and some systems that have 10k scsi drives. should i put everything in the same cluster or would it be better to have separate clusters?
[12:22] <phillipp> in terms of maintenance one cluster seems the better way to go
[12:22] <madkiss> well.
[12:22] <madkiss> the only way for you to actually influence what drives the cluster gets to choose is manipulating their weight.
[12:23] <madkiss> You can't tell a client to have a preference for one group of OSDs or another one
[12:24] <madkiss> If you put all OSDs into one cluster, you might end up with a cluster where some read or write processes will be slow (if they hit normal spinners only) or super-fast (if they hit the 10k disks only)
[12:24] <madkiss> provided, of course, that your network is not a bottleneck
[12:25] <madkiss> What NICs do you have in that setup?
[12:25] <phillipp> wouldn't it be possible to make two pools and assign the OSDs accordingly?
[12:25] <phillipp> 2x 1Gig nic each
[12:25] <phillipp> per server
[12:25] <madkiss> You don't assign pools to OSDs.
[12:25] <madkiss> You tell Ceph "Here's a disk, do as you like"
[12:25] <madkiss> and it's gonna do as it likes.
[12:29] <Kioob`Taff> but you can set crush rules to really influence that
[12:29] * pixel (~pixel@188.72.65.190) has joined #ceph
[12:29] <Kioob`Taff> to mix SSD and HDD for example
[12:30] <Kioob`Taff> http://ceph.com/docs/master/rados/operations/crush-map/ <== « Placing Different Pools on Different OSDS »
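A minimal sketch of what that doc section describes, assuming the CRUSH map already has a separate root for the faster disks (the root name ssd, the ruleset number and the pool name below are all illustrative):

    rule ssd {
            ruleset 4
            type replicated
            min_size 1
            max_size 10
            step take ssd
            step chooseleaf firstn 0 type host
            step emit
    }

    # recompile and load the edited map, then point a pool at the new rule
    crushtool -c crushmap.txt -o crushmap.bin
    ceph osd setcrushmap -i crushmap.bin
    ceph osd pool set fast crush_ruleset 4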
[12:31] <madkiss> arg. i keep forgetting about that one.
[12:31] * terje__ (~joey@97-118-121-72.hlrn.qwest.net) has joined #ceph
[12:31] <Kioob`Taff> I didn't try for now, but I want to do exactly the same thing
[12:32] * terje (~terje@71-218-10-180.hlrn.qwest.net) Quit (Ping timeout: 480 seconds)
[12:32] <phillipp> ah, wonderful
[12:33] * terje_ (~joey@71-218-10-180.hlrn.qwest.net) Quit (Ping timeout: 480 seconds)
[12:33] <madkiss> i think network's gonna be your bottleneck anyway.
[12:34] <Kioob`Taff> but 10k drives are for low latency, not throughput, right? So the network bandwidth doesn't really matter
[12:35] <madkiss> oO
[12:35] <Kioob`Taff> you have 6ms of latency (instead of 8ms), so I suppose the 0.1ms of the network is not the problem
[12:36] <phillipp> yes, for latency, we have LOTS of random seeks
[12:36] <pixel> Hi everybody, Is it necessary to unmap images before rebooting a server? When I try to restart the server with mounted rbd images it always freezes and I have to reboot it manually.
[12:37] <madkiss> what distro? ubuntu 12.04?
[12:38] <pixel> madkiss Debian
[12:38] <madkiss> which kernel?
[12:38] <pixel> 3.6.10
[12:39] <madkiss> interesting
[12:39] <madkiss> The effect you describe actually sounds like the rbd kernel module is oopsing
[12:39] <madkiss> and i have heard lots of people reporting that here, but the general recommendation always was "update to a newer kernel", which, in your case, doesn't quite cut it
[12:40] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[12:40] <pixel> ok, going to update it to 3.7.4
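In the meantime, unmapping by hand before a reboot should sidestep the hang; a sketch (device and mount point are illustrative, taken from what rbd showmapped reports):

    rbd showmapped          # list kernel-mapped images and their /dev/rbdX devices
    umount /mnt/rbd0        # unmount any filesystem on the device first
    rbd unmap /dev/rbd0     # then release the mapping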
[12:48] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Quit: Leaving.)
[12:51] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[12:58] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[13:00] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit ()
[13:00] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[13:00] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[13:04] * stass (stas@ssh.deglitch.com) has joined #ceph
[13:07] <madkiss> hello wschulze
[13:07] <wschulze> good morning
[13:09] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[13:15] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has joined #ceph
[13:44] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[13:45] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[13:55] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[13:58] * loicd1 (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[14:01] * richard (~richard@host109-154-219-154.range109-154.btcentralplus.com) has joined #ceph
[14:01] * richard (~richard@host109-154-219-154.range109-154.btcentralplus.com) has left #ceph
[14:01] * rj175 (~richard@host109-154-219-154.range109-154.btcentralplus.com) has joined #ceph
[14:02] <rj175> hello, does anyone know of any web administration interfaces for Ceph?
[14:05] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Ping timeout: 480 seconds)
[14:05] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[14:11] * stass (stas@ssh.deglitch.com) has joined #ceph
[14:12] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[14:18] * gaveen (~gaveen@112.135.156.40) has joined #ceph
[14:24] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[14:24] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[14:27] * xiaoxi (~xiaoxiche@134.134.137.71) Quit ()
[14:28] * xiaoxi (~xiaoxiche@134.134.137.71) has joined #ceph
[14:31] * jluis (~JL@89.181.156.120) has joined #ceph
[14:31] * joao (~JL@89-181-156-120.net.novis.pt) Quit (Read error: Connection reset by peer)
[14:51] * goodbytes (~goodbytes@2a00:9080:f000::58) has joined #ceph
[14:53] * lx0 is now known as lxo
[14:55] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[14:57] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit ()
[14:58] * pixel (~pixel@188.72.65.190) Quit (Quit: Ухожу я от вас (xchat 2.4.5 или старше))
[14:59] <goodbytes> Hi guys. I'm having difficulties starting ceph monitor locally on Ubuntu using the init/service scripts.
[14:59] * aliguori (~anthony@cpe-70-112-157-151.austin.res.rr.com) has joined #ceph
[14:59] * aliguori_ (~anthony@cpe-70-112-157-151.austin.res.rr.com) has joined #ceph
[15:00] <goodbytes> I've defined a monitor as [mon.a], and use "service ceph -v start nmon.a" as instructed, in the docs
[15:00] <goodbytes> * "service ceph -v start mon.a" sorry.
[15:01] * aliguori_ (~anthony@cpe-70-112-157-151.austin.res.rr.com) Quit ()
[15:01] <goodbytes> and it returns: /usr/bin/ceph-conf -c /etc/ceph/ceph.conf -n mon.a "user"
[15:03] * rj175 (~richard@host109-154-219-154.range109-154.btcentralplus.com) Quit (Quit: rj175)
[15:04] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[15:14] * mattbenjamin (~matt@adsl-75-45-228-196.dsl.sfldmi.sbcglobal.net) has joined #ceph
[15:17] * rj175 (~richard@host109-154-219-154.range109-154.btcentralplus.com) has joined #ceph
[15:20] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) Quit (Ping timeout: 480 seconds)
[15:23] * nhorman (~nhorman@nat-pool-rdu.redhat.com) has joined #ceph
[15:23] * allsystemsarego (~allsystem@188.25.131.2) has joined #ceph
[15:27] * goodbytes_ (~goodbytes@2a00:9080:f000::58) has joined #ceph
[15:27] * goodbytes (~goodbytes@2a00:9080:f000::58) Quit (Read error: Connection reset by peer)
[15:35] * drokita (~drokita@199.255.228.128) has joined #ceph
[15:48] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has left #ceph
[15:49] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[15:50] * schlitzer|work (~schlitzer@109.75.189.45) Quit (Remote host closed the connection)
[15:53] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[15:58] <slang> goodbytes_: what's the full output of the 'service ceph -v start mon.a' command?
[15:58] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[16:00] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[16:02] <goodbytes_> slang: the full output is
[16:02] <goodbytes_> /usr/bin/ceph-conf -c /etc/ceph/ceph.conf -n mon.a "user"
[16:10] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[16:10] * ChanServ sets mode +o scuttlemonkey
[16:11] * goodbytes_ (~goodbytes@2a00:9080:f000::58) Quit (Read error: No route to host)
[16:11] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[16:14] * goodbytes (~goodbytes@2a00:9080:f000:0:69e7:b27e:2d13:652d) has joined #ceph
[16:21] * dosaboy (~gizmo@12.231.120.253) Quit (Quit: Leaving.)
[16:22] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[16:29] <slang> goodbytes: what if you remove the -v?
[16:30] * vata (~vata@2607:fad8:4:6:8161:b31d:b02b:fb00) has joined #ceph
[16:31] <goodbytes> slang, then i get no output. When I try to launch the mon daemon manually, e.g. ceph-mon --id a
[16:31] <goodbytes> then it starts up just fine
[16:32] <slang> what versions of ubuntu and ceph?
[16:34] * mattbenjamin (~matt@adsl-75-45-228-196.dsl.sfldmi.sbcglobal.net) Quit (Quit: Leaving.)
[16:36] <goodbytes> Ubuntu 12.04 LTS, Linux 3.2.0-23-generic-pae, ceph version 0.56.1
[16:36] <goodbytes> (it's running in a virtual machine for now)
[16:36] <goodbytes> ceph package version is: 0.56.1-1precise
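One thing worth double-checking here: the sysvinit script generally only acts on sections whose host entry matches the local short hostname, so it expects a [mon.a] block roughly like this (hostname, address and path are illustrative):

    [mon.a]
            host = ceph-vm                        ; should match `hostname -s` on this machine
            mon addr = 192.168.0.10:6789
            mon data = /var/lib/ceph/mon/ceph-a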
[16:39] * sander (~chatzilla@c-174-62-162-253.hsd1.ct.comcast.net) has joined #ceph
[16:41] * aliguori (~anthony@cpe-70-112-157-151.austin.res.rr.com) Quit (Quit: Ex-Chat)
[16:42] * Morg (d4438402@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[16:47] <slang> goodbytes: can you modify the first line of /etc/init.d/ceph to be:
[16:47] <slang> #!/bin/sh -x
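An equivalent way to get the same trace without editing the script is to run it through the shell directly, e.g.:

    sh -x /etc/init.d/ceph start mon.a 2>&1 | less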
[16:48] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[16:49] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[16:50] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:51] * verwilst (~verwilst@109.143.174.3) has joined #ceph
[16:56] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[16:56] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Quit: Leaving.)
[16:58] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[17:00] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[17:02] * jlogan (~Thunderbi@2600:c00:3010:1:a04a:9169:a45b:169f) has joined #ceph
[17:05] * ircolle (~ircolle@65.114.195.189) has joined #ceph
[17:08] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:10] <jefferai> what's mark nelson's nick here?
[17:10] <ircolle> nhm - makes perfect sense doesn't it? ;-)
[17:11] <jefferai> hah
[17:11] <jefferai> nhm: thanks for the article
[17:12] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:12] <jefferai> nhm: One thing I thought odd actually is the JBOD results
[17:12] <jefferai> I set things up as JBOD with one OSD per disk as I figured that was a better idea than setting RAID up on the hardware level
[17:12] <jefferai> let Ceph manage it
[17:13] <jefferai> it seems that in most cases the RAID-0 is a better idea with a large number of disks
[17:13] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Read error: Operation timed out)
[17:14] <jefferai> although you do run the risk of one OSD/disk going down taking your whole node with it
[17:18] <jefferai> The other thing is that btrfs performance generally seems quite a lot nicer
[17:19] <jefferai> I know that the Ceph guys officially still recommend XFS, but I wonder when that will change
[17:19] * rlr219 (43c87e04@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[17:19] <jefferai> especially if the nodes use the gitbuilder kernel for precise
[17:19] * aliguori (~anthony@32.97.110.59) has joined #ceph
[17:22] * sleinen1 (~Adium@2001:620:0:25:591e:4591:a589:525e) Quit (Quit: Leaving.)
[17:22] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[17:22] * sleinen (~Adium@130.59.94.54) has joined #ceph
[17:23] <jefferai> actually, the 3.6.3 kernel for precise went away on gitbuilder -- is the "master" ref on gitbuilder (which shows a 3.6.0 kernel) the Ceph-maintained one now?
[17:23] <jefferai> containing 3.6 + hand-picked Ceph updates?
[17:24] <nhm> jefferai: Heya, thanks!
[17:25] <nhm> jefferai: Are you looking at the big raid0 results , or the 8 seperate raid0 arrays with an OSD for each one?
[17:25] <jefferai> oh, huh
[17:25] <jefferai> Interesting
[17:25] <jefferai> I missed that that was 8 separate arrays
[17:25] <slang1> c
[17:26] <nhm> jefferai: Yeah, basically I did that because on the SAS2208, you only get to use WB cache if you are using a RAID mode, not JBOD.
[17:26] <jefferai> ah
[17:26] <jefferai> I think my box is SAS
[17:26] <nhm> jefferai: So JBOD = 8 OSDs without WB cache. 8xRAID0 is 8 separate single-disk RAID0 arrays, and RAID0 is a big 1-OSD RAID0 array.
[17:26] <jefferai> interesting
[17:26] <jefferai> single-disk raid0 just to fool the controller
[17:27] <jefferai> that's certainly something that I could do, if I take my osds out one at a time
[17:27] <jefferai> rather
[17:27] <nhm> It's actually what Dream Host is using with the SAS2108 since it doesn't support JBOD mode.
[17:27] <jefferai> mark all OSDs on one node down and use noin
[17:27] <jefferai> er
[17:27] <nhm> sorry DreamHost
[17:27] <jefferai> noup
[17:27] <jefferai> reboot, wipe the OSD disks, make them raid0
[17:28] <jefferai> etc
[17:28] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) has joined #ceph
[17:28] <jefferai> nhm: interesting idea, thanks for it
[17:30] <jefferai> nhm: do you know what's up with the Ceph-provided gitbuilder kernel?
[17:30] * sleinen (~Adium@130.59.94.54) Quit (Ping timeout: 480 seconds)
[17:30] <jefferai> is "master" what I think it is?
[17:30] <nhm> jefferai: Sure. It's not always a win, and in a lot of ways I like simple controllers with SSD journals better, but it's an option at least.
[17:30] <slang1> jefferai: the kernel version question is one for elder
[17:30] <jefferai> ah ok
[17:31] <slang1> jefferai: it does look like its based off of v3.6-rc5
[17:31] <elder> jefferai, what do you think "master" is?
[17:31] <jefferai> [11:23:12] <jefferai> actually, the 3.6.3 kernel for precise went away on gitbuilder -- is the "master" ref on gitbuilder (which shows a 3.6.0 kernel) the Ceph-maintained one now?
[17:31] <jefferai> [11:23:28] <jefferai> containing 3.6 + hand-picked Ceph updates?
[17:31] <slang1> jefferai: the gitbuilder is generated from the master branch in the ceph-client repo
[17:31] <slang1> jefferai: I assume you knew that already though
[17:32] * low (~low@188.165.111.2) Quit (Quit: Leaving)
[17:32] <jefferai> nhm: I have two SSDs on each box that are used for OS, with partitions on them to act as journals for the OSDs
[17:32] <jefferai> so I don't have a SSD journal per OSD, but they're split across two SSDs
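For reference, that layout is just each OSD pointing at its own SSD partition in ceph.conf; a minimal sketch (device names illustrative):

    [osd.0]
            osd journal = /dev/sda5     ; journal partition on the first SSD
    [osd.1]
            osd journal = /dev/sdb5     ; journal partition on the second SSD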
[17:32] <jefferai> slang1: nope, didn't really know about the ceph-client repo :-)
[17:32] <elder> At the moment master is that, but it will shortly (probably this week) be updated to be based on v3.8-rc4.
[17:33] <jefferai> but, I have my boxes using the kernel client on the 3.6.3 kernel that used to be there
[17:33] <elder> But yes, master will be based on some upstream kernel, plus some ceph-related update.
[17:33] <jefferai> I'm basically just wondering what kernel I should be using that is "recommended" for the kernel client
[17:33] <nhm> jefferai: You'll still probably benefit from WB cache, but with SSD journals it might not be as important as if everything were on spinning disks.
[17:34] <elder> The "testing" branch is even closer to the leading edge of development. After we've run extensive testing on that branch for a while we'll update master to include the content from testing.
[17:34] <jefferai> although, I'm thinking of ditching my VM solution that is using the kernel client, and instead using straight qemu+kvm which uses librados
[17:34] <elder> So "master" is probably best.
[17:34] <jefferai> which according to recent mailing list answers is probably a faster solution anyways
[17:35] * dosaboy (~gizmo@12.231.120.253) Quit (Quit: Leaving.)
[17:35] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[17:35] <jefferai> elder: that said -- the benchmarks on the blog consistently show that btrfs provides a serious performance win -- at what point do you think you guys will be comfortable recommending it as the official way to do things?
[17:35] <jefferai> I don't see much chatter these days about its stability, which is probably a good thing
[17:35] <drokita> What are people experiences with hosting a high-traffic SQLServer database on an RND device?
[17:35] <drokita> RND=RBD
[17:36] <slang1> jefferai: do you mean which version of the mainline kernel? or which version of our gitbuilder generated kernels?
[17:36] <elder> jefferai, I really don't know the answer.
[17:36] <nhm> jefferai: One thing to keep in mind is that BTRFS tends to degrade over time faster than XFS (and probably EXT4)
[17:36] <jefferai> slang1: gitbuilder -- I like using a kernel that's been tested with Ceph specifically :-)
[17:36] <janos> does btrfs have tools to managing/fixing degradation?
[17:36] <jefferai> nhm: degrade how?
[17:36] <jefferai> speedwise?
[17:36] <nhm> jefferai: yeah
[17:36] <jefferai> ah
[17:37] <jefferai> fragmenting, I would guess...
[17:37] <jefferai> I actually have a side reason for asking this
[17:37] <nhm> jefferai: I haven't had time to really explore in depth, and I haven't tested how much better it is with recent kernels.
[17:37] <jefferai> I see
[17:37] <jefferai> so the problem I have is that for some workloads in my VMs, I really want a snapshotting filesystem
[17:37] <nhm> jefferai: Last summer it would at some point get slower than XFS.
[17:37] <jefferai> I've used zfsonlinux multiple times in the past with no problems
[17:37] <jefferai> but right now for some reason it is *amazingly* slow
[17:37] <jefferai> in my VMs
[17:37] <nhm> jefferai: at least with small ios. Large IOs seemed to be affected less.
[17:38] <jefferai> so I'm trying to sort what about my VMs is making it behave that way -- it's the same on precise, wheezy, and oneiric
[17:38] <janos> appears there is a mount option for btrfs for autodefrag. not sure if implemented
[17:38] <jefferai> and maybe it's something about running in a VM period, or maybe it's something about running in qemu, or the version of qemu in precise
[17:38] <jefferai> I've only run zfsonlinux on bare metal before
[17:38] <nhm> janos: yeah, it's possible that might help.
[17:38] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has joined #ceph
[17:38] <jefferai> but when I say slow, I mean, like, 250K write speed, sustained
[17:38] <janos> ouch
[17:39] <jefferai> so if I want to consider btrfs instead, I want to have some confidence in the underlying code
[17:39] <jefferai> the other thing I may try doing is upgrading one of my VM host boxes to oneiric and seeing if that version of qemu behaves better
[17:39] <jefferai> and/or trying the same vm setup on vmware and seeing how it behaves
[17:39] * verwilst (~verwilst@109.143.174.3) Quit (Quit: Ex-Chat)
[17:39] <jefferai> the good thing is I don't think it's ceph's fault, because on the same VM ext4 is quite fast
[17:40] <jefferai> and both are via the kernel client
[17:40] <nhm> jefferai: I'll also say this: With some improvements that our in one of our wip branches, EXT4 is starting to look pretty competitive. Won't know how it all shakes out until more of the work is done though.
[17:40] <nhm> s/our/are
[17:40] <jefferai> nhm: ah, interesting
[17:40] <jefferai> even with the issues with xattrs eh
[17:41] <nhm> jefferai: leveldb seems to do a pretty good job.
[17:41] <jefferai> what do you mean?
[17:45] * rj175 (~richard@host109-154-219-154.range109-154.btcentralplus.com) Quit (Quit: rj175)
[17:47] <nhm> jefferai: with EXT4 leveldb is handling a lot of the xattrs. Potentially moving other things into leveldb may also give us a nice performance bump too.
[17:47] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) has joined #ceph
[17:50] * sleinen1 (~Adium@2001:620:0:26:a9e8:3df8:951:f769) has joined #ceph
[17:52] * tnt (~tnt@212-166-48-236.win.be) Quit (Read error: Operation timed out)
[17:52] <jefferai> nhm: ah, I see
[17:52] <jefferai> cool
[17:53] <jefferai> nhm: elder: You guys might actually know the answer to this -- is there a problem with having a number of placement groups on a pool that is not a power of 2?
[17:53] <jefferai> I followed the instructions in the documentation for choosing the number of PGs, but later was told that there may be a performance hit since I didn't choose a power of 2 (which the docs did not specify)
[17:54] <nhm> jefferai: it's complicated. Data might not be as evenly distributed as it should be with a non-power-of-2 distribution, but if you have enough PGs it might not actually matter that much.
[17:55] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[17:56] <nhm> jefferai: It's actually something I want to test one of these days. Try various numbers of PGs and look at the resulting maps and how data gets distributed after lots of writes.
[17:56] <jefferai> is there a good way to actually see the distribution of data?
[17:56] <jefferai> and, I have about 7k PGs, so *probably* it's fine? :-)
[17:57] <nhm> jefferai: best bet is probably to just look at the number of objects and sizes on each OSD.
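Both of those are visible from the standard status commands, for example:

    ceph pg dump      # per-PG stats, with per-OSD usage totals at the end
    rados df          # per-pool object counts and sizes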
[17:57] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[17:59] <jefferai> ah, ok
[18:00] <nhm> jefferai: I actually don't know how much if any imbalance there would be with 7k PGs. All I know right now is that ceph_stable_mod produces uneven distributions with non-power-of-2 bucket counts, but that's not the whole story because it's only part of how things work.
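For the curious, ceph_stable_mod is only a few lines (roughly as it appears in the Ceph source tree). With a non-power-of-2 bucket count b, the values between b and bmask fold back onto part of the lower half, so some buckets receive about twice the share of others:

    /* b is the number of buckets, bmask the containing power of two minus 1,
     * e.g. b=12 -> bmask=15 */
    static inline int ceph_stable_mod(int x, int b, int bmask)
    {
            if ((x & bmask) < b)
                    return x & bmask;
            else
                    return x & (bmask >> 1);
    }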
[18:03] <jefferai> Ah
[18:04] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[18:10] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[18:10] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[18:10] * match (~mrichar1@pcw3047.see.ed.ac.uk) Quit (Read error: Operation timed out)
[18:12] * tnt (~tnt@204.203-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:15] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[18:24] * yehudasa_ (~yehudasa@m870436d0.tmodns.net) has joined #ceph
[18:25] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[18:28] * loicd1 (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Ping timeout: 480 seconds)
[18:31] * leseb (~leseb@193.172.124.196) Quit (Remote host closed the connection)
[18:31] * leseb (~leseb@mx00.stone-it.com) has joined #ceph
[18:32] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has joined #ceph
[18:33] <noob21> hey quick question about rados gateway pools. i see that you can add and remove pools with the admin command. i don't see however how you select which pool to use
[18:35] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[18:36] <PerlStalker> noob21: Most of the tools have a --pool option.
[18:38] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[18:38] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[18:38] <noob21> yeah i know but it seems like the rados gw does not?
[18:39] * portante (~user@66.187.233.206) Quit (Quit: updates)
[18:39] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[18:39] * leseb (~leseb@mx00.stone-it.com) Quit (Ping timeout: 480 seconds)
[18:40] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) has joined #ceph
[18:42] <noob21> it doesn't mention anywhere in the radosgw-admin or the config sections where to set the pool :-/
[18:50] * yehudasa_ (~yehudasa@m870436d0.tmodns.net) Quit (Read error: Connection reset by peer)
[18:52] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[18:54] * dosaboy (~gizmo@12.231.120.253) Quit (Ping timeout: 480 seconds)
[18:55] * buck1 (~Adium@soenat3.cse.ucsc.edu) has joined #ceph
[18:56] * match (~mrichar1@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:57] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[19:00] * dosaboy (~gizmo@12.231.120.253) Quit ()
[19:00] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[19:01] * portante (~user@66.187.233.206) has joined #ceph
[19:01] * glowell (~glowell@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[19:01] * buck1 (~Adium@soenat3.cse.ucsc.edu) Quit (Quit: Leaving.)
[19:08] * jlogan (~Thunderbi@2600:c00:3010:1:a04a:9169:a45b:169f) Quit (Ping timeout: 480 seconds)
[19:10] * yehudasa_ (~yehudasa@38.122.20.226) has joined #ceph
[19:10] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[19:10] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Remote host closed the connection)
[19:10] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:10] * xmltok_ (~xmltok@pool101.bizrate.com) has joined #ceph
[19:11] * stxShadow1 (~Jens@jump.filoo.de) has joined #ceph
[19:14] * xmltok (~xmltok@pool101.bizrate.com) Quit (Ping timeout: 480 seconds)
[19:14] * ScOut3R (~ScOut3R@dsl51B61EED.pool.t-online.hu) has joined #ceph
[19:15] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) Quit (Ping timeout: 480 seconds)
[19:16] <jluis> running a bit late for the stand-up; phone cal
[19:16] <jluis> *call
[19:17] <xdeller> any ideas on how to avoid NMIs being emitted under heavy io?
[19:18] <xdeller> should cgroup blkio limits be tried on the osd daemons, or is that useless?
[19:21] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[19:24] * jlogan (~Thunderbi@72.5.59.176) has joined #ceph
[19:31] * pagefaulted (~pagefault@199.181.135.135) has joined #ceph
[19:32] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[19:32] * xmltok_ (~xmltok@pool101.bizrate.com) Quit (Quit: Leaving...)
[19:33] <elder> What is the convention used in ceph logs for indicating (offset, length)
[19:33] <elder> Is it [off:len] or something?
[19:33] * chutzpah (~chutz@199.21.234.7) Quit (Read error: Connection reset by peer)
[19:34] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[19:39] <elder> Wait, it's off~len isn't it?
[19:40] <noob21> in the ceph clusters you guys have built how many monitors do you usually settle on? i have a cluster of 6 and 3 monitors running on the nodes
[19:40] <Vjarjadian> i saw on a video that more than 3 can be counter productive
[19:41] <Vjarjadian> and iirc you should have an odd number of them anyway
[19:41] <noob21> yeah i know about the odd number :)
[19:41] <noob21> so 3 seems to be the preferred amount?
[19:42] <noob21> i was thinking of adding 2 more which would be vm's and on different storage. that way if i lose a few ceph osd servers i don't take down the majority of my monitors also
[19:42] <noob21> they're combined at the moment
[19:44] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:49] <pagefaulted> /SET irc_conf_mode 1
[19:56] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[19:58] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[20:02] * sjust (~sam@38.122.20.226) has joined #ceph
[20:04] * dmick (~dmick@2607:f298:a:607:7c6f:c7c1:2b41:8647) has joined #ceph
[20:05] <slang1> .
[20:05] * stxShadow1 (~Jens@jump.filoo.de) Quit (Read error: Connection reset by peer)
[20:06] * dosaboy (~gizmo@12.231.120.253) Quit (Quit: Leaving.)
[20:06] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has joined #ceph
[20:10] * yehudasa_ (~yehudasa@38.122.20.226) Quit (Ping timeout: 480 seconds)
[20:11] <dmick> slang1: ,
[20:11] <slang1> dmick: !
[20:11] <tnt> ceph-osd process mem usage : http://i.imgur.com/ZJxyldq.png definitely a problem somewhere :p
[20:11] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[20:13] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[20:14] * phillipp (~phil@p5B3AEB0E.dip.t-dialin.net) has left #ceph
[20:14] * gaveen (~gaveen@112.135.156.40) Quit (Ping timeout: 480 seconds)
[20:15] * Ryan_Lane (~Adium@216.38.130.166) has joined #ceph
[20:17] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[20:19] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[20:20] * yehudasa_ (~yehudasa@m870436d0.tmodns.net) has joined #ceph
[20:23] * gaveen (~gaveen@112.135.133.218) has joined #ceph
[20:23] <tnt> mmm, the rise in memory usage seems to correlate with higher disk read activity ... whatever that means ...
[20:25] <dmick> tnt: it's *possible* that's valid memory usage; it would be interesting to see what valgrind has to say
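A sketch of one way to do that for a single OSD (assumes osd.0, run in the foreground, and accepts that valgrind makes it very slow):

    # stop osd.0, then restart it under massif and inspect the heap profile
    valgrind --tool=massif ceph-osd -i 0 -d
    ms_print massif.out.<pid>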
[20:26] <tnt> dmick: well, given that left unchecked it grows to several G and I have to restart it like every 6 days ... I doubt it.
[20:29] <morpheus__> restarting the osds every few days is really annoying
[20:29] <dmick> oh yeah, you shouldn't have to do that, and I'm certainly willing to believe there's a leak, just saying....storage daemons do cache things
[20:29] <dmick> but it sounds like it could use some more investigation
[20:32] <tnt> It's still 0.48.3 and I'll be upgrading this week end to 0.56.1 so I'll see if that helps or not.
[20:33] * jtang1 (~jtang@79.97.135.214) has joined #ceph
[20:39] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has joined #ceph
[20:39] <tnt> Interestingly on several OSD (all the ones I checked), at the time the memory rose, it was scrubbing PGs from pool #3 ...
[20:39] <tnt> (it scrubbed PGs from other pools and that didn't trigger anything)
[20:40] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[20:40] <dmick> tnt: that is indeed interesting
[20:41] <tnt> trying to find which one is pool #3 but can't remember how to get pool_name <-> id map
[20:42] <sjust> you can get it with ceph osd dump
[20:42] <sjust> there is probably a better way
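Either of these shows the mapping:

    ceph osd dump | grep '^pool'    # full pool definitions, including the numeric id
    ceph osd lspools                # compact "id name" listing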
[20:42] <tnt> Ok, so pool 3 is the pool where I have all the RBD images
[20:44] <tnt> which is probably the one with the most read/write activity. The RGW pools are mostly write-once then just read.
[20:46] * nhorman (~nhorman@nat-pool-rdu.redhat.com) Quit (Quit: Leaving)
[20:47] * yehudasa_ (~yehudasa@m870436d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[20:56] * yehudasa_ (~yehudasa@38.122.20.226) has joined #ceph
[20:58] * fghaas1 (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has joined #ceph
[20:58] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[21:09] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[21:20] * rturk-away is now known as rturk
[21:24] * The_Bishop (~bishop@cable-89-16-157-34.cust.telecolumbus.net) Quit (Ping timeout: 480 seconds)
[21:25] * jlogan (~Thunderbi@72.5.59.176) Quit (Read error: Connection reset by peer)
[21:25] * jlogan (~Thunderbi@72.5.59.176) has joined #ceph
[21:27] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[21:28] * yehudasa_ (~yehudasa@38.122.20.226) Quit (Read error: Operation timed out)
[21:29] * joao (~JL@89.181.149.199) has joined #ceph
[21:29] * ChanServ sets mode +o joao
[21:32] * The_Bishop (~bishop@2001:470:50b6:0:5009:fa73:f34e:ff7b) has joined #ceph
[21:34] * sagewk (~sage@2607:f298:a:607:7c9e:ad40:b0ef:25d8) has joined #ceph
[21:34] * jluis (~JL@89.181.156.120) Quit (Ping timeout: 480 seconds)
[21:36] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Quit: Ex-Chat)
[21:43] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[21:48] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:49] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:53] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[21:55] * BManojlovic (~steki@85.222.183.190) has joined #ceph
[22:03] * ircolle (~ircolle@65.114.195.189) Quit (Quit: Leaving.)
[22:05] * alram (~alram@38.122.20.226) has joined #ceph
[22:09] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[22:10] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:10] * gaveen (~gaveen@112.135.133.218) Quit (Remote host closed the connection)
[22:24] * dosaboy (~gizmo@12.231.120.253) Quit (Quit: Leaving.)
[22:36] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[22:39] <loicd> leseb: ping ?
[22:39] <leseb> loicd: yes?
[22:42] * dosaboy (~gizmo@12.231.120.253) Quit (Remote host closed the connection)
[22:42] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[22:43] <loicd> leseb: it's not possible to use the legal room. There are strict rules for room occupancies. However I'm sure we'll be able to squat a place near a booth or something. I'll figure something out saturday morning and text everyone the location when it's set :-)
[22:43] <loicd> (context http://wiki.ceph.com/deprecated/FOSDEM2013 )
[22:43] <nhm> *jealous* :)
[22:44] <tnt> Oh woaw, I hadn't realized it was in less than 2 weeks.
[22:44] <tnt> time flies by ...
[22:44] <loicd> :-)
[22:44] <leseb> loicd: arf ok that's too bad, in the meantime it sounds normal. I'm pretty sure that we will find a place :)
[22:45] <tnt> well, there is always the hackers rooms where it's possible to group & discuss.
[22:45] <nhm> I would drink so much beer if I went there.
[22:47] <loicd> leseb: yes, I'm not worried :-)
[22:47] <loicd> nhm: we'll drink for you
[22:47] <noob21> :D
[22:48] <nhm> loicd: bastard. ;)
[22:48] <nhm> loicd: eat some waffles too
[22:49] <noob21> anyone up on how to specify the pool to use with the rados gw?
[22:49] <nhm> mmm, and kriek
[22:49] <nhm> noob21: I once knew how to do that I think.
[22:50] <noob21> nice. yeah it's not readily apparent
[22:50] <loicd> nhm: beers & waffles ? it's going to be a tough week-end ;-)
[22:51] <noob21> nhm: has the problem with adding new osds to 0.56.1 been worked out yet?
[22:52] <nhm> noob21: hrm, I think I remember seeing someone report that their cluster was stable after they did some tuning that I think Sage recommended. I haven't been paying very close attention though.
[22:52] * jtang1 (~jtang@79.97.135.214) Quit (Quit: Leaving.)
[22:52] * fghaas1 (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[22:52] <noob21> gotcha
[22:53] <noob21> i wonder what he had to do to get it to stabilize
[22:53] <noob21> i've been reading about the paxos algorithm. it's pretty interesting how it works
[22:54] * jtang1 (~jtang@79.97.135.214) has joined #ceph
[22:54] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[22:54] * jtang1 (~jtang@79.97.135.214) Quit ()
[22:55] <dmick> noob2: what was the symptom you were experiencing?
[22:56] <noob21> dmick: nothing yet but i haven't added any osd's into my cluster yet. in about a week i'm planning on adding more so i'll see what happens
[22:56] <noob21> i'm running 0.56.1-1 on ubuntu 12.10
[23:02] <dmick> I don't really remember a universal symptom everyone was having; it may be they're just fine. I do know another point release is imminent.
[23:02] <noob21> awesome
[23:03] <noob21> so these point releases are still leading up to bobtail right?
[23:03] <noob21> from my usage on 0.56.1-1 it's been very stable
[23:03] <noob21> i've been beating the hell out of it with no problems to report
[23:04] <dmick> yes, still bobtail
[23:14] <noob21> i think the radosgw-admin command is broken in 0.56.1-1
[23:15] <noob21> root@dlcephgw01:~# radosgw-admin log show --bucket=chris --date=2012-04-01
[23:15] <noob21> object or (at least one of date, bucket, bucket-id) were not specified
[23:18] <dmick> yep
[23:18] <dmick> if (object.empty() && (date.empty() || bucket_name.empty() || bucket_id.empty()))
[23:18] <noob21> heh
[23:18] <dmick> those last || should be &&
[23:18] <noob21> exactly
[23:20] <noob21> at first glance it seems right though
[23:20] <noob21> if you have an object and a date or bucket or bucket id then proceed
[23:21] <dmick> yes, but you have to invert the && too
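Spelled out, the check dmick is describing would look something like this (a sketch of the suggested change, not necessarily the fix that was committed):

    /* error out only when neither an object nor any of date, bucket,
     * bucket-id was given -- i.e. the inverse of "proceed if we got an
     * object, or at least one of the three" */
    if (object.empty() && (date.empty() && bucket_name.empty() && bucket_id.empty()))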
[23:21] <noob21> is there a bug for this already?
[23:21] <dmick> dunno
[23:22] <noob21> i'll check
[23:22] <dmick> http://tracker.newdream.net/issues/3513
[23:22] <noob21> yup :)
[23:24] <dmick> I suggest you add a note there
[23:25] <noob21> yeah i'll put something like i experienced this also in 0.56.1
[23:27] * jlogan (~Thunderbi@72.5.59.176) Quit (Read error: Connection reset by peer)
[23:27] * jlogan (~Thunderbi@72.5.59.176) has joined #ceph
[23:28] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[23:29] * xdeller (~xdeller@broadband-77-37-224-84.nationalcablenetworks.ru) Quit (Quit: Leaving)
[23:29] * ScOut3R (~ScOut3R@dsl51B61EED.pool.t-online.hu) Quit (Ping timeout: 480 seconds)
[23:30] <noob21> updated
[23:30] <noob21> and i added your patch note
[23:41] <noob21> dmick: how does one go about helping with simple bugs? like this bug i could see as easy pickings: http://tracker.newdream.net/issues/136
[23:44] * sleinen1 (~Adium@2001:620:0:26:a9e8:3df8:951:f769) Quit (Quit: Leaving.)
[23:50] <wer> is rest-bench still a viable option for testing radosgw throughput or is there a more preferred method now? I notice all my old references to rest-bench in the ceph wiki are gone....
[23:50] <wer> tsung has serious memory constraints on large file puts.... works great for gets though. So I need to get stats on puts and am looking for the low hanging fruit.
[23:52] <wer> and by large files I mean 1MB files. 1 Megabyte files. :)
[23:56] <dmick> noob21: you know git and github?
[23:59] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[23:59] <dmick> wer: rest-bench should still work. The wiki has been deprecated, but the tool has not

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.