#ceph IRC Log


IRC Log for 2013-02-21

Timestamps are in GMT/BST.

[0:27] * diegows (~diegows@190.188.190.11) has joined #ceph
[0:27] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:28] * vata (~vata@2607:fad8:4:6:5d63:7f49:6476:4376) Quit (Quit: Leaving.)
[0:29] <jackhill> I started playing with ceph today. So far it is awesome. Thanks!!
[0:30] * ScOut3R (~scout3r@1F2EAE7E.dsl.pool.telekom.hu) has joined #ceph
[0:31] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Quit: Leaving.)
[0:52] <dmick> jackhill: cool!
[0:54] <Kdecherf> jackhill: ;-)
[0:58] * PerlStalker (~PerlStalk@72.166.192.70) Quit (Quit: ...)
[1:03] * ScOut3R (~scout3r@1F2EAE7E.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[1:05] * slang1 (~slang@c-24-12-181-11.hsd1.il.comcast.net) has joined #ceph
[1:24] * cocoy (~Adium@180.190.218.145) has joined #ceph
[1:26] * Qten (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit ()
[1:32] * ChanServ sets mode +o dmick
[1:32] * dmick changes topic to 'v0.56.3 has been released -- http://goo.gl/f3k3U || argonaut v0.48.3 released -- http://goo.gl/80aGP'
[1:40] * LeaChim (~LeaChim@b0fac1c4.bb.sky.com) Quit (Remote host closed the connection)
[1:53] <cocoy> hi, is using ceph-fuse really not stable? seems my test using dd fails
[2:17] <iggy> cocoy: it doesn't get much attention, no
[2:18] <cocoy> iggy: thanks for the reply. normally the kernel module is used instead, right?
[2:19] <iggy> it's definitely going to be better than ceph-fuse
[2:19] <iggy> but keep in mind the filesystem bits still are considered not quite production ready
[2:21] <cocoy> ok i got it. ty.
[2:25] <gregaf1> I don't use the kernel client that much, but actually I wouldn't expect it to behave much better than ceph-fuse does — they both have their own individualized issues ;)
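For context, the two clients being compared above are mounted quite differently; a minimal sketch, assuming a monitor at 192.168.0.1:6789 and an admin secret already on the client (addresses and paths are placeholders):

    # CephFS kernel client (ceph.ko)
    mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
    # FUSE client
    ceph-fuse -m 192.168.0.1:6789 /mnt/ceph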
[2:32] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[2:33] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[2:38] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[2:39] <buck> I'm seeing mkcephfs -a fail due to what, on the surface, seems to be an issue in get_name_list() in ceph_common.sh. Has anyone else seen mkcephfs act odd?
[2:40] <cocoy> gregaf1: thanks. haha i got my first issue loading the ceph.ko
[2:40] <cocoy> using ceph.ko on 3.2.0-38-virtual
[2:40] <cocoy> ubuntu 12.04 kernel 3.2.0-38-virtual
[2:41] <gregaf1> buck: everybody has seen mkcephfs act oddly :p
[2:41] <gregaf1> cocoy: I don't think you should need to load a kernel module in 12.04
[2:41] * diegows (~diegows@190.188.190.11) Quit (Ping timeout: 480 seconds)
[2:41] <buck> gregaf1: alright, fair point :) I need to run but I'll dig into this tomorrow
[2:42] * buck (~buck@bender.soe.ucsc.edu) has left #ceph
[2:42] <gregaf1> and that's all the advice I can offer you on kernel modules, unfortunately
[2:43] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[2:44] <cocoy> gregaf1: hmm it says ceph modules are not included on virtual-image kernels. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1063784
[2:45] <lurbs> Try installing linux-image-extra-virtual
[2:45] <cocoy> lurbs: thanks will try that.
[2:45] <lurbs> Not sure if it has the Ceph modules, but has a bunch of others.
[2:46] <cocoy> lurbs: what's on that extra btw? hahha
[2:46] <lurbs> Description-en: Linux kernel extra modules for virtual machines
[2:46] <lurbs> This package will always depend on the latest kernel extra modules available
[2:46] <lurbs> for virtual machines.
[2:47] <cocoy> i've upgraded to the current virtual kernel and ceph.ko is there. unfortunately it does not load. will install the extra package and test.
[2:48] <lurbs> Yeah, actually, it's there in the main -virtual kernel for me. Loads fine too.
[2:50] <cocoy> lurbs: got it loaded after installing extra. cool.
[2:50] <cocoy> thanks guys!
[2:51] <cocoy> let me test again mount dirs :)
[2:56] <lurbs> cocoy: Looks like they fixed it in linux-image-3.2.0-36-virtual, so you might be running an old kernel. With a -36 or above you won't need the -extra.
[2:57] <lurbs> http://paste.nothing.net.nz/5942ef
[2:59] <cocoy> actually using this: linux-image-3.2.0-38-virtual. ceph.ko does not load unless the extra package is installed.
[3:00] <lurbs> Weird.
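A sketch of the workaround discussed above, on an Ubuntu 12.04 guest running one of the -virtual kernels (package and module names as mentioned in the conversation):

    apt-get install linux-image-extra-virtual   # extra kernel modules for virtual machines
    modprobe ceph                               # load the CephFS client module (ceph.ko)
    lsmod | grep ceph                           # confirm it loaded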
[3:00] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[3:03] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[3:06] * slang1 (~slang@c-24-12-181-11.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[3:15] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[3:17] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[3:23] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[3:31] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[3:33] * jlogan (~Thunderbi@2600:c00:3010:1:d9b9:ba9f:bb49:6d7d) Quit (Ping timeout: 480 seconds)
[3:38] <cocoy> seems ceph with the default settings hangs when I mount a dir with mount.ceph and run: dd if=/dev/zero of=test2 bs=1M count=100
[3:38] <cocoy> :)
[3:42] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:51] * The_Bishop (~bishop@2001:470:50b6:0:e827:50da:d179:b5f5) Quit (Quit: Who the hell is this Peer? If I catch him I'll reset his connection!)
[3:52] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) Quit (Quit: Pogoapp - http://www.pogoapp.com)
[3:56] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[4:01] * rturk is now known as rturk-away
[4:24] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[4:25] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[4:37] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:40] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[4:42] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[4:43] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[4:44] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Remote host closed the connection)
[4:47] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[4:47] * The_Bishop (~bishop@e179007139.adsl.alicedsl.de) has joined #ceph
[4:55] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[5:18] * renzhi (~renzhi@116.226.50.112) has joined #ceph
[5:20] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) has joined #ceph
[5:44] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[5:51] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[6:04] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[6:05] <cocoy> ok. got it working. need to open some ports on ceph servers.
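The "open some ports" fix amounts to letting clients reach the monitors and OSDs; a hedged iptables sketch, assuming the default monitor port 6789 and the usual OSD/MDS port range of the time (check which ports your daemons actually bind to):

    iptables -A INPUT -p tcp --dport 6789 -j ACCEPT        # monitors
    iptables -A INPUT -p tcp --dport 6800:7100 -j ACCEPT   # OSDs / MDS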
[6:08] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90 [Firefox 18.0.2/20130201065344])
[7:23] * eschnou (~eschnou@65.72-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[7:40] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[7:42] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[7:50] * eschnou (~eschnou@65.72-201-80.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[7:50] * itamar (~itamar@82.166.185.149) has joined #ceph
[8:27] * loicd (~loic@90.84.144.119) has joined #ceph
[8:39] * l0nk (~alex@83.167.43.235) has joined #ceph
[8:52] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Quit: Leaving.)
[8:52] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[8:53] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[8:53] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[8:54] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[8:54] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[8:54] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[8:54] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[8:54] * gerard_dethier (~Thunderbi@85.234.217.115.static.edpnet.net) has joined #ceph
[8:55] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[8:55] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[8:56] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[8:56] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[8:57] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[8:57] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[8:58] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[8:58] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[8:59] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[8:59] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[8:59] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit ()
[9:01] * fghaas (~florian@91-119-74-57.dynamic.xdsl-line.inode.at) has joined #ceph
[9:03] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[9:04] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[9:04] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[9:04] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[9:04] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[9:05] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit ()
[9:12] * fghaas (~florian@91-119-74-57.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[9:16] * The_Bishop (~bishop@e179007139.adsl.alicedsl.de) Quit (Quit: Who the hell is this Peer? If I catch him I'll reset his connection!)
[9:19] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[9:23] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) has joined #ceph
[9:25] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[9:30] * renzhi (~renzhi@116.226.50.112) Quit (Quit: Leaving)
[9:34] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:35] * LeaChim (~LeaChim@b0fac1c4.bb.sky.com) has joined #ceph
[9:38] * itamar (~itamar@82.166.185.149) Quit (Quit: Leaving)
[9:44] * dosaboy (~user1@host86-164-227-220.range86-164.btcentralplus.com) has joined #ceph
[9:44] * Ul (~Thunderbi@ip-83-101-40-151.customer.schedom-europe.net) has joined #ceph
[9:48] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[9:48] * ChanServ sets mode +o scuttlemonkey
[9:48] * Ul (~Thunderbi@ip-83-101-40-151.customer.schedom-europe.net) Quit ()
[9:49] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[10:00] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) Quit (Quit: Depression is merely anger without enthusiasm)
[10:04] * loicd (~loic@90.84.144.119) Quit (Quit: Leaving.)
[10:09] * sileht (~sileht@sileht.net) Quit (Server closed connection)
[10:10] * sileht (~sileht@sileht.net) has joined #ceph
[10:22] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[10:23] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[10:26] * fghaas (~florian@91-119-74-57.dynamic.xdsl-line.inode.at) has joined #ceph
[10:32] * loicd (~loic@90.84.144.119) has joined #ceph
[10:38] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[10:40] * gregorg (~Greg@78.155.152.6) has joined #ceph
[10:50] * Robe (robe@amd.co.at) Quit (Server closed connection)
[10:50] * Robe (robe@amd.co.at) has joined #ceph
[11:10] * loicd (~loic@90.84.144.119) Quit (Ping timeout: 480 seconds)
[11:29] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[11:46] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[11:47] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) has joined #ceph
[11:48] * trond (~trond@trh.betradar.com) has joined #ceph
[11:49] <lxo> is it intentional that mdsmap/ isn't trimmed in monitors? MDSMonitor.cc seems to be missing a call to trim_to (in MDSMonitor::update_from_paxos?)
[11:50] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[11:53] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[11:59] <joao> lxo, don't think it is
[11:59] <joao> I'll check that in a moment
[12:08] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[12:10] <joao> lxo, yeah, it's not there
[12:10] <joao> I have the feeling that it was meant to hold one single version
[12:10] <joao> given the way the monitor creates the pending value, and how it's not based on incrementals at all
[12:11] * leseb (~leseb@mx00.stone-it.com) has joined #ceph
[12:11] <joao> and at some point, it just started holding multiple versions, and didn't trim the previous versions
[12:12] <joao> that's my guess
[12:17] * fghaas (~florian@91-119-74-57.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[12:36] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[12:41] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit ()
[12:42] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Server closed connection)
[12:42] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[12:42] * ChanServ sets mode +o elder
[12:48] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[12:56] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (Server closed connection)
[12:57] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[13:00] * Rocky (~r.nap@188.205.52.204) Quit (Server closed connection)
[13:00] * Rocky (~r.nap@188.205.52.204) has joined #ceph
[13:00] * diegows (~diegows@190.188.190.11) has joined #ceph
[13:04] * benr (~benr@puma-mxisp.mxtelecom.com) Quit (Server closed connection)
[13:04] * benr (~benr@puma-mxisp.mxtelecom.com) has joined #ceph
[13:09] * sstan (~chatzilla@dmzgw2.cbnco.com) Quit (Server closed connection)
[13:10] * sstan (~chatzilla@dmzgw2.cbnco.com) has joined #ceph
[13:10] * fghaas (~florian@91-119-74-57.dynamic.xdsl-line.inode.at) has joined #ceph
[13:12] * loicd (~loic@lvs-gateway1.teclib.net) has joined #ceph
[13:19] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:19] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[13:24] <cocoy> hmm, I'm wondering if anybody here is using CephFS as a replacement for NFS.
[13:24] * fghaas (~florian@91-119-74-57.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[13:24] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[13:37] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:42] * fghaas (~florian@91-119-74-57.dynamic.xdsl-line.inode.at) has joined #ceph
[13:55] * sbadia (~sbadia@yasaw.net) Quit (Server closed connection)
[13:55] * itamar (~itamar@82.166.185.149) has joined #ceph
[13:55] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[13:56] * sbadia (~sbadia@yasaw.net) has joined #ceph
[13:58] <Gugge-47527> cocoy: i would like to, but would need a freebsd client first :)
[14:03] * sdx32 (~sdx23@with-eyes.net) has joined #ceph
[14:05] <sdx32> Hi. The ceph wiki seems to have disappeared, and with it any hint that mounting with the kernel driver on an OSD host will result in a panic. Maybe you want to include this in the new documentation as well.
[14:06] <scuttlemonkey> sdx32: that'd be a great doc bug to file
[14:06] <scuttlemonkey> the wiki is still reachable...but so much of it was outdated that we deprecated it
[14:06] <scuttlemonkey> http://wiki.ceph.com/deprecated
[14:07] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[14:07] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:07] <scuttlemonkey> we wanted people to be aware that a good portion of the info there didn't apply or was just plain wrong since new versions were (in some cases) drastically different
[14:09] <fghaas> scuttlemonkey: that's appreciated, but as a general rule for documentation please don't just remove pages. if need be, replace them with redirects to the new information
[14:09] <fghaas> (applies to the "official" docs too)
[14:09] <scuttlemonkey> fghaas: yeah, I don't control the new docs...but the wiki had redirects up for quite a while
[14:09] <fghaas> reorganization and streamlining is great, but please do preserve "old" urls so external blog posts and the like don't lead to 404s
[14:10] <scuttlemonkey> but the new docs were pretty different so it was hard to maintain...now it's just a blanket redirect to the base doc url I think
[14:10] <scuttlemonkey> modulo cookies/caching
[14:13] <fghaas> scuttlemonkey: ok, thanks
[14:24] * Footur (~smuxi@31-18-48-121-dynip.superkabel.de) Quit (Read error: Connection reset by peer)
[14:37] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[14:38] <cocoy> Gugge-47527: seems you need to compile it for freebsd :)
[14:43] * xdeller (~xdeller@87.245.187.2) has joined #ceph
[14:46] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[15:03] * xdeller (~xdeller@87.245.187.2) Quit (Ping timeout: 480 seconds)
[15:05] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[15:06] * xdeller (~xdeller@178.176.13.153) has joined #ceph
[15:08] * joao (~JL@89.181.154.116) Quit (Server closed connection)
[15:08] * joao (~JL@89-181-154-116.net.novis.pt) has joined #ceph
[15:08] * ChanServ sets mode +o joao
[15:12] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[15:22] * nhorman (~nhorman@nat-pool-rdu.redhat.com) has joined #ceph
[15:23] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) Quit (Quit: Leaving.)
[15:23] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) has joined #ceph
[15:24] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[15:28] * ron-slc_ (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[15:31] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[15:32] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[15:50] * xdeller (~xdeller@178.176.13.153) Quit (Quit: Leaving)
[15:58] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Remote host closed the connection)
[16:00] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[16:18] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[16:29] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:33] * aliguori (~anthony@32.97.110.59) has joined #ceph
[16:38] * itamar (~itamar@82.166.185.149) Quit (Ping timeout: 480 seconds)
[16:38] * cocoy (~Adium@180.190.218.145) Quit (Quit: Leaving.)
[16:59] * gerard_dethier (~Thunderbi@85.234.217.115.static.edpnet.net) Quit (Quit: gerard_dethier)
[17:03] * jlogan1 (~Thunderbi@72.5.59.176) has joined #ceph
[17:03] * aliguori_ (~anthony@32.97.110.59) has joined #ceph
[17:05] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) Quit (Remote host closed the connection)
[17:10] * aliguori (~anthony@32.97.110.59) Quit (Ping timeout: 480 seconds)
[17:19] <ron-slc_> Hello, I have a hopefully quick mds question on Bobtail. I've noticed that when rsync scans the CephFS (read/compare/dry-run only), the mds causes write activity. I have attempted to mount cephfs with noatime,nodiratime, but these options seem to have no effect. Is there an optimization so that dir/file reads do not cause file stat updates? (which is what I'm assuming is happening and causing the writes.)
[17:22] <ron-slc_> Or, would this possibly be the mds balancer, writing temperature info to metadata?
[17:23] <fghaas> ron-slc_: you're seeing write activity in osd filestores and journals, I suppose?
[17:23] <ron-slc_> correct, both the osd volume, and also the mds volume
[17:28] <fghaas> well if you didn't mount _those_ with noatime, then that write activity would be entirely expected even if the mds daemon is only reading
[17:29] <fghaas> mounting the cephfs with noatime would make zero difference to that
[17:29] <ron-slc_> fghaas: Hmm the osds are mounted noatime, but the mds volume is not.
[17:30] <fghaas> ttbomk the only thing that's being read in the mds data dir is the keyring, there shouldn't be much else that the mds is reading from or writing to disk
[17:31] <fghaas> I wonder why you have a separate mds filesystem, really
[17:31] <ron-slc_> Well, the /var/lib/ceph/mds is shared with the root / OS in this case.
[17:31] <ron-slc_> and the OSDs are dedicated disks.
[17:32] <fghaas> then how do you detect write activity "in the mds volume"? inotifywatch?
[17:32] <ron-slc_> The reason I ask, is these data-writes seem to really slow down read-only/compare operations when verifying 250k files
[17:33] <ron-slc_> Hmm I am using the default settings for anything like that. I didn't know there was an option in this regard
[17:34] <ron-slc_> oh wait, I misunderstood... I see the write-activity, using iostat -k , and I also see kb/s write info via ceph -w
[17:35] <ron-slc_> nothing too fancy
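If atime updates on the backing filesystems are indeed the cause, the remedy fghaas is pointing at is to mount the OSD filestores and the volume holding the mon/mds data dirs with noatime; a sketch with placeholder devices and paths:

    mount -o remount,noatime /                                   # root volume holding /var/lib/ceph/mds in this setup
    # or persistently in /etc/fstab, e.g. for an OSD filestore:
    # /dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  noatime  0 0
    iostat -k 5                                                  # re-check the write rates afterwards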
[17:39] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[17:41] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Ping timeout: 480 seconds)
[17:44] * leseb (~leseb@mx00.stone-it.com) Quit (Remote host closed the connection)
[17:47] * aliguori_ (~anthony@32.97.110.59) Quit (Ping timeout: 480 seconds)
[17:49] * aliguori (~anthony@32.97.110.59) has joined #ceph
[17:51] * BManojlovic (~steki@91.195.39.5) Quit (Quit: I'm off, and you do whatever you want...)
[18:00] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[18:12] * gregaf1 (~Adium@2607:f298:a:607:4863:282d:5bfd:a93c) Quit (Quit: Leaving.)
[18:14] * gregaf (~Adium@2607:f298:a:607:1da5:670d:f747:5c40) has joined #ceph
[18:16] * l0nk (~alex@83.167.43.235) Quit (Quit: Leaving.)
[18:17] * loicd (~loic@lvs-gateway1.teclib.net) Quit (Ping timeout: 480 seconds)
[18:25] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:26] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) has joined #ceph
[18:35] * diegows (~diegows@190.188.190.11) Quit (Ping timeout: 480 seconds)
[18:44] * alram (~alram@38.122.20.226) has joined #ceph
[18:48] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[18:56] * gaveen (~gaveen@112.135.137.207) has joined #ceph
[19:03] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[19:05] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) Quit (Quit: Clap on! , Clap off! Clap@#&$NO CARRIER)
[19:15] * davidz1 (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[19:17] * themgt (~themgt@97-95-235-55.dhcp.sffl.va.charter.com) has joined #ceph
[19:24] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) has joined #ceph
[19:27] * LeaChim (~LeaChim@b0fac1c4.bb.sky.com) Quit (Ping timeout: 480 seconds)
[19:27] * rturk-away is now known as rturk
[19:29] * rturk is now known as rturk-away
[19:29] * rturk-away is now known as rturk
[19:32] * doubleg (~doubleg@69.167.130.11) Quit (Quit: Lost terminal)
[19:34] * joao (~JL@89-181-154-116.net.novis.pt) Quit (Remote host closed the connection)
[19:36] * LeaChim (~LeaChim@b01bd511.bb.sky.com) has joined #ceph
[19:40] * vata (~vata@2607:fad8:4:6:5473:1c84:4a88:dd00) has joined #ceph
[19:43] * nhorman (~nhorman@nat-pool-rdu.redhat.com) Quit (Quit: Leaving)
[19:45] * fghaas (~florian@91-119-74-57.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[19:51] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:05] <infernix> where does radosgw store the user credentials?
[20:06] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[20:06] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:11] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[20:11] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[20:12] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[20:14] * diegows (~diegows@200.16.99.223) has joined #ceph
[20:20] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[20:22] <wido> infernix: in one of the pools
[20:22] <wido> infernix: I thought .rgw.users
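A hedged way to confirm that on a given cluster (pool names vary by radosgw version, so the grep pattern is a guess; radosgw-admin is the supported way to look at a user's keys):

    rados lspools | grep -E 'rgw|users'        # list the radosgw-related pools
    radosgw-admin user info --uid=<user>       # show that user's access/secret keys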
[20:22] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[20:23] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[20:26] * ScOut3R (~scout3r@5400E7C3.dsl.pool.telekom.hu) has joined #ceph
[20:29] * gaveen (~gaveen@112.135.137.207) Quit (Ping timeout: 480 seconds)
[20:31] * diegows (~diegows@200.16.99.223) Quit (Ping timeout: 480 seconds)
[20:37] * gaveen (~gaveen@112.135.133.46) has joined #ceph
[20:44] * diegows (~diegows@host38.181-1-99.telecom.net.ar) has joined #ceph
[20:57] * eschnou (~eschnou@65.72-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[20:57] <wer> Hey, if I add more radosgw instances will it help narrow the large performance gap between rados bench and rest-bench? I.e. I have 4 radosgws for 4 nodes and write performance is close to half when using rest-bench vs rados bench. Also, I cannot figure out how to do reads with rados bench as it always begins cleaning up after itself.
[20:58] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[21:00] <wer> we have equal sessions to each of the radosgws using a proxy, so distribution is good. I am considering adding more but wanted to hear people's experiences.
[21:01] <dmick> wer: --no-cleanup to the write benchmark
[21:01] <dmick> then later when you're done you can use rados cleanup
[21:01] <wer> dmick: does that work for rados?
[21:01] <dmick> that was the answer for rados bench
[21:01] * nwl (~levine@atticus.yoyo.org) Quit (Quit: leaving)
[21:01] <wer> ok ty.
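Putting dmick's answer together, a sketch of a write-then-read run with rados bench, assuming a test pool named bench (cleanup syntax may differ slightly between versions):

    rados bench -p bench 60 write --no-cleanup   # leave the benchmark objects in place
    rados bench -p bench 60 seq                  # sequential reads against those objects
    rados -p bench cleanup                       # remove the benchmark objects afterwards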
[21:02] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[21:02] * nwl (~levine@atticus.yoyo.org) Quit ()
[21:04] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[21:05] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[21:05] <wer> dmick: it seems that running multiple instances of rados bench also yields more throughput on writes... so we run 4 and use different pools to try to simulate more traffic... since the seq cannot be managed. Is this a terrible methodology?
[21:08] * nwl (~levine@atticus.yoyo.org) Quit ()
[21:10] * themgt (~themgt@97-95-235-55.dhcp.sffl.va.charter.com) Quit (Quit: themgt)
[21:11] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[21:13] * eschnou (~eschnou@65.72-201-80.adsl-dyn.isp.belgacom.be) Quit (Quit: Leaving)
[21:13] <sjustlaptop> wer: seq?
[21:13] <wer> writes
[21:13] <sjustlaptop> oh, the tag on the rados bench objects?
[21:13] <wer> oh yeah
[21:13] <sjustlaptop> I think it's a pid
[21:14] <sjustlaptop> but yeah, that is not dissimilar from what we have done
[21:14] <wer> oh really? so it shouldn't collide?
[21:14] <sjustlaptop> well, not if they are on the same machine, (I think probably)
[21:14] <sjustlaptop> but using separate pools is also perfectly fine
[21:15] <wer> ok. ty. And has it been your experience that more radosgw's will help close the performance gap?
[21:16] <sjustlaptop> oh, not sure about radosgw performance exactly, but it would not surprise me at all if more radosgw's helped
[21:16] <sjustlaptop> *radosgws
[21:17] <wer> ok. Well I am going from 4 to 7. 1:1 on the nodes and 3 extra radosgw's.. so I will comment.
[21:17] * fghaas (~florian@91-119-74-57.dynamic.xdsl-line.inode.at) has joined #ceph
[21:24] * ScOut3R (~scout3r@5400E7C3.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[21:25] <wer> does anyone have any insights on what causes the variation between rados and radosgw? Or where to tune first?
[21:30] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) has joined #ceph
[21:37] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[21:38] * gregorg (~Greg@78.155.152.6) has joined #ceph
[21:40] <dmick> wer: other than the obvious, that there's a *lot* more overhead in radosgw requests, I don't have much to offer
[21:40] * eschnou (~eschnou@65.72-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[21:41] <wer> k. I figured :P
[21:46] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[21:47] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[21:54] * miroslav (~miroslav@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[21:56] * miroslav (~miroslav@c-98-234-186-68.hsd1.ca.comcast.net) Quit ()
[21:56] * leakybrain (~steveb@pool-72-66-65-227.washdc.fios.verizon.net) has joined #ceph
[21:59] * Jasson (~Adium@bowser.gs.washington.edu) has joined #ceph
[22:04] * eschnou (~eschnou@65.72-201-80.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[22:11] <Jasson> Hey gang, can I ask for a little help as a newcomer to Ceph with a configuration question?
[22:13] <wer> Jasson: just ask the actual question :)
[22:14] <Jasson> I've got 2 servers set up hosting OSDs and am hoping to figure out how to replace some of our gluster systems with Ceph on the new servers. In my little test environment I have an rbd image created that I can mount from a client machine using ceph-fuse, but I'm hoping to figure out how to connect to additional rbd images. As it stands I am only able to get ceph-fuse to connect to the first image created.
[22:15] <phantomcircuit> Jasson, why are you using ceph-fuse instead of rbd in kernel?
[22:15] <gregaf> ceph-fuse is for the CephFS POSIX filesystem; RBD is a network block device
[22:15] <Jasson> Server environment is RHEL6, and at the moment the rbd module is not part of the RedHat kernel. I have not yet tried to build a custom module for it.
[22:15] <gregaf> they don't work together
[22:16] <phantomcircuit> gregaf, oh is that right? i should shut up then
[22:16] <phantomcircuit> cephfs is still unstable right?
[22:16] <gregaf> I'm saying that they don't have any relationship to each other, and yes, CephFS is not yet production-ready
[22:18] <Jasson> ah, ok, then I guess any recommendations for how to create discrete shares that can then be mounted across a larger network of client machines (all running RHEL6)? As it stands we have gluster running, sharing out shares for specific tasks such as the backup server's data store, and some of the research projects each get their own share mounted on the data processing servers. But the geo-replication part of gluster has been a little problematic recently
[22:18] <gregaf> Jasson: if you're just trying it out for the first time, I'd recommend getting some VMs set up then which have the kernel, or using the QEMU/KVM userspace integration
[22:19] <gregaf> probably CephFS will be what you want eventually; RBD is just a block device so it can be mounted wherever but you stick a regular local FS on top so it's not friendly to multiple concurrent users
[22:19] <Jasson> I was hoping to get this running on the storage servers without having to go through a VM for performance reasons.
[22:19] <gregaf> I believe that gluster volumes are just multiple different filesystems, right? there's not a straight analogue to those in CephFS right now that I'd recommend
[22:19] <Jasson> we're just using XFS-formatted LVMs shared out with gluster as it stands.
[22:20] <gregaf> and Ceph doesn't have any geo-replication at the moment, so if Gluster's instability with that is your reason for switching… ;)
[22:21] <Jasson> the geo replication is having some issues with some of the larger shares (trying to keep 16TB synced with a large amount changing and a 30 minute delay on the sync).
[22:21] * danieagle (~Daniel@177.133.173.210) has joined #ceph
[22:21] <Jasson> so we were looking at options for a more real-time sync across multiple storage servers for redundancy's sake.
[22:21] <gregaf> ah, not actually geo-replication? just multiple copies in a data center?
[22:22] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[22:22] <gregaf> (because of course Ceph is excellent at that)
[22:22] <Jasson> yes, we were using geo replication because we were spread out across multiple data centers but recently consolidated down to a single location.
[22:22] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[22:22] <Jasson> so the consolidation and increased traffic volume are throwing things off a little with the gluster geo-replication.
[22:22] <phantomcircuit> Jasson, can we take a step back and just have you say what problem you're trying to solve, it might be easier to give you a recommendation
[22:23] <Jasson> certainly
[22:23] <Jasson> Basically we need data redundancy for several large data shares. So we're looking for something to replace our current gluster geo-replication solution
[22:25] <Jasson> 2 large storage servers that we want data replicated across in case one goes down. And those then share out to a network of VMs that are doing a little bit of everything, including BackupPC, web servers, DB servers and data processing on some large data sets for gene sequencing.
[22:25] <phantomcircuit> i believe there is an rpm for RHEL6 in v0.52
[22:25] <phantomcircuit> http://ceph.com/docs/master/install/rpm/
[22:26] <phantomcircuit> gregaf, you have any idea if the rbd kernel module would be included in those rpms?
[22:26] <gregaf> phantomcircuit: no, that's only user-space stuff
[22:26] <gregaf> as far as I know
[22:26] <gregaf> Jasson: Gluster does have just regular replicated volumes which go in real time, rather than the delayed sync of geo-replication
[22:26] <Jasson> I've set up a copy of the rhel6 rpms in our local repo store and I've got v0.56 installed, but it does not appear to have the kernel module.
[22:27] <gregaf> much as I hate to suggest something other than Ceph, for now that sounds like the easiest solution to your problem?
[22:27] <gregaf> someday CephFS will do this, but I can't in good conscience recommend it for production use right now
[22:27] <phantomcircuit> gregaf, so to run the rbd kernel module on RHEL6 you'd have to build the module from source?
[22:27] <gregaf> or install a new kernel (whatever's involved in that), yeah
[22:27] <Jasson> yeah, I was working with some of those, but the replicated volumes were having issues when we attempted to expand them that we don't have a problem with on the geo-replication ones.
[22:27] <gregaf> ah
[22:28] <gregaf> well, maybe scuttlemonkey has heard of good ways to do this, or dmick
[22:28] <phantomcircuit> Jasson, so you only have 2 storage servers connected by a low latency link?
[22:29] <gregaf> but all the "production-ready" Ceph-based solutions I know of are stuff like re-exporting RBD volumes over Samba and managing the mountpoint via Pacemaker in case the other guy dies, or whatever
[22:29] <phantomcircuit> if you dont expect that setup to change anytime soon you might be better off drbd
[22:29] <Jasson> we have 4 at the moment on an isolated data network yes. But 2 of them are running our gluster geo-replication
[22:30] <Jasson> ok, so right now the production-level recommendation would be to basically share out over NFS from rbd-mounted images, since all our clients are rhel6.
[22:31] <phantomcircuit> Jasson, yeah pretty much
[22:31] <gregaf> yeah
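A sketch of that interim setup on a host whose kernel does have the rbd module, with placeholder pool, image and export names; one possible layout rather than a recommendation:

    rbd create backup --size 1048576 --pool rbd        # size is in MB here, so ~1 TB
    rbd map rbd/backup                                 # shows up as /dev/rbd0 (or under /dev/rbd/)
    mkfs.xfs /dev/rbd0
    mkdir -p /export/backup && mount /dev/rbd0 /export/backup
    echo '/export/backup 10.0.0.0/24(rw,no_root_squash)' >> /etc/exports
    exportfs -ra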
[22:31] <Jasson> so back to needing to get the kernel module working to mount rbd, instead of looking at ceph-fuse.
[22:31] <gregaf> yeah
[22:32] <Jasson> am I overlooking an rpm for that somewhere? or is that something I'm going to have to build from source?
[22:32] <gregaf> sorry; it's pretty new stuff and it takes a while for Red Hat to bring stuff in to their kernel ;)
[22:32] <gregaf> I think you'd basically have to build a fresh non-RHEL kernel
[22:32] <Jasson> yeah, no worries, I'm pretty excited about the whole concept of Ceph, particularly now that I've spent a couple of days playing with it in my test environment.
[22:32] <lurbs> Jasson: I hacked up a bunch of VMs, backed by RBD, running Gluster. That worked too, even if it felt a bit wrong.
[22:32] <gregaf> their RHEL6 kernel is a mashup of about 3 years worth of kernels and RBD hasn't been backported to it
[22:33] <fghaas> Jasson: rbd was merged upstream in 2.6.37; on rhel6 you're stuck on 2.6.32 plus whatever RH backports, as gregaf says
[22:34] <Jasson> ok, now that's frustrating, it's so close!
[22:35] * joao (~JL@89-181-154-116.net.novis.pt) has joined #ceph
[22:35] * ChanServ sets mode +o joao
[22:35] <iggy> and since redhat leans more toward gluster, I wouldn't expect them to be in a rush to backport rbd/ceph
[22:35] <phantomcircuit> well not really they're using such an ancient version specifically because there was a burst of new features being added
[22:36] <Jasson> yeah, our other problem is we're running a gluster setup that was in place before redhat bought them, and it's not compatible with the current version of gluster from redhat. So we can't mix and match.
[22:36] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[22:36] <gregaf> there are…certain groups…which are using custom-built kernels under the RHEL userspace and it seems to work fine, if that's good enough *shrug*
[22:36] <gregaf> eek, that sounds truly unpleasant
[22:36] <gregaf> (Ceph should always be forward-compatible :p)
[22:37] <phantomcircuit> gregaf, what are the odds that building just the module separately would work
[22:37] <phantomcircuit> im sort of thinking not great
[22:37] <Jasson> you can't cross-connect a 3.2.5 gluster client to a new 3.3 server, and we can't take down the 3.2.5 servers until we can get something else running and playing happily alongside them for a little while to shift data over to rebuild those 2 servers.
[22:37] <gregaf> pretty low — a lot of those kernel interfaces changed
[22:37] <phantomcircuit> Jasson, well clearly you should hire a redhat consultant to figure it out for you
[22:37] * phantomcircuit hides
[22:37] <gregaf> I suspect that the block device is stable enough that a backport wouldn't be toooo difficult (versus the CephFS client, which would suck), but I don't spend much time kernel-side
[22:37] <Jasson> lol
[22:38] <Jasson> I'll pass that recommendation on as something to add to the next grant we submit.
[22:38] <gregaf> elder has done backports to 3.4 or something and that went fine, anyway
[22:38] <phantomcircuit> a lot of stuff from redhat seems needlessly contrived
[22:39] <gregaf> Jasson: there is also a *totally experimental* rbd-fuse that dmick can tell you more about, and might be stable enough that it's better than mucking about with your kernel
[22:39] <phantomcircuit> libvirtd for example, it's basically a networked script that calls other programs except they wrote it in c and it's a huge colossal nuisance
[22:39] <phantomcircuit> anyways off topic
[22:39] <Jasson> now that would be very interesting to me (rbd-fuse)
[22:39] * fghaas (~florian@91-119-74-57.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[22:40] <Jasson> I saw the rpm for that I think on the ceph site, I'll grab that and give that a try.
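For reference, rbd-fuse exposes each image in a pool as a file under a mountpoint; a hedged sketch, assuming a pool named rbd and an image named backup (and noting again that the tool was labelled experimental at this point):

    rbd-fuse -p rbd /mnt/rbd-fuse                      # images in the pool appear as files
    mount -o loop /mnt/rbd-fuse/backup /mnt/backup     # a filesystem inside an image can, in principle, be loop-mounted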
[22:40] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:41] <themgt> phantomcircuit: agreed on libvirt. so much coming out of the linux distro vendors is just … not the right direction at all
[22:42] <phantomcircuit> themgt, i actually recently rewrote my entire vps hosting setup specifically so i can replace libvirtd as soon as i get around to it
[22:42] <phantomcircuit> and it actually doesn't look like it's going to be very hard to do ...
[22:42] <josef> sagewk: ok finally got packages building for rawhide and f18
[22:43] <themgt> heh, yeah, I still use it but try to do so as little as possible. it really solves the wrong problem, and acts like a big annoying "platform" to run VMs on. it's like too many chefs in the kitchen with the question of how to manage a VM
[22:46] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[22:49] <phantomcircuit> themgt, a lot of the problems it has can be worked around
[22:49] <phantomcircuit> for example there is often a huge lag when starting a qemu-kvm domain which can be largely avoided by setting up the vnet interface manually
[22:50] <phantomcircuit> of course at some point i stopped and asked what libvirtd was actually doing and came up with a very short list...
[22:50] <infernix> there is a repo that maintains kernel backports for rhel 6
[22:50] <infernix> they have kernels with rbd
[22:50] <infernix> well actually centos
[22:51] <infernix> http://elrepo.org
[22:51] <infernix> elrepo-kernel
[22:52] * carson (~carson@2604:ba00:2:1:fd02:5699:c1b2:2a29) has joined #ceph
[22:52] <Jasson> oh excellent, thanks infernix.
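A hedged sketch of pulling a newer kernel from elrepo on RHEL6/CentOS 6 (the release RPM version is a placeholder; package naming follows elrepo's conventions):

    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    rpm -Uvh http://www.elrepo.org/elrepo-release-6-X.el6.elrepo.noarch.rpm   # substitute the current release RPM
    yum --enablerepo=elrepo-kernel install kernel-ml                          # mainline kernel, which carries rbd.ko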
[22:53] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[22:54] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[22:58] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:59] * mauilion (~dcooley@crawford.dreamhost.com) Quit (Quit: Lost terminal)
[23:04] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[23:10] * mjevans (~mje@209.141.34.79) has joined #ceph
[23:11] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[23:12] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[23:12] <mjevans> When moving from the 5-min quick start to an actual (if minimal) cluster, is it sufficient to copy the keyring from one system to all the others before starting, or is there a better guide for managing 'nodes'?
[23:15] * diegows (~diegows@host38.181-1-99.telecom.net.ar) Quit (Ping timeout: 480 seconds)
[23:17] * buck (~buck@bender.soe.ucsc.edu) has left #ceph
[23:21] <gregaf> each daemon needs to be able to contact the monitors and have a keyring which they're included in — the one you used in the quickstart includes the admin keyring which gives you permission to do a lot of that maintenance, but you'll want to check the docs in the "adding an [OSD|MDS|Monitor]" sections for how to go about it
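The "adding an OSD" flow gregaf points to boils down to roughly the following on a sysvinit-style install, assuming ceph.conf and a usable keyring are already on the new host (the id and paths are illustrative):

    ceph osd create                       # the monitor returns the next free id, e.g. 2
    mkdir -p /var/lib/ceph/osd/ceph-2
    ceph-osd -i 2 --mkfs --mkkey          # initialize the data dir and generate a key
    ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-2/keyring
    service ceph start osd.2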
[23:23] <carson> Does anyone know how many concurrent connections radosgw supports? I've not been able to push it past about 1020
[23:24] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:25] * wschulze (~wschulze@cpe-98-14-182-190.nyc.res.rr.com) has joined #ceph
[23:27] <iggy> maybe hitting the open file descriptor limit?
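If it is the descriptor limit, that is easy to check and raise for the radosgw (and web server/FastCGI) processes; a sketch, with the limit value purely as an example:

    grep 'open files' /proc/$(pidof radosgw)/limits    # current limit for the running daemon
    ulimit -n 32768                                    # raise it in the shell that launches radosgw
    # or persistently in /etc/security/limits.conf for the relevant user:
    #   apache  soft  nofile  32768
    #   apache  hard  nofile  32768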
[23:29] <mjevans> Thanks gregaf I forgot that the TOC doesn't fully expand the tree.
[23:31] <mjevans> Based on some error messages I'm getting, I'm going to make a guess and hope it's incorrect: do OSD ids need to be sequential? (That is, can I have sparse numbering, such as giving each server a range of a hundred?)
[23:32] <gregaf> sorry, they basically need to be sequential
[23:32] <gregaf> there are a few ways around it but real sparse numbering is not a good plan
[23:33] <gregaf> at some point we will finish splitting the ID and the name apart so you can call them what you like, but today is not that day
[23:33] <mjevans> well if they have to be in order I suppose I'll just ignore assigning any rational value to it.
[23:33] <gregaf> yeah
[23:33] * wschulze1 (~wschulze@cpe-98-14-182-190.nyc.res.rr.com) has joined #ceph
[23:33] * wschulze (~wschulze@cpe-98-14-182-190.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[23:33] <mjevans> At that point hopefully you just have the monitors assign the OSD it's id for us.
[23:34] <mjevans> its id
[23:35] <gregaf> yep, the monitor returns the id from the "ceph osd create" command
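So in provisioning scripts the id is captured rather than chosen; a minimal sketch:

    id=$(ceph osd create)                  # the monitor allocates the next sequential id
    mkdir -p /var/lib/ceph/osd/ceph-$id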
[23:36] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[23:39] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[23:46] * ron-slc_ (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[23:49] <sstan> does anyone know how to check if a computer implements "802.1aq Shortest Path Bridging or SPB"
[23:49] <sstan> I'd like to put multiple ethernet adapters on each OSD host, and make a loop for the internal OSD network
[23:50] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) has joined #ceph
[23:52] * dosaboy (~user1@host86-164-227-220.range86-164.btcentralplus.com) Quit (Remote host closed the connection)
[23:53] * wschulze1 (~wschulze@cpe-98-14-182-190.nyc.res.rr.com) Quit (Read error: No route to host)
[23:57] <mjevans> sstan: I think aside from tagging public/private address ranges and manually specifying which address to listen on per osd, everything else is handled by the underlying OS.
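On the Ceph side, what mjevans describes maps onto the public/cluster network options in ceph.conf; a hedged sketch with placeholder subnets (the bonding/SPB layer itself is left entirely to the OS and switches):

    # merge into the existing [global] section rather than appending a duplicate
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [global]
        public network  = 10.0.1.0/24   ; client-facing traffic
        cluster network = 10.0.2.0/24   ; OSD replication and heartbeat traffic
    EOF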
[23:58] <scalability-junk> how much ram and cpu speed is recommended for running 2 osds each with one disk?
[23:58] <scalability-junk> perhaps running monitor and some gateway on it too.
[23:59] <scalability-junk> is it generally a bad idea to use the same disks for the OS too?
[23:59] <sstan> ^ yes

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.