#ceph IRC Log


IRC Log for 2013-01-25

Timestamps are in GMT/BST.

[0:09] * mattbenjamin (~matt@aa2.linuxbox.com) Quit (Quit: Leaving.)
[0:13] * aliguori (~anthony@32.97.110.59) Quit (Quit: Ex-Chat)
[0:13] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) has joined #ceph
[0:14] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[0:17] * nmartin (~nmartin@adsl-98-90-198-125.mob.bellsouth.net) has joined #ceph
[0:23] * danieagle (~Daniel@177.133.174.142) Quit (Ping timeout: 480 seconds)
[0:23] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) Quit (Quit: Zzzzzz)
[0:24] * houkouonchi-work (~linux@12.248.40.138) Quit (Ping timeout: 480 seconds)
[0:26] * brambles (lechuck@s0.barwen.ch) Quit (Remote host closed the connection)
[0:26] * brambles (lechuck@s0.barwen.ch) has joined #ceph
[0:26] * nmartin (~nmartin@adsl-98-90-198-125.mob.bellsouth.net) has left #ceph
[0:28] * rturk-away is now known as rturk
[0:31] * danieagle (~Daniel@186.214.76.77) has joined #ceph
[0:32] * sander (~chatzilla@c-174-62-162-253.hsd1.ct.comcast.net) Quit (Ping timeout: 480 seconds)
[0:32] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[0:34] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[0:43] * MrNPP (~mr.npp@216.152.240.205) has joined #ceph
[0:44] * dosaboy (~gizmo@12.231.120.253) Quit (Quit: Leaving.)
[0:45] * leseb_ (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[0:51] * brambles (lechuck@s0.barwen.ch) Quit (Remote host closed the connection)
[0:51] * brambles (lechuck@s0.barwen.ch) has joined #ceph
[0:51] * PerlStalker (~PerlStalk@72.166.192.70) Quit (Quit: ...)
[1:00] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[1:03] * dosaboy (~gizmo@12.231.120.253) Quit ()
[1:04] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[1:12] * noob2 (~noob2@135ext-130-156-135-157.rider.edu) has joined #ceph
[1:12] * noob2 (~noob2@135ext-130-156-135-157.rider.edu) Quit ()
[1:13] * mattbenjamin (~matt@adsl-75-45-228-196.dsl.sfldmi.sbcglobal.net) has joined #ceph
[1:14] * danieagle (~Daniel@186.214.76.77) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[1:15] * danieagle (~Daniel@186.214.76.77) has joined #ceph
[1:15] * danieagle (~Daniel@186.214.76.77) Quit ()
[1:29] <ShaunR> where can i get the librbd source?
[1:29] * calebamiles1 (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Read error: No route to host)
[1:30] <joshd> ShaunR: https://github.com/ceph/ceph/tree/master/src/librbd/
[1:31] <ShaunR> i'm trying to enable librbd in qemu
[1:31] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[1:31] <ShaunR> I dont see a make file so i have a feeling this isnt what i want?
[1:32] <jmlowe> ShaunR: which distribution are you using?
[1:32] <ShaunR> centos 6
[1:32] <iggy> I'd say get the latest tarball (0.56.1?) and go from there
[1:32] <iggy> it'll have the Makefile's generated
[1:33] <ShaunR> iggy: ya was just checking that out.
[1:33] <ShaunR> i didnt want to build/install all of ceph just on this host though, seems like a waste to do since ceph wont be running on it
[1:33] <iggy> you should be able to do a make librbd
[1:33] <iggy> instead of building everything
[1:35] <iggy> assuming I'm reading the Makefile.am right... which I'm very likely not
[1:36] <BillK> getting osd.3 [ERR] repair 3.1 188da1b1/rb.0.1862.6b8b4567.0000000006a7/head//3 on disk size (4128768) does not match object info size (4194304)
[1:36] <BillK> osd repair doesnt fix it, is there another way?
[1:37] <jmlowe> BillK: see http://tracker.newdream.net/issues/3810
[1:37] <BillK> tkx
[1:37] <jmlowe> BillK: DO NOT run ceph repair
[1:38] <ShaunR> hmm, tcmalloc where are you!!!!!
[1:38] * mattbenjamin (~matt@adsl-75-45-228-196.dsl.sfldmi.sbcglobal.net) has left #ceph
[1:39] <jmlowe> BillK: I'd be very interested in how similar your situation is to that bug I filed, the dev's haven't had this problem in their testing but I have had it with two different clusters
[1:40] <ShaunR> bah, ok what provides this... checking for malloc in -ltcmalloc... no
[1:40] <iggy> ShaunR: some google thing
[1:40] <iggy> google perf tools or something
[1:40] <ShaunR> been searching
[1:40] <jmlowe> do I remember correctly that there were some rpms built?
[1:40] <iggy> I think that's in epel
[1:40] <jmlowe> I know they were working on making viable spec files
[1:41] <BillK> jmlowe: too late :) , did that before posting here ... should I down the osd and recreate, or kill it so it doesnt move bad data into the rest of the osd's?
[1:41] <BillK> jmlowe: pastebin or add to the bug?
[1:42] <jmlowe> BillK: I'm not sure, I think sjust is the guy you want to ask, I don't have any important data so I am waiting to try a repair
[1:42] <jmlowe> BillK: in my cluster it looks like the primary is always smaller than the secondary, like the primary didn't complete a transaction but the secondary did
[1:43] * amichel (~amichel@salty.uits.arizona.edu) Quit ()
[1:43] <jmlowe> BillK: repair always assumes the primary is correct and just overwrites the secondaries
[1:43] <jmlowe> BillK: how similar is your situation, I think the current guess is that btrfs scrub is causing trouble?
[1:43] <ShaunR> iggy: i'm building gperftools
[1:44] <BillK> jmlowe: repair didnt change anything so it just keeps coming back
[1:44] <jmlowe> BillK: that is probably a good thing
[1:45] <BillK> jmlowe: yes, it was reported during osd scrub (which I presume is btrfs) - just learning about ceph so I am still hazy on many parts
[1:46] <BillK> jmlowe: how can I confirm which osd is primary ... I thought they were all equal in status?
[1:46] <jmlowe> BillK: ceph osd scrubs just compare the size and hashes of objects to make sure that the primaries and secondaries are synchronized
[1:46] <jmlowe> you will want to pipe 'ceph pg dump' to grep or less
[1:47] <jmlowe> ceph pg dump |grep incon
[1:47] <ShaunR> oh goodie, now gperftools wont build...
[1:47] <jmlowe> the bracketed numbers like [0,7] are primary and secondary in that order
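To make that concrete, here is a rough sketch of the grep and how to read the acting set; the output line below is illustrative only, not taken from this cluster:

    # list only the placement groups flagged inconsistent
    ceph pg dump | grep inconsistent
    # an output line looks roughly like (columns vary by version):
    #   3.1  ...  active+clean+inconsistent  ...  [3,0]  [3,0]
    # the bracketed set is [primary, secondary, ...]; here osd.3 is the primary and osd.0 the replica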
[1:48] <jmlowe> BillK: which ceph version are you running?
[1:48] <jmlowe> BillK: I believe there was a bug in 0.48 that would cause inconsistent pg's
[1:52] <ShaunR> if all i'm looking for is the librbd stuff do i really need tcmalloc?
[1:52] <BillK> jmlowe: 55.1 on gentoo, no inconsistencies show on the pg dump, and the pg's with error show clean+active (if I am reading it right)
[1:54] <jmlowe> BillK: hmm, try ceph osd deep-scrub <N> for all of your osd's and then check the status when it's done, it will tell you something like "pgmap v200035: 1920 pgs: 1880 active+clean, 40 active+clean+inconsistent; 1236 GB data, 4914 GB used, 50948 GB / 55889 GB avail"
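As a sketch, triggering that across a small cluster (OSD ids 0-5 are an assumption; substitute the real ones):

    # kick off a deep scrub on every OSD, then watch the pgmap summary
    for n in 0 1 2 3 4 5; do ceph osd deep-scrub $n; done
    ceph -s   # once scrubbing finishes, look for "active+clean+inconsistent" counts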
[1:54] <iggy> ShaunR: try ./configure --without-tcmalloc
[1:54] <ShaunR> iggy: i am, but at the same time i want to make sure i'm not going to hinder performance by doing that.
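Putting the thread together, a rough sketch of the build ShaunR is attempting on CentOS 6, assuming the 0.56.1 tarball is already unpacked; the librbd-only target is iggy's guess from Makefile.am and is not verified here:

    cd ceph-0.56.1
    ./configure --without-tcmalloc   # skip the gperftools/tcmalloc dependency, as suggested above
    make librbd                      # attempt to build just librbd; the real target may be e.g. "make -C src librbd.la"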
[1:54] <jmlowe> also I think 0.56.1 is where you want to be if at all possible
[1:55] * Ryan_Lane (~Adium@216.38.130.164) Quit (Quit: Leaving.)
[1:55] <BillK> jmlowe: errors are on 3.1 and 3.2, both have [3,0], 5 pg's all on 256g partitions on the same hard disk
[1:55] <ShaunR> this build is a nightmare...
[1:55] <BillK> jmlowe: was going to that today ... friday is upgrade day :)
[1:55] <jmlowe> :)
[1:55] <iggy> ShaunR: I'm not really sure tbh, I think it's a pretty new dependency
[1:56] <iggy> ShaunR: yeah, I think most people go with ubuntu for a reason when trying to use ceph
[1:56] <ShaunR> this is a bare centos install, but i'm having to install all kinds of weird deps
[1:56] * Ryan_Lane (~Adium@216.38.130.164) has joined #ceph
[1:56] <BillK> jmlowe: gotta go, will come back and report on bug before upgrading.
[1:56] <iggy> because rhel6 predates ceph?
[1:56] <iggy> not really, but it's probably pretty close
[1:56] <ShaunR> just alot of packages i dont see all that often
[1:57] <ShaunR> well configure finally finished.
[1:57] <jmlowe> BillK: I'm not sure I can do much more for you, if your problem fits in with 3810 then go ahead and append, so far I think I'm the only person who has broken a cluster more than once with this
[1:57] <iggy> ceph isn't exactly your basic open source project
[1:57] <ShaunR> iggy: not seeing a librbd in the make file
[2:05] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[2:12] * LeaChim (~LeaChim@b0faf18a.bb.sky.com) Quit (Ping timeout: 480 seconds)
[2:15] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[2:36] * nwat (~Adium@soenat3.cse.ucsc.edu) Quit (Quit: Leaving.)
[2:37] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[2:37] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:44] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[2:44] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[2:47] * The_Bishop_ (~bishop@e179007201.adsl.alicedsl.de) has joined #ceph
[2:49] * sagelap1 (~sage@2607:f298:a:607:ccbf:6c78:41bd:da97) Quit (Ping timeout: 480 seconds)
[2:50] <via> is the ceph command supposed to have -i for both reading from a file and specifying id?
[2:50] * sagelap (~sage@mobile-166-137-179-169.mycingular.net) has joined #ceph
[2:54] * The_Bishop (~bishop@e179007201.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[2:55] * sagelap1 (~sage@mobile-166-137-179-093.mycingular.net) has joined #ceph
[2:58] * sagelap (~sage@mobile-166-137-179-169.mycingular.net) Quit (Ping timeout: 480 seconds)
[2:59] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:01] * jlogan (~Thunderbi@2600:c00:3010:1:ed7c:64e2:3954:4f7e) Quit (Ping timeout: 480 seconds)
[3:06] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[3:07] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[3:14] * Tamil (~tamil@38.122.20.226) Quit (Quit: Leaving.)
[3:22] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:23] * glowell (~glowell@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[3:23] * rturk is now known as rturk-away
[3:24] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[3:41] * sagelap1 (~sage@mobile-166-137-179-093.mycingular.net) Quit (Ping timeout: 480 seconds)
[3:43] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:51] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[3:58] * Ryan_Lane (~Adium@216.38.130.164) Quit (Quit: Leaving.)
[4:01] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[4:01] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[4:13] * The_Bishop__ (~bishop@f052096136.adsl.alicedsl.de) has joined #ceph
[4:14] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:14] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Read error: Operation timed out)
[4:19] * The_Bishop_ (~bishop@e179007201.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[4:21] * xiaoxi (~xiaoxiche@134.134.139.72) has joined #ceph
[4:22] <xiaoxi> hi, what's the option "filestore fsync flushes journal data" stand for?
[4:22] <xiaoxi> seems the ceph.com/docs description is a bit strange
[4:26] <joshd> I think it's meant to be true if your journal is on the same fs as your filestore
[4:26] <joshd> i.e. syncing the filestore's filesystem also persists the journal
[4:32] <xiaoxi> if (m_filestore_fsync_flushes_journal_data) {
[4:32] <xiaoxi> dout(15) << "sync_entry doing fsync on " << current_op_seq_fn << dendl;
[4:32] <xiaoxi> // make the file system's journal commit.
[4:32] <xiaoxi> // this works with ext3, but NOT ext4
[4:32] <xiaoxi> ::fsync(op_fd);
[4:32] <xiaoxi> } else {
[4:32] <xiaoxi> dout(15) << "sync_entry doing a full sync (syncfs(2) if possible)" << dendl;
[4:32] <xiaoxi> sync_filesystem(basedir_fd);
[4:32] <xiaoxi> }
[4:33] <xiaoxi> here is part of the code; the doc looks different from the code and its inline notes
[4:33] <xiaoxi> why, if filestore fsync flushes journal data is true, does the OSD sync only op_fd instead of basedir_fd?
[4:42] <phantomcircuit> pipe(0x53a12ea900 sd=55 pgs=4 cs=7 l=0).reader got old message 1209495 <= 1209783 0x539f328800 osd_sub_op(client.5507.0:3979592 2.2d b1898ad/rb.0.2c.0000000023e5/head//2 [] v 72'2911 snapset=0=[]:[] snapc=0=[]) v7, discarding
[4:42] <phantomcircuit> i see that in osd logs mixed with slow requests
[4:42] <phantomcircuit> the thing that makes me nervous is the discarding part at the end
[4:42] <phantomcircuit> does that mean writes are being discarded?
[4:43] <phantomcircuit> or i guess more generally messages
[4:47] * dmick (~dmick@2607:f298:a:607:1c0b:599b:36a0:41ac) has joined #ceph
[5:00] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:03] * Cube (~Cube@173-140-229-134.pools.spcsdns.net) has joined #ceph
[5:05] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:08] * Cube (~Cube@173-140-229-134.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[5:15] * Cube (~Cube@173-140-229-134.pools.spcsdns.net) has joined #ceph
[5:18] * Cube1 (~Cube@173-140-229-134.pools.spcsdns.net) has joined #ceph
[5:18] * Cube (~Cube@173-140-229-134.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[5:19] * Cube (~Cube@173-140-229-134.pools.spcsdns.net) has joined #ceph
[5:19] * Cube1 (~Cube@173-140-229-134.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[5:22] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[5:22] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:29] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:31] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[5:38] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:40] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[5:45] * Cube (~Cube@173-140-229-134.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[5:52] * yoshi (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[5:59] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[6:00] * loicd (~loic@magenta.dachary.org) has joined #ceph
[6:02] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[6:03] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[6:03] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[6:10] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[6:11] <sage> xiaoxi: this is meant to be used for ext3 in data=journal mode; fsyncing anything flushes the journal and every write that preceeded it. it doesn't much matter what fd you sync, i don't think. but.. nobody uses it anyway. we should probably just remove that code.
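For reference, the option under discussion would be set in ceph.conf roughly like this; a sketch only, since per sage it is only sensible on ext3 in data=journal mode and is effectively unused:

    [osd]
        # fsyncing a single file also commits the fs journal on ext3 data=journal,
        # so the filestore can fsync one fd instead of syncing the whole filesystem
        filestore fsync flushes journal data = true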
[6:16] * sagelap (~sage@76.89.177.113) has joined #ceph
[6:22] * dmick (~dmick@2607:f298:a:607:1c0b:599b:36a0:41ac) has left #ceph
[6:45] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:50] <xiaoxi> sage: Thanks a lot. It makes sense to me now. So http://ceph.com/docs/master/rados/configuration/filestore-config-ref/ seems misleading
[6:58] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[6:58] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[7:04] <xiaoxi> Is it necessary to have monitor data hosted on an SSD? The access volume doesn't seem that high. Why is this suggested by the official docs?
[7:22] <phantomcircuit> xiaoxi, iirc the time between certain monitor votes and the result going to disk stalls the entire network
[7:22] <phantomcircuit> it's something like that
[7:35] <xiaoxi> Does this happen quite often?
[7:43] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:43] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:46] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[7:46] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[7:52] * tnt (~tnt@91.176.10.129) has joined #ceph
[7:53] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[7:55] * Qten (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Read error: Connection reset by peer)
[7:55] * Qten (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[8:14] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:23] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[8:24] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[8:24] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[8:25] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[8:26] * gregorg_taf (~Greg@78.155.152.6) Quit ()
[8:36] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[8:41] * yoshi (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[8:44] * nz_monkey (~nz_monkey@222.47.255.123.static.snap.net.nz) Quit (Ping timeout: 480 seconds)
[8:49] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[8:53] * nz_monkey (~nz_monkey@222.47.255.123.static.snap.net.nz) has joined #ceph
[8:56] * xdeller (~xdeller@broadband-77-37-224-84.nationalcablenetworks.ru) Quit (Quit: Leaving)
[8:56] * loicd (~loic@90.84.146.203) has joined #ceph
[8:57] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[9:05] * yoshi (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[9:13] * ScOut3R (~ScOut3R@dsl51B61EED.pool.t-online.hu) has joined #ceph
[9:15] * low (~low@188.165.111.2) has joined #ceph
[9:24] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:31] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:36] * leseb (~leseb@mx00.stone-it.com) has joined #ceph
[9:45] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) Quit (Read error: Connection reset by peer)
[9:53] <absynth_47215> morning
[9:56] * loicd (~loic@90.84.146.203) Quit (Ping timeout: 480 seconds)
[10:03] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[10:08] * xiaoxi (~xiaoxiche@134.134.139.72) Quit (Ping timeout: 480 seconds)
[10:10] * loicd (~loic@90.84.146.203) has joined #ceph
[10:15] * wer (~wer@wer.youfarted.net) Quit (Ping timeout: 480 seconds)
[10:15] * Morg (d4438402@ircip2.mibbit.com) has joined #ceph
[10:20] * LeaChim (~LeaChim@b0faf18a.bb.sky.com) has joined #ceph
[10:25] * wer (~wer@wer.youfarted.net) has joined #ceph
[10:41] * goodbytes (~goodbytes@2a00:9080:f000:0:69e7:b27e:2d13:652d) Quit (Remote host closed the connection)
[10:52] * coredumb (~coredumb@xxx.coredumb.net) has joined #ceph
[10:52] <coredumb> Hello
[10:54] <coredumb> is a ceph filesystem over WAN something imaginable, or is it not advised and it should stay on a LAN ?
[10:59] * darkfaded votes for stay on lan
[11:00] * tnt (~tnt@91.176.10.129) Quit (Ping timeout: 480 seconds)
[11:02] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[11:09] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[11:16] * loicd (~loic@90.84.146.203) Quit (Ping timeout: 480 seconds)
[11:21] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[11:22] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[11:26] <Robe> coredumb: ceph is very sensitive to latency and packetloss
[11:26] <Robe> so for good performance you should have reliable network and low latencies
[11:28] <liiwi> just think how long your stat() should take :)
[11:31] <coredumb> Robe, liiwi yeah that's what i was thinking
[11:32] <darkfaded> the writes are synchronous so it would always block until the remote OSDs committed to their journal and RTTed that
[11:33] <darkfaded> it's just like if you had two fat Emc^2 and sync SRDF. you might have a 10gig link but you'll still see a few 10mb/s over a few 1000 miles :)
[11:34] * yoshi (~yoshi@p6124-ipngn1401marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:35] <liiwi> your options are replication on top of filesystems/containers or then change to something like xtreemfs
[11:37] * loicd (~loic@LPuteaux-156-16-100-112.w80-12.abo.wanadoo.fr) has joined #ceph
[11:40] <coredumb> darkfaded: it would be over 100mb/s but yeah could still be a problem ;)
[11:40] <coredumb> liiwi: yeah i'm thinking of ways, trying to see the advantages/drawbacks of all solutions :)
[11:41] <coredumb> how would xtreemfs help ?
[11:41] <liiwi> can't really comment on that without knowing your use case
[11:42] <liiwi> also, beware: https://groups.google.com/d/topic/xtreemfs/4dsh06jH6Tk/discussion
[11:45] <coredumb> liiwi: my use case is a dovecot cluster in different datacenters
[11:46] <liiwi> iow loads of maildirs?
[11:46] <absynth_47215> i am not sure that is a good idea...
[11:46] <coredumb> liiwi: not so much
[11:47] <Gugge-47527> do you have to unmap/map an rbd after resize, or can the size be updated in another way?
[11:47] <coredumb> absynth_47215: it's always a good idea not having all your eggs in the same basket ;)
[11:48] <coredumb> and my current datacenter has proven me right on this ^^
[11:48] <absynth_47215> well, get a better one :)
[11:48] <coredumb> yeah still "same basket" issue
[11:48] <coredumb> ^^
[11:49] <absynth_47215> if you want both sides of the cluster to be active at all times and loadbalance clients, i think your performance will suck severely
[11:50] <absynth_47215> if you want an active/passive setup, things might be different (i.e. have a backup cluster for redundancy)
[11:54] <coredumb> more a active/passive setup than active/active
[11:55] <coredumb> clearly, i'd do an active/active in the same DC
[12:01] <joao> hello all
[12:01] <absynth_47215> did you test ceph as a storage container for mails yet? like, at all? i am not sure how ceph performs with the typical mail server setup
[12:01] <absynth_47215> (i.e. lots and lots of small reads and writes, lots and lots of small files)
[12:01] <coredumb> not at all :D
[12:01] <absynth_47215> that would be an awesome thing to test
[12:02] <absynth_47215> just a small cluster with 10 OSDs and 3 mons or so
[12:02] <coredumb> that was a later question on my "to ask list"
[12:02] <absynth_47215> joao: do you know of any inktank customers or projects that host mail servers on ceph?
[12:02] <coredumb> how ceph behave with small files
[12:02] <joao> absynth_47215, don't know of anyone doing that
[12:02] <coredumb> i've tested glusterfs with small files and it's catastrophic
[12:02] <joao> although that came up during the ceph workshop last november
[12:02] <absynth_47215> http://ceph.com/community/ceph-bobtail-performance-io-scheduler-comparison/#4kbradoswrite
[12:03] <coredumb> also have you tried ceph on top of ZFS ?
[12:03] <absynth_47215> i have no idea how rados (i.e. object store) compares to rbd in terms of performance
[12:04] <joao> absynth_47215, I think there was some interest in working on a comparison of the two iirc
[12:04] <absynth_47215> usually, ceph would run on xfs, ext4 or btrfs
[12:04] <joao> some *internal* interest
[12:04] <absynth_47215> i don't know if anyone uses it with zfs
[12:04] <darkfaded> i'd mostly be interested in on-vxfs
[12:04] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:05] <darkfaded> but that'll only get interesting when rhel has a non-outdated kernel some day
[12:05] <darkfaded> (never)
[12:05] <absynth_47215> i.e. when hell freezes over.
[12:05] <joao> there were a couple of guys that brought that up (ceph on zfs), mainly with zfs-on-linux I believe, but have no idea where that ended up ;)
[12:06] <coredumb> i'm using zfs on linux, i should find some time to put ceph on top of that
[12:06] <darkfaded> coredumb: how many servers do you run in the one-basket dc?
[12:07] <coredumb> 2
[12:08] <coredumb> backups and secondary DNS in another one
[12:08] <darkfaded> ah, ok, thats somewhere on the scale where i'd also be. i've been shying away from anything distributed until i have more nodes
[12:08] <coredumb> are you darkfader on #cfengine ?
[12:08] <coredumb> i find it disturbing with the two rooms side by side
[12:08] <darkfaded> my lab can be bigger and more fun, but for colo i can't have 4-10 nodes at start
[12:08] <darkfaded> hehe
[12:09] <darkfaded> ok i'll be silent over there.
[12:09] <coredumb> so that's you :D
[12:09] <darkfaded> yes :>
[12:09] <darkfaded> talking in many channels is one of the moments where being a bit attention-challenged is really helpful
[12:10] <coredumb> i'd be happy to test on virtual machines, but results are not usually that great in such a confined setup
[12:10] <darkfaded> as long as you don't have multiple OSDs on the same physical disk
[12:11] <darkfaded> and i've even done that in the lab at some point, as long as it's not too heavily loaded it works out
[12:11] <coredumb> with only two host nodes i don't see how i could do that
[12:12] <darkfaded> ideally you'd want 4 boxes with 2 disks and 2 nics each
[12:12] <coredumb> yeah
[12:13] <darkfaded> anyway, for tiny 2-node setup i've decided to just put one disk "in the other node" and do async drbd to it
[12:13] <darkfaded> sloppy mirror, not for failover, just as DR
[12:14] <Kdecherf> does anyone want a test of ceph on top of zfs?
[12:14] <coredumb> Kdecherf: yes
[12:17] <Kdecherf> coredumb: ok, I will make one soon
[12:18] <coredumb> oh cool
[12:19] <coredumb> i'll hang around here then :)
[12:19] * ScOut3R (~ScOut3R@dsl51B61EED.pool.t-online.hu) Quit (Remote host closed the connection)
[12:38] * loicd (~loic@LPuteaux-156-16-100-112.w80-12.abo.wanadoo.fr) Quit (Quit: Leaving.)
[12:39] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[12:42] <coredumb> Kdecherf: which RC of ZOL are you using ?
[12:44] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[12:45] <Kdecherf> coredumb: we are not using zfs/zol atm, but it is planned for our next storage cluster ;)
[12:45] <coredumb> hehe ok
[12:46] <coredumb> stay on RC12 then if you don't plan on going master
[12:46] <coredumb> or wait for rc14
[12:46] <coredumb> nasty snapshot directory bug on rc13
[12:48] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[12:49] * KindTwo (KindOne@50.96.231.186) has joined #ceph
[12:49] * KindOne (KindOne@h180.210.89.75.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[12:49] * KindTwo is now known as KindOne
[12:50] <Kdecherf> coredumb: thx for the tip
[12:54] <coredumb> rc12 is rock stable
[12:54] <coredumb> at least here ;)
[13:30] * loicd (~loic@soleillescowork-4p-55-10.cnt.nerim.net) has joined #ceph
[13:43] * lollo (lollo@l4m3r5.org) has joined #ceph
[13:46] <lollo> hi, i have problem with ceph template for zabbix
[13:47] <lollo> template is correct?
[13:47] <lollo> source http://zooi.widodh.nl/ceph/zabbix_ceph_templates.xml
[13:47] * sleinen1 (~Adium@2001:620:0:2d:418e:7672:d81b:41dc) has joined #ceph
[13:48] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) has joined #ceph
[13:49] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) Quit ()
[13:52] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[13:55] * sleinen (~Adium@2001:620:0:46:55bd:83a4:cb2e:ee1b) Quit (Ping timeout: 480 seconds)
[13:55] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[13:56] * sleinen1 (~Adium@2001:620:0:2d:418e:7672:d81b:41dc) Quit (Ping timeout: 480 seconds)
[14:00] * Norman (53a31f10@ircip2.mibbit.com) has joined #ceph
[14:00] <Norman> Hi guys! I have a question, Im trying out Ceph-fuse and it works fast. Now I'm trying to run OpenVZ containers on it and I have a lot of errors saying: Value too large for defined data type
[14:01] <Norman> I'm running v0.56.1, what am I doing wrong here?
[14:02] * sleinen (~Adium@2001:620:0:26:3c45:cde1:5c2a:9270) has joined #ceph
[14:03] <absynth_47215> you are getting those errors where...?
[14:06] <Norman> well when creating containers or running updates or even entering containers from the host
[14:16] <absynth_47215> what makes you think this is a ceph issue instead of an OpenVZ issue?
[14:17] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[14:17] * scalability-junk (~stp@188-193-201-35-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[14:25] <Norman> as it is working on all local drives
[14:29] <absynth_47215> did you google the error? points to a fuse issue
[14:33] <Norman> is it possible with ceph-fuse to set mount variables? it now mounts as (rw,nosuid,nodev,default_permissions,allow_other) but maybe it needs the suid and dev ?
[14:36] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[14:48] * scalability-junk (~stp@89.204.137.93) has joined #ceph
[14:49] <absynth_47215> never used ceph-fuse
[14:49] * WOOHOO (~OOHOOW@46.166.169.30) has joined #ceph
[14:49] <WOOHOO> http://www.iBooter.us - Most Cheapest & Strongest Stresser! [Auto-Buy]
[14:49] * WOOHOO (~OOHOOW@46.166.169.30) has left #ceph
[14:50] * joao sets mode +b *!~OOHOOW@46.166.169.30
[14:51] <absynth_47215> i wonder what a "strong stresser" is
[14:52] <wschulze> absynth_47215: You need to resist the urge to click on the URL
[14:54] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[14:54] * scalability-junk (~stp@89.204.137.93) Quit (Quit: Leaving)
[14:56] <absynth_47215> yeah, i know. but it's so tempting...!
[14:57] <wschulze> I know - that's why I was wondering if you had done it already. (and how many others!) :-)
[15:00] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[15:07] * aliguori (~anthony@cpe-70-112-157-151.austin.res.rr.com) has joined #ceph
[15:08] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[15:10] * eternaleye (~eternaley@tchaikovsky.exherbo.org) Quit (resistance.oftc.net osmotic.oftc.net)
[15:10] * sleinen (~Adium@2001:620:0:26:3c45:cde1:5c2a:9270) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * BManojlovic (~steki@91.195.39.5) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * nz_monkey (~nz_monkey@222.47.255.123.static.snap.net.nz) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * al (quassel@niel.cx) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * joao (~JL@89.181.149.199) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * stass (stas@ssh.deglitch.com) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * mistur (~yoann@kewl.mistur.org) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * Meths (~meths@2.27.72.227) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * Tribaal (uid3081@tooting.irccloud.com) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * sstan (~chatzilla@dmzgw2.cbnco.com) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * sbadia (~sbadia@aether.yasaw.net) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * chftosf (uid7988@id-7988.hillingdon.irccloud.com) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * lurbs (user@uber.geek.nz) Quit (resistance.oftc.net charm.oftc.net)
[15:10] * janisg (~troll@85.254.50.23) Quit (resistance.oftc.net charm.oftc.net)
[15:10] <janos> haha strong stresser
[15:10] <janos> no, i will not click
[15:10] <jmlowe> I like to look at spam links in lynx
[15:11] * eternaleye (~eternaley@tchaikovsky.exherbo.org) has joined #ceph
[15:12] * sleinen (~Adium@2001:620:0:26:3c45:cde1:5c2a:9270) has joined #ceph
[15:12] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[15:12] * nz_monkey (~nz_monkey@222.47.255.123.static.snap.net.nz) has joined #ceph
[15:12] * al (quassel@niel.cx) has joined #ceph
[15:12] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[15:12] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[15:12] * joao (~JL@89.181.149.199) has joined #ceph
[15:12] * stass (stas@ssh.deglitch.com) has joined #ceph
[15:12] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[15:12] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[15:12] * mistur (~yoann@kewl.mistur.org) has joined #ceph
[15:12] * Meths (~meths@2.27.72.227) has joined #ceph
[15:12] * Tribaal (uid3081@tooting.irccloud.com) has joined #ceph
[15:12] * sstan (~chatzilla@dmzgw2.cbnco.com) has joined #ceph
[15:12] * janisg (~troll@85.254.50.23) has joined #ceph
[15:12] * sbadia (~sbadia@aether.yasaw.net) has joined #ceph
[15:12] * chftosf (uid7988@id-7988.hillingdon.irccloud.com) has joined #ceph
[15:12] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[15:12] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[15:12] * lurbs (user@uber.geek.nz) has joined #ceph
[15:21] <absynth_47215> lawl
[15:21] <absynth_47215> ibooter.us is "ddos protected by cloudflare"
[15:21] * sagelap1 (~sage@mobile-166-137-178-153.mycingular.net) has joined #ceph
[15:26] * sagelap (~sage@76.89.177.113) Quit (Ping timeout: 480 seconds)
[15:26] * Morg (d4438402@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[15:27] <janos> haha
[15:37] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Read error: Operation timed out)
[15:44] <absynth_47215> hm, i wonder... is there an official dreamhost irc channel?
[15:47] <joao> on freenode, I think
[15:48] * sagelap1 (~sage@mobile-166-137-178-153.mycingular.net) Quit (Ping timeout: 480 seconds)
[15:54] <coredumb> absynth_47215: booter = DDOS platform
[15:55] <coredumb> or that's what they like to call it
[15:57] * BillK (~BillK@124-169-244-193.dyn.iinet.net.au) Quit (Quit: Leaving)
[15:59] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[16:07] * agh (~agh@www.nowhere-else.org) has joined #ceph
[16:08] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[16:14] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[16:14] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[16:15] * sleinen (~Adium@2001:620:0:26:3c45:cde1:5c2a:9270) Quit (Quit: Leaving.)
[16:15] * sleinen (~Adium@130.59.94.77) has joined #ceph
[16:22] <absynth_47215> coredumb: as in, they are selling ddos services?
[16:22] * absynth_47215 *does* click on that URL now.
[16:22] <coredumb> absynth_47215: yes
[16:23] <coredumb> it's usually pretty cheap
[16:23] <coredumb> but usually web booters aren't really _that_ powerful
[16:23] * sleinen (~Adium@130.59.94.77) Quit (Ping timeout: 480 seconds)
[16:23] <coredumb> i mean cheap booters with web interface
[16:23] <coredumb> :P
[16:25] <coredumb> so to make things legit they call that "stresser"
[16:30] * chftosf (uid7988@id-7988.hillingdon.irccloud.com) Quit (Ping timeout: 480 seconds)
[16:30] * chftosf (uid7988@hillingdon.irccloud.com) has joined #ceph
[16:32] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[16:32] * ScOut3R (~ScOut3R@catv-89-133-32-74.catv.broadband.hu) has joined #ceph
[16:32] <absynth_47215> smart move to promote this via irc
[16:32] <absynth_47215> catches the kiddies who want to do splitriding
[16:32] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[16:34] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) has joined #ceph
[16:37] * kylehutson (~kylehutso@dhcp231-11.user.cis.ksu.edu) has joined #ceph
[16:47] * sagelap (~sage@209.140.114.61) has joined #ceph
[16:54] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) has joined #ceph
[16:54] * low (~low@188.165.111.2) Quit (Quit: bbl)
[16:55] * sleinen1 (~Adium@2001:620:0:26:2d14:6d7d:266d:dbfe) has joined #ceph
[17:01] * sagelap (~sage@209.140.114.61) Quit (Read error: Connection reset by peer)
[17:01] <coredumb> :)
[17:02] <kylehutson> I've been banging my head against the radosgw for longer than I care to admit.
[17:02] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[17:02] <kylehutson> This is on an isolated network. I have two OSD hosts, named 'leviathan' and 'minotaur', and leviathan is hosting the radosgw.
[17:02] <kylehutson> Configs can be seen at http://pastebin.com/dVyiLT9D
[17:02] <kylehutson> When I run 'swift -V 1.0 -A http://leviathan/auth -U kylehutson:swift -K [key-redacted] post test' (even from leviathan itself), I get "Auth GET failed: http://leviathan:80/auth/ 500 Internal Server Error", and apache error log shows 6 instances of
[17:02] <kylehutson> [error] [client 127.0.0.1] (2)No such file or directory: FastCGI: failed to connect to server "/var/www/s3gw.fcgi": connect() failed
[17:02] <kylehutson> [error] [client 127.0.0.1] FastCGI: incomplete headers (0 bytes) received from server "/var/www/s3gw.fcgi"
[17:02] <kylehutson> Help?!
[17:05] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[17:07] * sleinen1 (~Adium@2001:620:0:26:2d14:6d7d:266d:dbfe) Quit (Quit: Leaving.)
[17:07] <iggy> kylehutson: the devs should be up and about pretty soon (west coast .us)
[17:08] <kylehutson> iggy: thx
[17:08] * xdeller (~xdeller@62.173.129.210) has joined #ceph
[17:09] <iggy> I'm trying to find the person who can most likely answer your question
[17:09] <absynth_47215> uh, that sounds like a pretty straight forward fastcgi configuration error
[17:10] <absynth_47215> what happens if you call /var/www/s3gw.fcgi manually?
[17:10] <absynth_47215> on leviathan?
[17:10] <absynth_47215> (as in: execute it on the command line and/or via leviathan:80/s3gw.fcgi
[17:11] <kylehutson> It returns nothing
[17:11] <absynth_47215> but it doesn't error out, either?
[17:11] <kylehutson> no
[17:12] <absynth_47215> can you exchange FastCgiExternalServer with FastCgiServer in the first occurrence?
[17:12] <kylehutson> On leviathan 'wget http://127.0.0.1/s3gw.fcgi' gives '500 Internal Server Error'
[17:12] <absynth_47215> and, err, does the socket exist?
[17:12] <kylehutson> Lemme give it a shot
[17:12] <absynth_47215> /tmp/radosgw.sock
[17:13] <absynth_47215> i am not too sure why it gets defined as an external server, but that might well be correct.
[17:13] <kylehutson> The socket should be /srv/ceph/rgw.sock, since that's what I specified, right?
[17:13] <absynth_47215> the socket name should be the same as the actual rgw socket, yeah. at least afaik
[17:13] <absynth_47215> i am taking informed guesses here, i didn't configure a radosgw in the last months
[17:14] <absynth_47215> maybe you can try keeping it on ...ExternalServer... but correcting the socket path
[17:15] <kylehutson> Trying that now
[17:15] <iggy> and the keyring is known good? I would think so if calling s3gw.fcgi doesn't error out, but you never know...
[17:17] <kylehutson> changed rgw.conf to "FastCgiServer /var/www/s3gw.fcgi -socket /srv/ceph/rgw.sock" and "FastCgiExternalServer /var/www/s3gw.fcgi -socket /srv/ceph/rgw.sock" - both resulted in 404 errors
[17:17] <absynth_47215> 404?
[17:18] <absynth_47215> is /srv/ceph/rgw.sock there?
[17:18] <kylehutson> Ah, apache error log now shows [crit] (98)Address already in use: FastCGI: can't create server "/var/www/s3gw.fcgi": bind() failed [/srv/ceph/rgw.sock]
[17:18] <kylehutson> Yes, it's there when there is a radosgw process running that (presumably) created it
[17:18] <absynth_47215> lemme check the documentation
[17:19] <absynth_47215> ok.
[17:19] <absynth_47215> hm
[17:19] <absynth_47215> http://ceph.com/docs/master/man/8/radosgw/
[17:19] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[17:19] <absynth_47215> according to this page, rgw socket path and the -socket option *should* be identical in the config
[17:20] <kylehutson> And they are now (don't know how I missed that for the last week+)
[17:21] <kylehutson> They are both pointing to /srv/ceph/rgw.sock
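For the record, the working combination implied here is that the Apache side and ceph.conf name the same socket; roughly as follows, where the paths are the ones from this discussion and the ceph.conf section name is an assumption:

    # Apache rgw vhost config
    FastCgiExternalServer /var/www/s3gw.fcgi -socket /srv/ceph/rgw.sock

    # ceph.conf, in whatever client section the gateway runs as
    [client.radosgw.gateway]
        rgw socket path = /srv/ceph/rgw.sock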
[17:21] <absynth_47215> you dont see anything in the ceph log, right?
[17:22] <absynth_47215> radosgw.log
[17:22] <kylehutson> I wasn't before, but I am now...
[17:23] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) has joined #ceph
[17:23] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[17:24] <kylehutson> wget was giving a 404, but the 'swift' command appears to be working. This at least gives me a place to jump off from.
[17:24] <kylehutson> Thanks absynth_47215!
[17:25] <kylehutson> This is what happens when you copy/paste/modify from two different configuration pages. :-(
[17:29] <absynth_47215> no prob
[17:30] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has left #ceph
[17:33] * sagelap (~sage@mobile-166-137-212-011.mycingular.net) has joined #ceph
[17:39] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:43] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:46] * vata (~vata@2607:fad8:4:6:d5a6:3e54:75a8:e21a) has joined #ceph
[17:50] * tnt (~tnt@91.176.10.129) has joined #ceph
[17:53] * loicd (~loic@soleillescowork-4p-55-10.cnt.nerim.net) Quit (Quit: Leaving.)
[18:00] * Norman (53a31f10@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[18:03] * sagelap (~sage@mobile-166-137-212-011.mycingular.net) Quit (Quit: Leaving.)
[18:03] * sagelap (~sage@166.137.212.11) has joined #ceph
[18:05] <jamespage> how well is cephx + rbd mapping going to work with the 3.5 kernel in Ubuntu 12.10? I'm seeing auth failure (libceph: no secret set (for auth_x protocol)) when trying to map
[18:06] <jamespage> trying to figure out whether I'm being dumb or its actually broken
[18:06] * leseb (~leseb@mx00.stone-it.com) Quit (Remote host closed the connection)
[18:08] <jamespage> meh - figured it out --user and --keyfile with just the key appears to work OK
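A sketch of the mapping jamespage describes; the pool/image names and file path are hypothetical, and the keyfile holds only the bare base64 key rather than a full keyring:

    # stash just the key for the client in a file (path is hypothetical)
    ceph auth get-key client.admin > /etc/ceph/admin.key
    # map the image, passing the user name and the bare key file
    rbd map rbd/myimage --user admin --keyfile /etc/ceph/admin.key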
[18:09] <topro> after taking out one of the six OSDs of my cluster and giving it some time to rebalance there are some PGs staying "active+remapped" with 0.004% degraded, any hints, please?
[18:11] * match (~mrichar1@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:16] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[18:29] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:31] * sagelap1 (~sage@ip-64-134-232-184.public.wayport.net) has joined #ceph
[18:38] * sagelap (~sage@166.137.212.11) Quit (Ping timeout: 480 seconds)
[18:41] * alram (~alram@38.122.20.226) has joined #ceph
[18:42] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Remote host closed the connection)
[18:44] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[18:51] * sander (~chatzilla@c-174-62-162-253.hsd1.ct.comcast.net) has joined #ceph
[18:53] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) has joined #ceph
[18:54] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[18:54] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[18:56] * ScOut3R (~ScOut3R@catv-89-133-32-74.catv.broadband.hu) Quit (Remote host closed the connection)
[18:57] * sagelap1 (~sage@ip-64-134-232-184.public.wayport.net) Quit (Ping timeout: 480 seconds)
[19:05] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[19:06] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:06] * leseb_ (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:06] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[19:06] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[19:12] * sjustlaptop (~sam@38.122.20.226) has joined #ceph
[19:13] * jluis (~JL@89.181.159.56) has joined #ceph
[19:13] * jlogan1 (~Thunderbi@72.5.59.176) has joined #ceph
[19:17] * joao (~JL@89.181.149.199) Quit (Ping timeout: 480 seconds)
[19:20] * dosaboy (~gizmo@12.231.120.253) Quit (Quit: Leaving.)
[19:25] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[19:31] * sjustlaptop (~sam@38.122.20.226) Quit (Ping timeout: 480 seconds)
[19:36] * Tamil (~tamil@38.122.20.226) has joined #ceph
[19:40] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[19:43] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:46] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[19:47] * loicd (~loic@magenta.dachary.org) Quit ()
[19:49] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:51] * loicd (~loic@magenta.dachary.org) Quit ()
[19:53] * xdeller (~xdeller@62.173.129.210) Quit (Quit: Leaving)
[19:54] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[19:56] <jksM> sagewk, better it seems... I restarted the crashed osd right away, and it has been running since without crashing.. and I'm back at HEALTH_OK again again ;-)
[20:04] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:09] <jmlowe> here is a question for the devs, if I delete a rbd with inconsistent objects what would happen?
[20:12] <joshd> jmlowe: nothing special, although you might have to do another scrub to remove the inconsistent flag from the pg
[20:12] <jmlowe> so if the data in the rbd was disposable I could clear an inconsistent pg safely that way?
[20:13] <joshd> yeah
[20:13] <jmlowe> ok, just checking before I do something dangerous
[20:13] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[20:14] <joshd> of course, if you don't know the root cause of the inconsistency, you might still have problems later
[20:14] <joshd> but if it was just btrfs scrub I guess you'll be fine
[20:14] <jmlowe> joshd: http://tracker.newdream.net/issues/3810
[20:15] <jmlowe> I'm thinking about copying the same data over to rbd and making sure I don't scrub btrfs
[20:15] <jmlowe> then repeat while running a scrub
[20:15] <sjust> jmlowe: that would be a good thing to test
[20:15] <joshd> running a ceph deep scrub in between? that sounds good
[20:16] <jmlowe> yeah
[20:16] <jmlowe> first off, I need some clever grepping to figure out which rbd I should work on
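One way to do that grepping, assuming format-1 images whose data objects are named rb.<prefix>.<chunk> like the rb.0.1862.6b8b4567.0000000006a7 object quoted earlier; the pool and image names here are made up:

    # each format-1 image's data objects share its block_name_prefix
    rbd ls mypool
    rbd info mypool/someimage | grep block_name_prefix
    # once the matching image is found and its data is disposable:
    rbd rm mypool/someimage
    ceph osd deep-scrub 3    # then re-scrub so the inconsistent flag clears (per joshd above)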
[20:17] * dosaboy (~gizmo@12.231.120.253) Quit (Quit: Leaving.)
[20:22] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[20:24] * noob2 (~noob2@pool-71-244-111-36.phlapa.fios.verizon.net) has joined #ceph
[20:25] <noob2> is there a limit to how many objects one can upload to their bucket with the s3 gateway?
[20:36] * Ryan_Lane (~Adium@216.38.130.161) has joined #ceph
[20:36] * dmick (~dmick@2607:f298:a:607:387f:a346:284c:2903) has joined #ceph
[20:39] * ShaunR (~ShaunR@staff.ndchost.com) Quit (Ping timeout: 480 seconds)
[20:42] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[20:42] * leseb_ (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[20:50] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:51] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[20:51] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[20:52] * snaff (~z@81-86-160-226.dsl.pipex.com) has joined #ceph
[21:06] <nhm> alright, finally got the performance testing client node setup. Now to get the bonded 10G network going.
[21:08] <noob2> nice :)
[21:08] <jmlowe> bonded 10G is that multiple 10G?
[21:08] * miroslav1 (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[21:08] <nhm> jmlowe: yep
[21:08] <nhm> jmlowe: now I'm kind of kicking myself for not going with Connect-X3 cards with a QSFP+ cable.
[21:09] <jmlowe> wow, how close to line rate are you getting?
[21:09] <nhm> jmlowe: no idea, I just installed ubuntu on the client and connected the SFP cables.
[21:10] <nhm> sorry, SFP+
[21:10] <jmlowe> I would think you would start maxing out the pci bus at around 16Gbs
[21:10] * The_Bishop__ (~bishop@f052096136.adsl.alicedsl.de) Quit (Read error: Operation timed out)
[21:11] <nhm> naw, PCIE can do way more than that. Think about how much data graphics cards can transfer.
[21:12] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[21:12] <nhm> On 16x PCIE 2.0 I remember getting like 6GB/s with CUDA.
[21:16] * dosaboy (~gizmo@12.231.120.253) has joined #ceph
[21:18] * rturk-away is now known as rturk
[21:18] * Cube (~Cube@12.248.40.138) has joined #ceph
[21:20] * jlogan1 (~Thunderbi@72.5.59.176) Quit (Ping timeout: 480 seconds)
[21:24] <jmlowe> hmm, wikipedia says 15.75 GB/s per 16 lane slot, I stand corrected
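A quick sanity check on those figures, using the standard published encodings:

    PCIe 2.0: 5 GT/s x (8/10)    ~ 500 MB/s per lane  -> x16 ~ 8 GB/s per direction
    PCIe 3.0: 8 GT/s x (128/130) ~ 985 MB/s per lane  -> x16 ~ 15.75 GB/s per direction
    bonded 2x10GbE = 20 Gb/s = 2.5 GB/s, well within even an x8 PCIe 2.0 slot (~4 GB/s)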
[21:26] * The_Bishop__ (~bishop@f052096136.adsl.alicedsl.de) has joined #ceph
[21:26] * miroslav1 (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[21:29] * jlogan1 (~Thunderbi@72.5.59.176) has joined #ceph
[21:30] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:31] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:38] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:48] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[21:50] * todin (tuxadero@kudu.in-berlin.de) Quit ()
[21:50] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[21:54] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) has joined #ceph
[21:56] * sleinen1 (~Adium@2001:620:0:25:bdd7:c5c7:9408:a991) has joined #ceph
[21:58] <janos> jmlowe: i do a rough 1GB/s per lane assignment
[21:58] <janos> gotta be careful - some low end raid cards use 4 lanes physically, but only 2 for bandwidth ;(
[21:59] <janos> i was looking for stuff for home cluster and ran into that
[22:00] <dmick> janos: that's horrible
[22:00] * xdeller (~xdeller@broadband-77-37-224-84.nationalcablenetworks.ru) has joined #ceph
[22:02] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:06] <jmlowe> I remember hp coming through a couple of years ago hawking a 10G card that only had 8 lanes claiming it would do line rate
[22:08] <dmick> footnote 1: for jumbo packets of all zero payload with TCP/compression offload :)
[22:10] * jskinner (~jskinner@69.170.148.179) Quit (Remote host closed the connection)
[22:13] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[22:13] <liiwi> well, jumbo mtu networks need to be limited to the networks where they need to be
[22:14] <liiwi> and then you need a network layer that can handle 9k mtu packets hitting it and split them into smaller mtu packets.
[22:14] <liiwi> .. think of the routers
[22:14] <dmick> liiwi: I hope it's clear that I was only joking
[22:16] * dosaboy (~gizmo@12.231.120.253) Quit (Quit: Leaving.)
[22:21] <liiwi> dmick: and I'm sipping whiskey at late night (hey, redbreast is smooooth..)
[22:28] <dmick> mm whiskey
[22:28] <noob2> agreed
[22:34] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:35] <liiwi> but, people, keep the jumbo networks separated (storage, hey) and talk to internets the most efficient mtu
[22:37] <noob2> yeah 9K mtu's will get chunked into 1500 when they leave your office etc to hit the internet
[22:37] <dmick> if you're sending 9k packets of all zeros
[22:37] <dmick> chunking into MTUs is the least of your worries
[22:38] <liiwi> easy to choke router with 'em
[22:38] <noob2> true
[22:39] <liiwi> since most routers expect such things to be handled by "the distribution layer"
[22:39] <dmick> I feel like I made a joke about shooting myself in the head and you guys are debating whether I should load my own cartridges or buy them :)
[22:39] <liiwi> this is vendor set layer
[22:39] <jmlowe> unless you are at a university, our 100GigE links to the world are 9k mtu
[22:40] <noob2> are you on internet2 there? i heard about this last night when i was teaching
[22:40] <liiwi> jmlowe: would hate to be the next hop :P
[22:41] <jmlowe> http://noc.net.internet2.edu/
[22:41] <jmlowe> we run the noc for I2
[22:42] <liiwi> droolage
[22:42] <janos> hrm noob question, do LAGs isolate jumbo frames from the rest of the traffic? or is that switch-specific
[22:42] <liiwi> meh flash
[22:43] <noob2> jmlowe: sweet :D
[22:43] <liiwi> janos: none that I've seen, but my views are limited.
[22:44] <liiwi> frame is frame untill someone mods it
[22:45] * KindOne (KindOne@50.96.231.186) Quit (Remote host closed the connection)
[22:46] * Ryan_Lane (~Adium@216.38.130.161) Quit (Quit: Leaving.)
[22:47] * Ryan_Lane (~Adium@216.38.130.161) has joined #ceph
[22:54] * sagelap (~sage@64.168.229.50) has joined #ceph
[23:02] * KindOne (KindOne@50.96.231.186) has joined #ceph
[23:07] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) Quit (Quit: Never put off till tomorrow, what you can do the day after tomorrow)
[23:07] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) has joined #ceph
[23:18] * sagelap (~sage@64.168.229.50) Quit (Ping timeout: 480 seconds)
[23:26] * vata (~vata@2607:fad8:4:6:d5a6:3e54:75a8:e21a) Quit (Quit: Leaving.)
[23:35] * jlogan1 (~Thunderbi@72.5.59.176) Quit (Ping timeout: 480 seconds)
[23:36] * jlogan1 (~Thunderbi@2600:c00:3010:1:986b:ae1d:420:6c44) has joined #ceph
[23:40] * sander (~chatzilla@c-174-62-162-253.hsd1.ct.comcast.net) Quit (Ping timeout: 480 seconds)
[23:44] * sagelap (~sage@97.72.229.91) has joined #ceph
[23:46] <sagelap> paravoid: around?

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.