#ceph IRC Log

IRC Log for 2013-01-09

Timestamps are in GMT/BST.

[0:02] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[0:02] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has left #ceph
[0:05] <buck> that shirt needs to show up at FAST this year (IMHO)
[0:05] <buck> on what conditions do the ceph-qa-chef scripts get run? Prior to automated tests? On a host nuke? some other condition?
[0:09] <gregaf> not sure…houkouonchi-work or dmick probably know?
[0:10] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Ping timeout: 480 seconds)
[0:11] <dmick> buck: there's a chef: taks
[0:11] <dmick> *task
[0:11] <buck> dmick: rad. thanks
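For reference, the chef: task dmick mentions is one entry in a teuthology job description; a rough, hypothetical sketch of where it sits in such a YAML (the role layout and task list below are made up, real suites vary):

    roles:
    - [mon.a, osd.0, osd.1, client.0]
    tasks:
    - chef:
    - ceph:

So the ceph-qa-chef scripts run whenever a job's task list includes chef:, typically before the ceph: task brings the cluster up.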
[0:15] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[0:17] * benpol (~benp@garage.reed.edu) Quit (Quit: Leaving.)
[0:20] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[0:20] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has left #ceph
[0:22] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[0:32] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[0:33] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has joined #ceph
[0:39] * jjgalvez (~jjgalvez@166.190.13.184) has joined #ceph
[0:39] * allsystemsarego (~allsystem@5-12-241-245.residential.rdsnet.ro) Quit (Quit: Leaving)
[0:41] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:43] * rturk-away is now known as rturk
[0:47] * jjgalvez (~jjgalvez@166.190.13.184) Quit (Ping timeout: 480 seconds)
[0:50] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[0:52] * PerlStalker (~PerlStalk@72.166.192.70) Quit (Quit: ...)
[0:55] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:59] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[1:00] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[1:07] * brady (~brady@rrcs-64-183-4-86.west.biz.rr.com) Quit (Quit: Konversation terminated!)
[1:07] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[1:20] <kylehutson> n00b question here. I finally figured out all my OSDs and everything is clean. Now I'm trying to add a radosgw. (version 0.49 'cause that's what's in the Gentoo repos). "radosgw-admin user create" tells me I need to specify a uid, but "radosgw user gen" says "unrecognized arg gen"
[1:20] <kylehutson> Is this a known issue with this version of ceph?
[1:21] * tnt (~tnt@86.188-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:21] <gregaf> I don't think those commands match up…you're moving from radosgw-admin to radosgw? and dropping the create?
[1:22] <kylehutson> Sorry, the latter was also "radosgw-admin user gen"
[1:23] * amichel (~amichel@salty.uits.arizona.edu) has joined #ceph
[1:23] <gregaf> kylehutson: so you want "radosgw-admin user create gen"?
[1:23] <gregaf> which looks like approximately the right syntax, although it's been a while since i used those tools
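As a hedged illustration of what gregaf is pointing at (the uid and display name below are invented, and exact flags may differ on a release as old as 0.49), the usual way to satisfy the "need to specify a uid" error is:

    radosgw-admin user create --uid=kyleh --display-name="Kyle Hutson"
    # keys are normally generated as part of user create; an extra key can be
    # generated later with something like:
    radosgw-admin key create --uid=kyleh --gen-secret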
[1:27] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[1:30] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[1:30] * jlogan1 (~Thunderbi@2600:c00:3010:1:52d:be18:aa69:de7) Quit (Ping timeout: 480 seconds)
[1:30] * mikey (~mikey@catv-213-222-190-74.catv.broadband.hu) Quit (Read error: Connection reset by peer)
[1:32] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[1:35] * mikey (~mikey@catv-213-222-190-74.catv.broadband.hu) has joined #ceph
[1:36] * fprudhom (c3dc640b@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[1:40] * korgon (~Peto@isp-korex-15.164.61.37.korex.sk) has joined #ceph
[1:45] <phantomcircuit> kylehutson, 0.55.1 is in portage but it's masked
[1:45] <phantomcircuit> i couldn't find any bugs that would require it be masked though
[1:46] * BManojlovic (~steki@85.222.223.220) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:48] <phantomcircuit> it would be nice if there was usage logging for rbd like there is for radosgw
[1:50] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[1:54] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) Quit (Ping timeout: 480 seconds)
[1:59] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[2:19] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:28] * buck (~buck@bender.soe.ucsc.edu) has left #ceph
[2:34] <kylehutson> I'm compiling the latest now.
[2:42] * amichel (~amichel@salty.uits.arizona.edu) Quit ()
[2:42] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[2:42] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[2:45] <tore_> damn 0.56 stable... I can't get through a 1GB iozone test with this
[2:47] * jluis (~JL@89.181.159.29) Quit (Quit: Leaving)
[2:48] <mikedawson> tore_: did you try 0.56.1 yet?
[2:52] * LeaChim (~LeaChim@b0faeeb0.bb.sky.com) Quit (Ping timeout: 480 seconds)
[2:54] * jeffrey4l (~jeffrey@219.237.227.243) Quit (Quit: Leaving)
[2:57] * yoshi (~yoshi@p2100-ipngn4002marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[2:58] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[3:00] * yoshi (~yoshi@p2100-ipngn4002marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[3:01] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[3:02] * korgon (~Peto@isp-korex-15.164.61.37.korex.sk) Quit (Quit: Leaving.)
[3:11] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:14] <mikedawson> Tested removing OSDs just now and got to a situation where ceph osd tree told me osds were still running even though I had stopped their processes. Is that state held in the MONs? Bug? This is 0.56.1
[3:17] * mattbenjamin (~matt@65.160.16.87) Quit (Quit: Leaving.)
[3:18] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[4:11] * korgon (~Peto@isp-korex-15.164.61.37.korex.sk) has joined #ceph
[4:13] * rturk is now known as rturk-away
[4:28] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has joined #ceph
[4:29] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[4:30] * Cube (~Cube@12.248.40.138) has joined #ceph
[4:30] * Cube (~Cube@12.248.40.138) Quit ()
[4:37] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[4:37] * calebamiles1 (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[4:40] * yoshi (~yoshi@p2100-ipngn4002marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[4:43] * yoshi (~yoshi@p2100-ipngn4002marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[4:46] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[4:47] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[4:55] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[5:09] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[5:09] * korgon (~Peto@isp-korex-15.164.61.37.korex.sk) Quit (Quit: Leaving.)
[5:13] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[5:17] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:33] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[5:43] * aviziva (~aviziva@c-24-6-144-43.hsd1.ca.comcast.net) has joined #ceph
[5:44] * aviziva (~aviziva@c-24-6-144-43.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[5:49] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) has joined #ceph
[6:16] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[6:21] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.89 [Firefox 17.0.1/20121128204232])
[6:41] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[7:18] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[7:32] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) has joined #ceph
[7:32] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) Quit ()
[7:42] * ninkotech (~duplo@ip-94-113-217-68.net.upcbroadband.cz) Quit (Remote host closed the connection)
[7:54] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[8:17] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[8:31] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) Quit (Remote host closed the connection)
[8:32] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) has joined #ceph
[8:44] * madkiss (~madkiss@213.221.125.229) Quit (Quit: Leaving.)
[8:46] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[8:52] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has joined #ceph
[8:57] * madkiss (~madkiss@62.96.31.190) has joined #ceph
[9:02] * sleinen (~Adium@user-23-13.vpn.switch.ch) has joined #ceph
[9:07] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[9:12] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:13] * jeffrey4l (~jeffrey@219.237.227.243) has joined #ceph
[9:13] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:15] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:18] * loicd (~loic@magenta.dachary.org) has joined #ceph
[9:25] * verwilst (~verwilst@d5152D6B9.static.telenet.be) has joined #ceph
[9:26] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[9:43] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:52] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[9:55] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[10:17] * loicd (~loic@AMontsouris-651-1-216-82.w92-140.abo.wanadoo.fr) has joined #ceph
[10:18] * JohansGlock (~quassel@kantoor.transip.nl) has joined #ceph
[10:18] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[10:19] <JohansGlock> hey guys, good day!
[10:19] <JohansGlock> Sorry to drop in like this, but do any of you happen to have a sample configuration for libvirt with ceph? :)
[10:20] <JohansGlock> I'm having some issues connecting libvirt/qemu to my ceph cluster ;)
[10:20] <JohansGlock> I'm pretty sure I just made some configuration error
[10:31] * oliver1 (~oliver@jump.filoo.de) has joined #ceph
[10:33] * LeaChim (~LeaChim@b0faeeb0.bb.sky.com) has joined #ceph
[10:33] <wido> JohansGlock: you mean for running a RBD VM?
[10:34] <JohansGlock> wido: yep
[10:35] <wido> JohansGlock: http://pastebin.com/Xtz6nd8U
[10:35] <wido> that should work
[10:36] <JohansGlock> wido: thx, let me check it
[10:37] <wido> About the secrets, the libvirt docs should tell you how to work with it
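The pastebin contents aren't preserved here; as a generic sketch of the same idea, an RBD disk in a libvirt domain usually looks roughly like the following (pool, image, monitor hostname, and the secret UUID are placeholders):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='PLACEHOLDER-UUID'/>
      </auth>
      <source protocol='rbd' name='rbd/myimage'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>

and the secret wido refers to is typically wired up with virsh, for example:

    virsh secret-define secret.xml   # secret.xml declares a <usage type='ceph'> secret
    virsh secret-set-value --secret PLACEHOLDER-UUID --base64 "$(ceph auth get-key client.libvirt)"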
[10:37] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[10:42] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[10:44] <absynth_47215> and make backups of each vm before you move it into ceph. :)
[10:45] <JohansGlock> hehe, this is my testing setup
[10:45] <JohansGlock> so for now it's fine ;)
[10:46] <JohansGlock> btw, I found some more things, "conf option 6789 has no value"
[10:46] <JohansGlock> seems there is a patch for it in 0.9.9
[10:46] * korgon (~Peto@isp-korex-15.164.61.37.korex.sk) has joined #ceph
[10:46] <JohansGlock> i'm running 0.9.8
[10:46] <JohansGlock> so let me upgrade everything first, I was just running the latest ubuntu stable version, which is one too old apparently
[10:46] * allsystemsarego (~allsystem@5-12-241-245.residential.rdsnet.ro) has joined #ceph
[10:48] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[10:49] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[10:49] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[11:05] * loicd1 (~loic@AMontsouris-651-1-194-53.w83-202.abo.wanadoo.fr) has joined #ceph
[11:07] * loicd (~loic@AMontsouris-651-1-216-82.w92-140.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[11:22] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[11:23] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Remote host closed the connection)
[11:39] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[11:45] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[11:51] * sleinen (~Adium@user-23-13.vpn.switch.ch) Quit (Quit: Leaving.)
[11:51] * sleinen (~Adium@130.59.94.99) has joined #ceph
[11:53] * sleinen1 (~Adium@130.59.94.99) has joined #ceph
[11:53] * sleinen (~Adium@130.59.94.99) Quit (Read error: Connection reset by peer)
[11:54] * ScOut3R (~ScOut3R@dslC3E4E249.fixip.t-online.hu) has joined #ceph
[11:54] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[11:54] <JohansGlock> yeah, working
[11:54] * sleinen (~Adium@2001:620:0:26:3900:77f4:d0f5:3f49) has joined #ceph
[11:54] <JohansGlock> was authentication configuration error
[12:01] * sleinen1 (~Adium@130.59.94.99) Quit (Ping timeout: 480 seconds)
[12:03] * jeffrey4l (~jeffrey@219.237.227.243) Quit (Quit: Leaving)
[12:03] * ScOut3R (~ScOut3R@dslC3E4E249.fixip.t-online.hu) Quit (Remote host closed the connection)
[12:08] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:14] * schlitzer (~schlitzer@109.75.189.45) has joined #ceph
[12:15] * ShaunR (~ShaunR@staff.ndchost.com) Quit ()
[12:16] <schlitzer> hey all, is someone around who has a production CEPH (CEPH-FS, or RADOSGW) installation running? i would be interested in your experience with clusters bigger than 4 PetaByte
[12:16] <schlitzer> Query welcome^^
[12:17] * thathe (~root@117-102-255-190.atisicloud.com) has joined #ceph
[12:21] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[12:28] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[12:30] <absynth_47215> schlitzer: your VM is running on one. ;)
[12:33] * loicd1 (~loic@AMontsouris-651-1-194-53.w83-202.abo.wanadoo.fr) Quit (Quit: Leaving.)
[12:33] * loicd (~loic@AMontsouris-651-1-194-53.w83-202.abo.wanadoo.fr) has joined #ceph
[12:35] <absynth_47215> oh, no, i was mistaken, it's not
[12:36] <schlitzer> lol, you were talking about my private proxy node?
[12:36] <schlitzer> :-D
[12:36] <absynth_47215> yeah, i host it.
[12:36] <absynth_47215> but our ceph installation is way smaller than 4pb and i doubt anyone is running ceph in these dimensions yet
[12:37] * mistur (~yoann@kewl.mistur.org) Quit (Ping timeout: 480 seconds)
[12:38] <schlitzer> ahh nice
[12:38] <schlitzer> how big is your installation?
[12:38] * dosaboy (~gizmo@host86-163-126-180.range86-163.btcentralplus.com) has joined #ceph
[12:39] <absynth_47215> 30tera
[12:39] <absynth_47215> actual storage, two replicas each
[12:39] <schlitzer> not that big at the moment
[12:39] * dosaboy (~gizmo@host86-163-126-180.range86-163.btcentralplus.com) has left #ceph
[12:39] <schlitzer> could be done with a single host + replicas
[12:41] * mistur (~yoann@kewl.mistur.org) has joined #ceph
[12:41] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[12:41] <absynth_47215> which wouldn't make a lot of sense
[12:41] <schlitzer> it depends
[12:42] <schlitzer> 3 systems with 20x3TB devices, and 10GBit Ethernet, could work
[12:43] <schlitzer> are you willing to tell me what your installation looks like?
[12:45] <absynth_47215> we have about a half-dozen nodes with 4 OSDs each and 10gbe backnet
[12:45] <absynth_47215> three mons, that's it
[12:46] <schlitzer> do you use raid for your nodes, or is each disk attached to one osd?
[12:46] <absynth_47215> the latter
[12:46] <schlitzer> ok
[12:47] <schlitzer> and you use only a single network? so you are not splitting OSD-OSD and OSD-Client traffic?
[12:47] <absynth_47215> i think that's the default mode of operation
[12:47] <absynth_47215> errm, no, the 10gb is only ceph
[12:48] * stxShadow (~jens@jump.filoo.de) has joined #ceph
[12:48] <schlitzer> ahh okay, so ceph <-> client communication goes via (multiple) 1GBit channels?
[12:49] <schlitzer> am i right that you are using RBD?
[12:49] <absynth_47215> yeah
[12:49] <absynth_47215> but i'm not at liberty discussing more details :)
[12:50] <schlitzer> well, thanks anyway
[12:50] <schlitzer> i'm happy for every insight i can get
[12:51] <absynth_47215> no prob. oh, and one little remark regarding your 20x3tb idea - the more data per node, the longer recovery will take in case of node failure
[12:51] <absynth_47215> but i think you know that ;)
[12:51] <schlitzer> yes i know that
[12:52] <schlitzer> but i guess a system with only 4 OSDs each will have much cpu time left...
[12:52] <schlitzer> so my idea is to have a storage node with 10 or more OSDs.
[12:52] <absynth_47215> if you have 3 nodes with 20 OSDs each and one of these crashes, there will be 60 tb data shifted between the two remaining nodes as soon as the osd out period is gone
[12:53] <schlitzer> well, i think in my case i would be more like haveing 100+ nodes :-D
[12:53] <schlitzer> having^^
[12:53] <absynth_47215> who are you with?
[12:53] <ScOut3R> absynth_47215: do you have the liberty to share with us what kind of controllers are you using with your OSDs?
[12:54] <absynth_47215> controlling what? disk or net?
[12:54] <ScOut3R> disks
[12:55] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[12:55] <absynth_47215> lsi with ssd buffer, i dont remember their weird name for it
[12:55] <schlitzer> i guess without raid that would not make a big difference. any recent SAS Controller should be fine....
[12:55] <absynth_47215> nah, it does matter
[12:56] <absynth_47215> a single node with shitty i/o performance becomes a bottleneck for the *whole* cluster in case of reweight/recovery
[12:56] <ScOut3R> yes, that's why i cannot decide on the HBA and i don't have the resources to test each
[12:58] <schlitzer> well, there are many reviews out there on SAS controllers
[12:58] <absynth_47215> i think what we have here are LSI MegaRaid 2108, but i am not quite sure which of the pci devices serves which disks
[12:58] <ScOut3R> absynth_47215: does that card support JBOD mode?
[12:58] <absynth_47215> obviously ;)
[12:58] <ScOut3R> schlitzer: yes, reviews are nice, but i don't trust them that much :)
[12:59] <absynth_47215> 12:46:20 < schlitzer> do you use raid for your nodes, or is each disk attached to one osd?
[12:59] <absynth_47215> 12:46:27 < absynth_47215> the latter
[13:00] <ScOut3R> i'm planning to get simple HBAs for the nodes, without RAID functionality to get pure JBOD access for the disks
[13:01] <absynth_47215> whatever you do, you want SSD caching
[13:02] <ScOut3R> absynth_47215: that would narrow down the controller candidates i assume
[13:03] <absynth_47215> from my point of view, the controller choice is essential for the performance of the final cluster
[13:03] <absynth_47215> things like 10gbe or fat cpus on the OSDs don't make a big difference
[13:03] <ScOut3R> absynth_47215: true, that's why i'm thinking on this one for 2 months now
[13:03] <absynth_47215> did you read the lengthy thread about SSD choice?
[13:03] <absynth_47215> on the ML?
[13:03] <ScOut3R> not yet
[13:03] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:03] <ScOut3R> does your controller does the SSD caching by firmware?
[13:04] <ScOut3R> do*
[13:04] <absynth_47215> i have no idea, to be honest
[13:04] <absynth_47215> but i think so
[13:05] <absynth_47215> http://www.lsi.com/channel/products/storagesw/Pages/MegaRAIDCacheCadeSoftware2-0.aspx
[13:05] <absynth_47215> here, thats what i didnt remember earlier: CacheCade
[13:05] <ScOut3R> ah yes, just found the CacheCade word
[13:05] <ScOut3R> thanks for the link
[13:05] * verwilst (~verwilst@d5152D6B9.static.telenet.be) Quit (Ping timeout: 480 seconds)
[13:06] <ScOut3R> i wonder whether using CacheCade i could move the journals from the SSDs back to the OSDs or would that separation still make sense?
[13:07] <absynth_47215> not sure how it is in our case, tbh
[13:09] <ScOut3R> hm
[13:09] <ScOut3R> then i have to find a MegaRAID controller that supports CacheCade and JBOD mode
[13:12] <stxShadow> 9260i
[13:12] <stxShadow> with 2 SSDs :)
[13:12] <absynth_47215> what stxShadow says
[13:12] <stxShadow> one for cachecade
[13:12] <stxShadow> one for osd journals
[13:13] <stxShadow> each disk as a single disk jbod
[13:13] <ScOut3R> stxShadow: so cachecade works for JBOD disks too
[13:13] <stxShadow> attached to the cachecade cache
[13:13] <stxShadow> yes
[13:13] <schlitzer> do you use the SSDs for controller caching, or for the journal files?
[13:14] <ScOut3R> stxShadow: thanks
[13:14] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[13:14] <oliver1> ScOut3R: IMHO separation still makes sense, even if the journal is cached, it's written to DISKS at some point. double trouble ;)
[13:14] <stxShadow> both .... one ssd for controller caching (cachecade) -> the other one for the osd journals
[13:15] <ScOut3R> oliver1: thanks, it sure speeds up journal operations like hell
[13:15] <oliver1> And what do we want at the end of the day? ;)
[13:16] <ScOut3R> another question regarding controllers: do you recommend multipathing with multiple controllers in a node?
[13:21] <stxShadow> for speed or security reason ?
[13:23] <ScOut3R> stxShadow: for fault tolerance
[13:24] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[13:25] <stxShadow> could make sense if you dont want to lose a whole node on controller failures
[13:28] <ScOut3R> stxShadow: right now i'm checking the documentation of the 9260i and 9270i controllers but i cannot find any trace of JBOD disk operation; is it really supported? the docs are all talking about RAID configuration regarding the disks
[13:29] <absynth_47215> well, if it wasn't, i wonder why our nodes are running ;)
[13:30] <ScOut3R> then why are the docs not mentioning it?
[13:32] <schlitzer> ScOut3R, why do you want to use JBOD at all? why not one OSD per disk?
[13:32] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[13:33] <ScOut3R> schlitzer: JBOD means exactly that :)
[13:33] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[13:33] <ScOut3R> if i'm going to get a RAID controller i have to have JBOD available to directly access each HDD
[13:33] <schlitzer> ScOut3R, erm, no?
[13:34] <ScOut3R> schlitzer: sorry? i don't understand :)
[13:34] <schlitzer> with jbod you let many disks appear as one
[13:34] <schlitzer> so the operating systems will just see one physical disk
[13:36] <ScOut3R> schlitzer: then what is the setup we are talking about called? i thought JBOD means that each disk appears as a separate block device in the OS
[13:36] <schlitzer> if you want to use one osd per disk, simply don't configure any raid at all, just pass them directly to the OS
[13:36] <ScOut3R> schlitzer: The concept of concatenation, where all the physical disks are concatenated and presented as a single disk, is NOT a JBOD, but is properly called BIG or SPAN.
[13:37] <stxShadow> schlitzer: you have to configure them as "raid0" -> with only one disk
[13:37] <stxShadow> here is the output of one of the Virtual Disks:
[13:37] <ScOut3R> stxShadow: so in LSI's regard JBOD is a RAID0 array for every disk?
[13:37] <stxShadow> Virtual Drive: 4 (Target Id: 4)
[13:37] <stxShadow> Name :
[13:37] <stxShadow> RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
[13:37] <stxShadow> Size : 1.817 TB
[13:37] <stxShadow> Is VD emulated : No
[13:37] <stxShadow> Parity Size : 0
[13:37] <schlitzer> ScOut3R, no that is wrong. with JBOD you bundle multiple disks into one logical disk
[13:37] <stxShadow> State : Optimal
[13:37] <stxShadow> Strip Size : 64 KB
[13:37] <stxShadow> Number Of Drives : 1
[13:37] <stxShadow> Span Depth : 1
[13:37] <stxShadow> Default Cache Policy: WriteBack, ReadAhead, Cached, Write Cache OK if Bad BBU
[13:37] <stxShadow> Current Cache Policy: WriteBack, ReadAhead, Cached, Write Cache OK if Bad BBU
[13:37] <stxShadow> Default Access Policy: Read/Write
[13:37] <stxShadow> Current Access Policy: Read/Write
[13:38] <stxShadow> Disk Cache Policy : Disk's Default
[13:38] <stxShadow> Encryption Type : None
[13:38] <stxShadow> Is VD Cached: Yes
[13:38] <stxShadow> Cache Cade Type : Read and Write
[13:38] <stxShadow> This is a 2 TB DISK with Cachecade Caching enabled
[13:38] <stxShadow> only one disk
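For reference, a one-drive "JBOD-style" virtual drive like the one pasted above is typically created with MegaCli roughly as follows (the enclosure:slot pair and adapter number are placeholders, and option spelling varies a bit between MegaCli versions); associating it with a CacheCade pool is a separate step in the LSI tools:

    MegaCli64 -CfgLdAdd -r0 [252:4] WB RA Cached CachedBadBBU -a0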
[13:38] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[13:38] <ScOut3R> schlitzer: okay, i'll research this more, but anyway, we now know what kind of setup we are talking about ;)
[13:39] <schlitzer> kk
[13:42] <ScOut3R> probably my last question :) is there a good rule to calculate the CacheCade cache size for the underlying storage?
[13:42] * thathe (~root@117-102-255-190.atisicloud.com) Quit (Quit: Lost terminal)
[13:53] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[13:53] * korgon (~Peto@isp-korex-15.164.61.37.korex.sk) Quit (Quit: Leaving.)
[14:00] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has joined #ceph
[14:10] <stxShadow> ScOut3R ... i dont think so .... if you setup the controller you can define one or more cachecade devices
[14:10] <stxShadow> and thats it
[14:10] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[14:10] * agh (~agh@www.nowhere-else.org) has joined #ceph
[14:10] <stxShadow> we use 240 GB intel SSDs
[14:14] <agh> Hello to all
[14:14] <agh> is someone using Ceph in production with a large cluster ? ( > 500 TB)
[14:16] <absynth_47215> is that the new "hello" in this channel? :)
[14:16] <absynth_47215> so, which company are _you_ with? ;)
[14:17] <agh> what ?
[14:17] <janos> agh: i think i've heard of a few in here over time that are
[14:17] <absynth_47215> you are the second guy today who asks that exact same question ;)
[14:18] <agh> absynth_47215: ah... it was not me :)
[14:18] <absynth_47215> as far as i know, there are no production clusters of that size publically known
[14:18] <fghaas> agh: dreamhost definitely is
[14:18] <absynth_47215> apart from maybe dreamhost
[14:19] <agh> fghaas:mm.. Dreamhost, yes. But i'm sure that they have some secret tuning options :(
[14:19] <absynth_47215> i'm not so sure. why would they intentionally want to hurt ceph?
[14:20] <janos> i'll let you know when my home cluster gets to that size
[14:20] <janos> ;)
[14:20] <agh> absynth_47215: no, it's not what i mean, but they must have a really well tuned cluster
[14:20] <agh> janos: :)
[14:20] <absynth_47215> well, there's only so much porn available for download, janos
[14:20] <fghaas> agh: what is your question?
[14:20] <janos> haha
[14:21] * korgon (~Peto@isp-korex-21.167.61.37.korex.sk) has joined #ceph
[14:22] <ScOut3R> stxShadow: thanks, i was thinking the same, getting SSDs around 256GB size
[14:24] <agh> fghaas: i have no question. I only wanted to speak with people dealing with a huge ceph cluster every day
[14:25] <fghaas> agh: yes, but about what exactly?
[14:25] <agh> fghaas: monitoring, journal size, etc.
[14:26] <agh> fghaas: hardware too.
[14:26] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[14:26] <absynth_47215> well, the dreamhost guys will probably be awake in about 3 hours
[14:26] <fghaas> well then ask your question and someone will answer either now or later. but a request like "educate me about ceph while I sit back and relax" probably won't get you far
[14:27] <absynth_47215> what fghaas said.
[14:27] <agh> fghaas: I did a lot of benchmarks and tests already
[14:31] <absynth_47215> agh: you aren't the person who posted on the mailing list about an hour ago, are you?
[14:38] <agh> absynth_47215: no i'm not
[14:39] <absynth_47215> so. if you ask a halfway precise question, someone will probably be able to help you
[14:39] <absynth_47215> since a lot of people here are operating production environments
[14:42] <agh> absynth_47215: ok. So First, i don't understand the formula in the documentation about Journal size
[14:43] <agh> absynth_47215: 2 * (desired throughput * min sync interval)
[14:45] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[14:46] * schlitzer (~schlitzer@109.75.189.45) Quit (Remote host closed the connection)
[14:46] <absynth_47215> wschulze: morning :)
[14:47] <absynth_47215> wschulze: got my mail?
[14:47] * schlitzer (~schlitzer@109.75.189.45) has joined #ceph
[14:50] <absynth_47215> which documentation is that from? there are several which might be outdated
[14:53] <wschulze> absynth_47215: surely did!
[14:53] <agh> absynth_47215: http://ceph.com/docs/master/rados/configuration/osd-config-ref/
[14:53] <agh> "Begin with 1GB. Should at least twice the product of the expected speed multiplied by filestore min sync interval."
[14:54] <agh> absynth_47215: and here http://eu.ceph.com/docs.raw/ref/wip-3068/config-cluster/ceph-conf/ (paragraph OSD)
[14:55] <absynth_47215> so lets take default numbers
[14:55] * aliguori (~anthony@cpe-70-113-5-4.austin.res.rr.com) has joined #ceph
[14:55] <agh> absynth_47215: so expected througput = 100 MB/s
[14:55] <absynth_47215> 1 gb suggested journal size = thruput * .01 (minimum default sync interval)
[14:55] <absynth_47215> = 10mb/sec thruput
[14:55] <agh> absynth_47215: min sync = 0.01
[14:56] <agh> absynth_47215: so journal size = 2 * (100 * 0.01) = 2...
[14:56] <agh> 2 MB ?
[14:56] <agh> hum
[14:57] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[14:58] <absynth_47215> let's see the other way
[14:58] <absynth_47215> 1 gb journal = 2* (speed * 0.01), right?
[14:59] <agh> absynth_47215: yes
[14:59] <absynth_47215> so 1gb / (2*0,01) = speed
[14:59] <agh> absynth_47215: right
[14:59] <absynth_47215> that would be 51MB/sec
[15:00] <agh> mm.. in fact there is a mismatch with units...
[15:01] <absynth_47215> yeah, something is odd
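The likely source of the mismatch: the documented formula is osd journal size = 2 * (expected throughput * filestore max sync interval), and filestore max sync interval defaults to 5 seconds; 0.01 is the default for filestore *min* sync interval. Worked through with the max: 2 * (100 MB/s * 5 s) = 1000 MB, which lines up with the "begin with 1GB" advice.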
[15:02] <agh> but, you, what size do you use ?
[15:03] <agh> absynth_47215: because i'm buying hardware to make a real cluster, and i want to be sure of what type of ssd to buy for journals
[15:03] <absynth_47215> 16gig
[15:03] <absynth_47215> per OSD
[15:03] <agh> absynth_47215: ok, your disks (osd) are SATA 7K ?
[15:03] <absynth_47215> sas
[15:04] <agh> 7K or more ?
[15:04] <absynth_47215> no idea honestly, didn't buy the systems :)
[15:04] <agh> absynth_47215: ok. so. I was on this idea : 16 GB / osd.
[15:05] <fghaas> agh: 5 GB journal per osd should be sufficient
[15:06] <fghaas> as a ballpark, rule-of-thumb number
[15:06] <agh> fghaas: but, something that i don't understand : will a too big journal be dangerous ?
[15:06] <fghaas> nope, it's just unlikely to ever be used
[15:06] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:06] <agh> fghaas: or, is it like in ZFS (L2ARC) : the bigger it is, the better it is
[15:06] <fghaas> hence, it doesn't get you any additional bang for the buck
[15:07] <rlr219> Hello. Anyone on that can help with the 0.48.3 osd issues. upgraded from 0.48.2 to 0.48.3 per recommended process and last night 6 of my 18 OSDs crashed. now my cluster is hung.
[15:07] <absynth_47215> oh
[15:07] <fghaas> agh: aiui the way osds work it's highly unlikely for a journal to be hit with >5GB at a time
[15:07] <absynth_47215> rlr219: were you able to restart the crashed OSDs?
[15:07] <fghaas> consider that the cluster is distributed, unlike zfs where you're always hammering one box
[15:08] <agh> fghaas: ok. I note that. Thanks a lot
[15:08] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[15:08] <rlr219> they restart. but either crash right away or one did run for several minutes before it crashed again too.
[15:08] <absynth_47215> sounds awfully familiar. the cluster was clean (as in no previous crashes, no inconsistencies) before you upgraded?
[15:09] <loicd> Hi, I'm looking for a detailed explanation of "without a single point of failure" ( http://ceph.com/docs/master/rados/operations/data-placement/ ). If ceph is operating on machines contained in two racks ( and this is reflected in the crush map ) does it mean that no data will be lost at any given time if one of the rack becomes unresponsive ?
[15:09] <rlr219> absynth_47215: correct. it has been running with no issues since I rebuilt it about a month ago.
[15:10] <absynth_47215> do you have stacktraces for the crashing OSDs?
[15:10] <rlr219> Noty to be dense, but you mean logs?
[15:10] <fghaas> loicd, that is the intention, yes. the default crushmap only makes sure that no object is ever replicated to two osds in the same host, but you can modify that so they're split between racks/cabinets/whathaveyou
[15:11] <fghaas> (instead of hosts)
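A minimal sketch of what such a rack-level rule looks like in a decompiled crushmap (the ruleset number is arbitrary and rack buckets must already exist in the hierarchy):

    rule replicate_across_racks {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
    }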
[15:11] <absynth_47215> rlr219: yeah, something that shows the reason of the crash
[15:11] <rlr219> yes. I can put them on pastebin if you would like.
[15:11] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[15:11] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[15:11] <absynth_47215> loicd: you can even split between data centers, given you have a fairly good connection between them
[15:12] * agh (~agh@www.nowhere-else.org) has joined #ceph
[15:12] <absynth_47215> rlr219: yeah, would be helpful to see if these are similar to something we saw before
[15:12] * Joel (~chatzilla@2001:620:0:46:cd4e:ca5:defa:caa3) has joined #ceph
[15:12] <absynth_47215> at this point, i should disclose that i`m not involved with inktank ;)
[15:13] <rlr219> ok. is this something you have experience with?
[15:13] <absynth_47215> osd crashes? uh, yeah.
[15:14] <loicd> fghaas: got it thanks ;-) I'm reading http://ceph.com/docs/master/rados/operations/crush-map/ to better understand how.
[15:14] <rlr219> I can paste a log file. see if you see something, if that's ok.
[15:15] <absynth_47215> go ahead
[15:15] <rlr219> please hold
[15:15] <stxShadow> why not use pastebin
[15:15] <stxShadow> ?
[15:16] <stxShadow> pastebin.com
[15:16] * rlr219 (43c87e04@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[15:17] <stxShadow> ;) .... to not be disconnected
[15:17] <fghaas> oops :)
[15:17] <loicd> absynth_47215: 40Gb/s between two datacenters and a latency < 50ms should be good enough ?
[15:18] <stxShadow> latency is the biggest problem here i think ....
[15:18] <stxShadow> should be as small as possible
[15:19] <fghaas> 50ms is about 500 times gigabit ethernet
[15:19] * MarcoA (~aroldi@81.88.224.110) has joined #ceph
[15:19] <fghaas> in terms of latency
[15:19] <stxShadow> yes ... 50 ms is very high .... especially in terms of storage
[15:20] * rlr219 (43c87e04@ircip1.mibbit.com) has joined #ceph
[15:20] <rlr219> http://pastebin.com/AA2xfUAU
[15:21] <fghaas> assertion failures? ouch
[15:22] <absynth_47215> hang on
[15:22] <rlr219> ouch is right. :(
[15:24] <absynth_47215> all nodes are which exact version?
[15:25] <rlr219> all were upgraded to 0.48.3
[15:25] <fghaas> if that really did occur in the process of a by-the-book upgrade, that would be hurtful... specifically for a "stable" branch
[15:25] <absynth_47215> rlr219: can you paste output of ceph --version somewhere?
[15:25] <rlr219> ceph version 0.48.3argonaut (commit:920f82e805efec2cae05b79c155c07df0f3ed5dd)
[15:26] <absynth_47215> slightly different from our version
[15:27] <stxShadow> absynth_47215 ... we are not on 48.3 right now
[15:28] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[15:28] <absynth_47215> yeah, we are on .2 still, but i was checking the changelog to see which issues could create this behavior
[15:29] <rlr219> When I rebuilt, it was suggested to use XFS. So I did. My understanding was a power outage or large crash could cause data loss.
[15:29] <rlr219> on 48.2
[15:29] <absynth_47215> uhm, yes.
[15:30] * schlitzer is now known as schlitzer_work
[15:30] <stxShadow> kernel panics also
[15:30] <rlr219> so I upgraded following their procedure and it ran ok. Then last night they crashed.
[15:30] <absynth_47215> it might be that the OSDs encounter a strangely inconsistent pg while peering/replaying/backfilling and assert, but that is just a wild guess
[15:30] <absynth_47215> so they did not crash immediately after the upgrade?
[15:31] <rlr219> no. Ran for about 8 hours.
[15:31] <absynth_47215> did you see the cluster in a fully active+clean state after the upgrade?
[15:31] <rlr219> yes.
[15:31] <stxShadow> could be during scrub
[15:31] <absynth_47215> did, by any chance, a scrub happen directly before the crashes?
[15:31] <absynth_47215> err. what shadow said
[15:32] <rlr219> not sure. I did not have eyes on at that time.
[15:32] <rlr219> these crashes occurred right after midnight EST according to my nagios alerts.
[15:33] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[15:35] <rlr219> sorry they were after midnight CST.
[15:36] <fghaas> rlr219: at the risk of asking a silly question, if your cluster was all active+clean, and you've lost "only" a third of your osds, and all your mons are alive, how come the whole cluster is hung?
[15:40] <jtang> q
[15:40] <jtang> doh
[15:41] * fghaas1 (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has joined #ceph
[15:41] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[15:41] <absynth_47215> the recovery process in argonaut can be very resource-consuming
[15:42] <absynth_47215> as soon as you start seeing slow requests, especially those that are queued for >3600s, the whole i/o can become very, very slow
[15:42] <fghaas1> sure, but very slow != hung, hence my question
[15:42] <absynth_47215> is the cluster not making *any* attempt to recover now, rlr219?
[15:42] * fghaas1 is now known as fghaas
[15:43] <rlr219> health HEALTH_WARN 81 pgs degraded; 10 pgs incomplete; 13 pgs recovering; 132 pgs stale; 10 pgs stuck inactive; 132 pgs stuck stale; 104 pgs stuck unclean; recovery 27057/1881904 degraded (1.438%); 966/940952 unfound (0.103%) monmap e25: 3 mons at {d=172.27.124.153:6789/0,f=172.27.124.141:6789/0,h=172.27.124.145:6789/0}, election epoch 488, quorum 0,1,2 d
[15:43] <rlr219> unfound PGs and PGs stuck unclean
[15:43] <absynth_47215> that is usually not a good sign, yeah. the figures are not changing anymore, right?
[15:44] <rlr219> it can't go any further in the recovery process, because of the unfound PGs, I believe.
[15:44] <absynth_47215> fghaas: as soon as you have unfound PGs, i think that I/O for objects in those is stalled
[15:44] <rlr219> correct
[15:44] <absynth_47215> is data in the cluster accessible?
[15:44] <rlr219> right now I am seeing a tons of slow IO requests
[15:44] <absynth_47215> hung for >XXXX seconds
[15:45] <absynth_47215> what is XXXX in your case?
[15:45] <fghaas> absynth_47215: "objects in those PGs" != whole cluster (not trying to be an asshat, mind you, just getting to the root of the issue)
[15:45] <absynth_47215> fghaas: as am i, so why don't you hold your horses for a sec? :)
[15:46] <fghaas> be my guest
[15:46] <rlr219> 30 - 60 seconds right now.
[15:46] <absynth_47215> trust me, i have seen the exact error description happen
[15:46] <absynth_47215> rlr219: what data can you still access?
[15:47] <absynth_47215> there is a workaround for short-time hung requests
[15:47] <rlr219> my 4 VMs stopped running. I have a RBD that I have formatted as ext4, that I can get to, but not sure how reliable it is.
[15:47] <absynth_47215> go to the osd that has them (its in the log msg) and issue ceph osd down <osdnum>
[15:48] <absynth_47215> then the osd starts complaining "hey, i was wrongly marked as down" and promptly handles the slow requests
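In command form, with a made-up OSD id, the workaround amounts to:

    ceph osd down 12    # mark it down in the osdmap; the still-running daemon re-asserts itself
    ceph -w             # watch the previously slow requests get handled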
[15:48] <rlr219> my concern is that my replication factor is 2, so it is possible I have data that is corrupt right now.
[15:48] <absynth_47215> yes, that is a valid assumption.
[15:48] <absynth_47215> or better: a valid concern
[15:48] <absynth_47215> can you access your VMs via vnc?
[15:49] <absynth_47215> you should see something like
[15:49] <absynth_47215> 72360.552090] INFO: task kjournald:157 blocked for more than 120 seconds.
[15:49] <rlr219> no. even though they were running, they did not respond at all to anything.
[15:49] <absynth_47215> my guess would be: all VMs have data in the unfound PGs
[15:49] <absynth_47215> since I/O is stalled to those PGs, the VMs stall, too
[15:49] <absynth_47215> kvm?
[15:49] <rlr219> that would be logical
[15:50] <absynth_47215> this is not a production environment, right?
[15:50] <rlr219> Just as a background on my layout, I have 8 servers with 2 OSDs per server. all downed OSDs are on 3 servers.
[15:50] <absynth_47215> 1 osd per disk?
[15:51] <absynth_47215> not relevant anyway, just being curious
[15:51] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[15:52] <rlr219> truthfully, we use it as production, but it is separate from our critical functions. The ext4 file system is home directories shared across all of our servers.
[15:52] <rlr219> yes, 1 osd per
[15:52] <absynth_47215> can you start any one of the OSDs now?
[15:52] <absynth_47215> just to see how soon it crashes
[15:52] <rlr219> I can try.
[15:54] <rlr219> Also, I was never good at math, I have 20 OSDs on 10 servers. :-S
[15:54] <absynth_47215> i would try starting osd.7 first
[15:54] <absynth_47215> it was the first one that crashed, if your log is complete
[15:55] <rlr219> osd.7 is running.
[15:55] <rlr219> the downed OSDs are 10, 11, 16, 17, 18, 19
[15:56] <absynth_47215> in that order?
[15:56] <rlr219> osd.17 is still running at moment.
[15:56] <absynth_47215> this is just, well, voodoo, but my gut feeling would be
[15:57] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[15:57] <absynth_47215> keep it running until it either crashes or the recovery does not change anymore
[15:57] <absynth_47215> (i.e. all PGs that were unfound because they were on that OSD and another crashed one have recovered, same with unclean PGs)
[15:57] <absynth_47215> then start another OSD
[15:57] <absynth_47215> rinse, repeat
[15:58] <absynth_47215> if they keep crashing, wait for sage, joshd or any other inktank employee to have a look at the crashdumps
[15:59] <absynth_47215> the OSD processes seem to be prone to asserting if they encounter PGs with "weird" corruptions. whether this is a bug or intentional, i cannot say
[15:59] <MarcoA> Hello everybody
[15:59] <rlr219> Thanks Folks. I think thats where I am at.
[15:59] <absynth_47215> err?
[16:00] <MarcoA> I have a few question about planning a Ceph cluster
[16:00] <absynth_47215> shoot
[16:01] <MarcoA> First question it was alerady on this irc channel, but i can't understand the answer: "Are there any rules of thumb regarding how many OSD journals an individual SSD can handle?"
[16:01] <absynth_47215> rlr219: your mons are all up, right?
[16:01] <absynth_47215> just making sure
[16:01] <rlr219> yes
[16:01] <MarcoA> answered: "divide your desired measure (IOPS or throughput) of the SSD by that measure for the number of OSD disks"
[16:02] <fghaas> MarcoA: how about you use a rule of thumb of no more than 4 journals per ssd, and about 5GB per journal
[16:03] <MarcoA> fghaas: this knocks me out! We are planning a ceph cluster using this stuff: http://goo.gl/0qOwi
[16:03] <MarcoA> Is a HP MDS600
[16:03] <fghaas> yes, so?
[16:03] <MarcoA> 2 drawers, each one contains 35 hdd
[16:03] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[16:04] <fghaas> again... so? how many ssds can you stick in there?
[16:05] <MarcoA> fghaas: this mds600 is connected
[16:05] <MarcoA> to a hp c7000 blade enclosure
[16:05] <MarcoA> each blade has only 2 hdd for the operating system
[16:06] <fghaas> and those are the only ssd slots that you have available?
[16:06] <absynth_47215> gotta borrow fghaas: dont want to be an asshat here, but are you sure your choice of hardware is, erm, appropriate?
[16:06] <absynth_47215> fghaas' saying, that is
[16:06] <fghaas> my thoughts exactly absynth_47215
[16:07] <fghaas> that seems like a fairly poor choice for a ceph osd box
[16:07] <MarcoA> I can use the slot in the mds600 itself, but i was thinking to use only the 2 ssd on the blade
[16:07] <absynth_47215> i don't think ceph's concept is very much catered towards that kind of storage/blade solution, but maybe i'm wrong
[16:07] <absynth_47215> how many physical servers will be there running OSD processes?
[16:07] <fghaas> MarcoA: also, what network connectivity do you have to these things?
[16:08] <MarcoA> the blades and the mds are connected thru 4 6gb/s cables
[16:08] <MarcoA> the network is 10gb
[16:08] <absynth_47215> how many blades will run OSD processes?
[16:09] <MarcoA> i was thinking of 1 blade for each drawer
[16:09] <MarcoA> the blades are double intel xeon quad-core, 48Gb ram
[16:09] <mikedawson> MarcoA: I'd encourage you to run ceph, benchmark, create failures, repeat. After a few cycles, most people decide to go with fewer OSDs per server and a low number of journals per ssd (like 4:1 as fghaas mentioned)
[16:10] <fghaas> MarcoA: do you realize that you're trying to build what's fundamentally a scale-out cluster with a scale-up approach?
[16:10] <nhm> mikedawson: interesting, what have you been hearing regarding numbers of OSDs/server?
[16:11] <fghaas> plus, if you actually want to stick your journals on an ssd (one ssd), then if that ssd dies you'll promptly have killed 35 OSDs (assuming 1 osd per disk)
[16:11] <MarcoA> We want to scale-out, because we have many enclosures and many mds.
[16:12] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[16:12] <mikedawson> nhm: heard some of the guys who've tried Backblaze pods (45 drives with cheap expanders) and want to run as fast as possible
[16:12] <MarcoA> So we want to know what is the best "template" to apply
[16:13] <wido> My experience is that 4 disks and 1 SSD works best
[16:13] <MarcoA> 1 blade, 20 OSDs on 20 of the MDS600's HDDs
[16:13] <nhm> MarcoA: regarding SSD journals, currently ceph writes data to both the journal and to the OSD. If you put all of your journals on a single SSD, you'll not only have the problem fghaas just mentioned, but you'll also limit your write throughput to the speed of the SSD (with a notable exception for btrfs).
[16:13] <wido> You can stick to 1Gbit or even 2Gbit in LACP
[16:13] <iggy> the above being said, there has to be something between 4 drives per U and a backblaze alike or you're going to pay more for rackspace/cooling/electricity than necessary
[16:13] <wido> a ton of 1U machines, a lot of spindles, RAM and CPU power
[16:13] <fghaas> MarcoA: what wido says :)
[16:14] <nhm> mikedawson: ah, interesting. They tried 45 OSDs and it decided to switch to fewer ones due to some problem?
[16:14] <fghaas> I think you burned the "best template" the moment you bought those boxes
[16:15] <MarcoA> fghaas: I never wanted these boxes, believe me... Anyone wants this kind of stuff...
[16:15] <mikedawson> nhm: yeah 45 - minus SSDs for journals iirc. The issue is rebalancing
[16:16] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[16:16] <absynth_47215> MarcoA: i personally think it would be very ill-advised to run ceph on that setup
[16:16] <nhm> mikedawson: Memory or CPU?
[16:16] <MarcoA> nobody, not anyone sorry!!!!
[16:16] <MarcoA> nobody wants this kind of stuff!
[16:17] <mikedawson> nhm: Network bandwidth. 1 GigE and large disks
[16:17] <nhm> MarcoA: It's still possible you could make ceph run well, but it may take some tuning.
[16:17] <nhm> MarcoA: I'd probably just skip the SSD honestly and put journals on a small partition at the beginning of each device.
[16:17] <fghaas> ... and if you absolutely need to do so, at least ditch the SSD idea and run your journal on the spinners (sounds paradoxical, but I'm assuming that in that setup SSDs won't be a net win)
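In ceph.conf terms, journals on the data disks with a modest size usually just means something like the following (sizes are in MB; device paths are illustrative, and the size setting is ignored when the journal points at a raw partition, which then defines it):

    [osd]
            osd journal size = 5120        # ~5 GB file journal, per the rule of thumb above
    [osd.0]
            osd journal = /dev/sdb1        # or point it at a small partition on the same spinner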
[16:18] * noob2 (~noob2@ext.cscinfo.com) has joined #ceph
[16:18] <MarcoA> ok
[16:18] <mikedawson> nhm: did you ever get more than one SC847a?
[16:18] <MarcoA> What about the journal size
[16:18] <MarcoA> ?
[16:18] <nhm> mikedawson: oh, do you mean they went with smaller systems, not fewer OSDs on the same system?
[16:19] <mikedawson> nhm: yeo
[16:19] <mikedawson> yep
[16:19] <absynth_47215> journal size = 2x desired thruput * sync time
[16:19] <noob2> i saw that the 0.56.1 release came out. what is the procedure for upgrading from 0.56?
[16:19] <noob2> do i just upgrade one osd at a time and restart it?
[16:19] <nhm> MarcoA: I usually do like 5-10GB, but less may be fine.
[16:19] <nhm> MarcoA: if you can get really high performance maybe more.
[16:19] <absynth_47215> so, err, if you have a 1gbps network and want to be able to cover a 30 second downtime, i think you should have 125M*30 = a 3.7GB journal
[16:19] <absynth_47215> right?
[16:19] <MarcoA> nhm: ok
[16:19] <iggy> noob2: I think it depends what version you are running
[16:20] <MarcoA> absynth_47215: thanks
[16:20] <noob2> iggy: i'm running 0.56
[16:20] <rlr219> noob2: http://ceph.com/docs/master/install/upgrading-ceph/
[16:20] <iggy> noob2: oh, 0.56.. blind
[16:20] <iggy> noob2: I don't think 0.56 and 0.56.1 can run together
[16:20] <iggy> but verify that with ceph peeps
[16:20] <noob2> ok
[16:20] <mikedawson> noob2: I did 0.56 -> 0.56.1 upgrade all at once
[16:20] <nhm> MarcoA: Do your controllers have writeback cache?
[16:20] <MarcoA> Yes, 1Gb
[16:21] <noob2> mike: i can't do it all at once. i have several things connected to it :(
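A rough sketch of the rolling sequence the linked upgrade docs describe, for a sysvinit-style node (package commands and daemon ids are illustrative):

    # on each node, after updating the ceph packages to 0.56.1:
    service ceph restart mon.a     # monitors first, one at a time
    service ceph restart osd.0     # then each OSD in turn
    ceph -s                        # wait for HEALTH_OK before moving to the next daemon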
[16:21] <nhm> mikedawson: just one for now. I might have access to a cluster of them at some point.
[16:21] <nhm> mikedawson: though they may be the version that has expanders in the backplanes.
[16:22] <nhm> MarcoA: you will probably want to have WB cache enabled.
[16:22] <MarcoA> nhm: i was thinking to start with a setup like so: 12 OSD on 12 hdd
[16:22] <iggy> yes, something like the sc847a seems the perfect mix of rackspace/power/TB
[16:23] <mikedawson> nhm: I think you need a bigger budget (and basement from my understanding)
[16:23] <noob2> well i'll ask the question again when the ceph guys come online at noon. :)
[16:23] <MarcoA> nhm: on each blade
[16:23] <nhm> MarcoA: Also, you will probably want to try different configurations. So far it looks like single drive raid0 arrays behind each OSD work well, but if that results in too many OSDs in the node, you can try switching to 2-drive RAID0s or RAID-1 arrays depending on what you are after.
[16:23] <nhm> mikedawson: sign a support contract and mention that the money should be earmarked for my basement. ;)
[16:24] <elder> Hey I'll take some of that.
[16:24] <nhm> elder: shush, I'm making a sale.
[16:24] <MarcoA> nhm: too many OSDs? When do the OSDs start to be too many?
[16:25] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) has joined #ceph
[16:26] <nhm> MarcoA: depends on memory and CPU. We tend to recommend 2-3GB of memory and 1GHz of CPU power per OSD.
[16:26] * oliver1 (~oliver@jump.filoo.de) has left #ceph
[16:27] <mikedawson> nhm: do you benchmark with more than 1 node regularly? Your published stuff seems be on that single machine?
[16:27] <nhm> MarcoA: honestly though, we don't really know what happens with say 60 OSDs in one node. We haven't been able to do a lot of testing yet in those kinds of configurations.
[16:27] <iggy> that's insane
[16:27] <MarcoA> nhm: Ok, at least the hardware specs are ok: 2x Quad-core Intel Xeon 2.55Ghz - 48Gb ram
[16:27] <iggy> you're spending more on supporting hardware than drives at that point
[16:27] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:28] <nhm> mikedawson: We have a dell cluster that we've done tests on as well, which wasn't performing well. That's ultimately why we bought the supermicro node. I wanted a platform that I trusted and could get better data on.
[16:28] <MarcoA> nhm: no no, We don't want to use *all* the hdd to 1 node. That would be crazy
[16:29] <absynth_47215> sage in yet?
[16:29] <mikedawson> nhm: I can give you access to modest sized clusters that perform well
[16:29] <nhm> iggy: MarcoA: you'd be surprised. The last lustre system I maintained had 60 drives per node.
[16:30] <MarcoA> nhm: :)
[16:30] <noob2> if you only have 2 monitors would that cause the cluster to get into a degraded state?
[16:30] <noob2> just curious
[16:30] <mikedawson> noob2: I've gone from 3->2 for periods of time with no ill effects to RBD
[16:31] <nhm> mikedawson: I might take you up on that, but for now I'm trying to start out by just doing small investigations and working my way up to doing larger cluster tests. I want to figure out how we should tune ceph in simple cases before I move on to large cases.
[16:31] <noob2> ok
[16:31] <absynth_47215> i think one thing that really needs attention is recovery performance
[16:31] <fghaas> noob2: as long as you maintain quorum (> n/2 mons available, where n is the total number of mons in the cluster), you're fine
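Put concretely: with 3 mons you keep quorum with 1 down (2 > 3/2), with 5 mons you survive 2 down, and a 4th mon buys nothing over 3, since you would still need 3 of the 4 up.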
[16:32] <nhm> absynth_47215: interesting. Performnace of recovery itself, or cluster performance while recovery is happening?
[16:32] <absynth_47215> both
[16:32] <rlr219> sjust, You here?
[16:33] <absynth_47215> you mean sage?
[16:33] <rlr219> I have worked with sjust before on similar issues, but I am easy. ;)
[16:33] <nhm> absynth_47215: sjust is Sam Just, who is our expert OSD dev.
[16:34] <absynth_47215> ah
[16:34] <mikedawson> absynth_47215: nhm: I have seen very slow recovery even with zero client load and <10GB of used data. Then I determined I likely had too many PGs in the cluster
[16:35] <mikedawson> can anyone confirm that? Does recovery / rebalancing get slow when # of PGs increase?
[16:35] <MarcoA> Guys, about the number of PG... Can I increase safely the nember?
[16:35] <MarcoA> numebr
[16:36] <nhm> mikedawson: Might be. There's an impact with more PGs. I haven't actually looked at it in depth. A side project I'd like to work on some day is investigating whether we can improve distribution of data with fewer PGs, but it could be tough to keep some of the benefits we enjoy such as our PG splitting behavior.
[16:36] <absynth_47215> the performance during recovery is abysmal
[16:37] <absynth_47215> at least in argonaut
[16:37] <absynth_47215> and i think that is a very critical factor
[16:37] <absynth_47215> because what good does a distributed FS give you if recovering from failure carries such a massive penalty?
[16:37] <nhm> absynth_47215: Have you tried it with bobtail at all?
[16:37] <absynth_47215> not in a production-sized cluster, but we have bobtail in the lab
[16:38] <absynth_47215> as you probably know by now, we have... more interesting tasks at hand momentarily ;)
[16:38] <MarcoA> I've read the rule of thumb about the number "Num_PG=100*num_OSD"
[16:38] <sstan> repairing the production cluster?
[16:39] <mikedawson> nhm: During the testing cycle I added some pools to test 2x replication and 3x replication. Both were set to 2048 PGs (consistent with my RBD volumes pool). Read and write performance were as expected. Even with 0 data in those pools, rebalance slowed to a crawl. Once I removed those pools, rebalance speed greatly improved.
[16:39] <MarcoA> When the cluster grow adding more osd, I have to increase the pg number
[16:40] <nhm> MarcoA: oh, where did you see that btw? We really need to add a note that you should keep the PG count to a power-of-two number. IE if you have 36 OSDs, use 4096 or 8192 PGs.
[16:40] <mikedawson> MarcoA: http://ceph.com/docs/master/rados/operations/pools/#create-a-pool changing number of PGs is experimental right now
[16:41] <tnt> nhm: oh really ? damn ... I don't have a power of two at all ...
[16:41] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[16:41] <MarcoA> nhm: http://ceph.com/docs/master/dev/placement-group/
[16:41] <MarcoA> "you configure the number of PGs you want, number of OSDs * 100 is a good starting point"
[16:41] <nhm> tnt: it's a complicated thing. It may not matter as much with lots of OSDs and PGs.
[16:42] <noob2> i think i found a bug in 0.56
[16:42] <nhm> tnt: I don't fully understand exactly how data gets distributed with the current algorithm, only that a component of that algorithm does much better with 2^n PGs.
[16:42] <noob2> i setup a cluster and my friend rebooted the cluster last night. after it came back it says 400 paged degraded and just sits there
[16:42] <tnt> nhm: well, I don't have a lot of OSDs, only 12 :p Also I didn't realize that if I was using multiple pools, I shouldn't create so many PGs per pool, so I ended up with a lot of PGs when I should have fewer.
[16:43] <noob2> does the data corruption also include reboots or just kernel panics?
[16:43] <mikedawson> nhm: I've seen the power of 2 for pools, and follow that best practice. But do I need to have a power of 2 for all the pools combined? Like 2048, 2048, 2048, and 1024 is bad whereas 2048, 2048, and 4096 is good?
[16:44] <nhm> tnt: the more PGs, theoretically the more even the distribution of data will be (to a point). As Mike said though, there might be an impact during recovery.
[16:44] <nhm> mikedawson: no, the pools should be independent.
[16:44] <mikedawson> nhm: thanks!
[16:44] <nhm> mikedawson: though if one pool is out of whack and there are lots of data writes you still might have an OSD that is overly busy.
[16:46] <MarcoA> nhm: so i have to calculate num_PG=num_OSD^2
[16:47] <mikedawson> nhm: when I had something like 2000 PGs per OSD accidentally, rebalance took forever. iostat showed the OSD drives basically constantly seeking.
[16:47] <nhm> MarcoA: no, do something like num_PG = num_OSD * 100 rounded up to the nearest 2^n value.
[16:47] <MarcoA> nhm: Ok
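As a worked example of that rule of thumb (a rough sketch, assuming a hypothetical 12-OSD cluster like tnt's):

    osds=12
    target=$((osds * 100))            # 1200 PGs as the starting point
    pgs=1; while [ "$pgs" -lt "$target" ]; do pgs=$((pgs * 2)); done
    echo "$pgs"                       # 2048, the next power of two up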
[16:49] <nhm> mikedawson: interesting. There's so many things to test. I probably should add that to my list.
[16:49] <mikedawson> nhm: they were working very hard to find data to replicate, but there was so little data, it seemed like all the wasted time was just seeking across the admittedly too many PGs
[16:49] <nhm> ah
[16:49] <nhm> I wonder how the ordering works.
[16:51] <mikedawson> nhm: I am happy to report that I see great performance on the things that you seem to test the most
[16:52] <mikedawson> nhm: now please test the things where I see terrible performance ;)
[16:54] <nhm> mikedawson: The next article will be looking at performance tuning of ceph parameters, then probably jumping into RBD testing.
[16:54] <mikedawson> nhm: need an assistant?
[16:54] <nhm> oh, probably io schedulers in there too.
[16:54] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has joined #ceph
[16:55] <nhm> mikedawson: I think the trick is that there are a ton of things that can destroy performance, so once you find something that goes fast, you change just a little each time to figure out when things fall apart.
[16:56] <mikedawson> nhm: I've found a few
[16:56] <nhm> mikedawson: Yeah, me too. The problem I've had in the past is distinguishing between them.
[16:57] <nhm> hence why my focus so far has been on disk/controller/osd analysis. I want to get the building blocks right.
[17:00] <nhm> mikedawson: btw, if you've been able to nail down specific examples, that definitely is something we want to hear about.
[17:00] <mikedawson> nhm: will do
[17:04] <JohansGlock> ceph is looking nice guys, I have been doing some testing today
[17:04] <JohansGlock> throughput seems quite good, and comparable to nfs
[17:04] * jlogan1 (~Thunderbi@2600:c00:3010:1:52d:be18:aa69:de7) has joined #ceph
[17:04] <JohansGlock> I have one concern though, with ioping from a virtual environment my latency is a little over 2ms
[17:04] <JohansGlock> while with NFS it's below 1 ms
[17:04] <JohansGlock> any recommendations on tweaking this?
[17:04] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[17:05] <nhm> JohansGlock: are you using replication?
[17:05] * agh (~agh@www.nowhere-else.org) has joined #ceph
[17:05] <JohansGlock> nhm: default install so I believe so, that's a replication of 2?
[17:07] <nhm> JohansGlock: you might try creating another pool with no replication and see if that improves it. I don't know exactly how ioping works, but if it's doing a small write, ceph has to both write the data to the journal on the primary OSD and send that data to the secondary OSD and write the data to that osd, and get an ack back before the primary can ack the original request.
[17:08] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[17:08] <nhm> sorry, write the data to the secondary OSD's journal.
[17:08] <JohansGlock> nhm: seems likely indeed, though one would hope ceph would be smart enough to do it in parallel
[17:08] <nhm> so if you can read my garbled description there, basically the replication process has to happen before the client gets an ack for a write.
[17:09] <JohansGlock> nhm: if it's that smart, then replication wouldn't matter (not to this magnitude at least)
[17:09] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[17:09] <JohansGlock> nhm: if not, it seems to be the overhead of the journal
[17:10] <nhm> it does the OSD write and the replication write in parallel, but the replication write involves the additional network transfers in both directions, and now the overall latency can only be as low as the slowest of the two writes (for 2x replication).
[17:10] <JohansGlock> hmm indeed ok
[17:10] * Joel (~chatzilla@2001:620:0:46:cd4e:ca5:defa:caa3) Quit (Remote host closed the connection)
[17:10] <JohansGlock> I will try, though it's not preferable for production :)
[17:11] <nhm> JohansGlock: no, and I'm not recommending it, it's just the price you have to pay for replication. :/
[17:11] <JohansGlock> nhm: hehe true
[17:11] <nhm> JohansGlock: though it's entirely possible that there are other things (like journal) playing a role here too.
[17:12] <JohansGlock> nhm: yeah, well, just got started recently, so plenty of time to play around
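For reference, a rough sketch of the single-replica scratch pool nhm suggests above, for latency comparison only (pool and image names are made up, and size 1 keeps a single copy, so it should never hold real data):

    ceph osd pool create latencytest 128
    ceph osd pool set latencytest size 1
    # put the test image in that pool, then re-run ioping from the guest against it
    rbd create --pool latencytest --size 1024 ioping-test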
[17:12] * madkiss (~madkiss@62.96.31.190) Quit (Read error: Operation timed out)
[17:12] <JohansGlock> nhm: like pulling down a monitor + osd at the same time takes 24 sec to recover from, as seen by a guest os running on kvm; it's small things like that which you just feel could be quicker
[17:14] <nhm> JohansGlock: Interesting. Haven't looked into that behavior too much yet.
[17:14] <nhm> JohansGlock: Was it faster if you did the mon or OSD separately?
[17:15] <JohansGlock> nhm: haven't tested yet cause they are running on the same machine, I need to find out which placement groups map to which server first :) haven't gotten around to doing that yet
[17:16] <JohansGlock> just pulled out the power plug ^_^
[17:16] <nhm> JohansGlock: nice! :)
[17:16] <nhm> JohansGlock: I suppose you could kill -f the processes individually
[17:16] <JohansGlock> nhm: good point, will do later, think tomorrow, the day is almost ending for me
[17:17] <JohansGlock> europe based :)
[17:17] <nhm> JohansGlock: ah, ok. Well, enjoy your evening then!
[17:17] <JohansGlock> nhm: thx for the info! have a great day
[17:21] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[17:31] * fghaas (~florian@91-119-215-212.dynamic.xdsl-line.inode.at) has left #ceph
[17:36] <rlr219> sage or sjust: are you available to help me?
[17:42] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[17:46] * schlitzer (~schlitzer@ip-109-90-143-216.unitymediagroup.de) has joined #ceph
[17:47] * jlogan1 (~Thunderbi@2600:c00:3010:1:52d:be18:aa69:de7) Quit (Quit: jlogan1)
[17:48] * jlogan1 (~Thunderbi@2600:c00:3010:1:52d:be18:aa69:de7) has joined #ceph
[17:51] * redhot (~kvirc@195.238.92.36) has joined #ceph
[17:51] <redhot> Hello everyone!
[17:51] <redhot> Is there any Ceph expert here? :)
[17:52] <redhot> Does Ceph support striping?
[17:53] <redhot> What is overhead on total data size required for redundancy?
[17:56] <scuttlemonkey> redhot: this might help http://ceph.com/docs/master/dev/file-striping/
[17:57] <redhot> scuttlemonkey: many thanks, I am reading it now :)
[17:57] <scuttlemonkey> the arch page also has some good info on striping and scaling iirc
[17:57] <scuttlemonkey> http://ceph.com/docs/master/architecture/
[17:58] <redhot> scuttlemonkey: Yep, but data overhead information is still missing
[17:59] <redhot> As I see it, it's like a modified RAID 0, hence the overhead is 100%, right?
[17:59] <iggy> because it's a factor of your replication
[17:59] <scuttlemonkey> I'm not the guy to give you a definitive answer...but replication is configurable
[17:59] <scuttlemonkey> so it will depend on what you have set up
[18:00] <iggy> if you have replication of 4x, and 4x 1TB OSDs, you'll have 1TB useable
[18:00] <scuttlemonkey> ^^
[18:00] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[18:00] <redhot> iggy: ahha, seems like I've got it :) thanks!
[18:01] <redhot> Can the replication be 0.5x, for 50% overhead?
[18:01] <iggy> no
[18:01] <redhot> Assuming crash of 1 of 3 servers does not affect data
[18:01] <redhot> Ahh, so 1x is minimum?
[18:01] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[18:01] <iggy> yes...
[18:02] <iggy> <1x would mean you wouldn't bother writing out half your data
[18:02] <iggy> it has to be a whole number
[18:03] <iggy> 1x means if you lose an OSD, you lose whatever was on that OSD
[18:05] <redhot> Ok, clear now. It should be integer, right?
[18:05] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[18:08] <iggy> yes, I would think it was obvious, but maybe the docs should state that explicitly
[18:09] <redhot> iggy: Nice, thanks. Is there any documentation concerning this particular replication factor?
[18:09] * tnt (~tnt@86.188-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:10] <iggy> I honestly don't know... I haven't even looked at the new docs
[18:10] <iggy> when I got into ceph, there was a wiki with like 3 pages
[18:10] <iggy> I made the 4th one
[18:12] * sleinen1 (~Adium@2001:620:0:46:bc7a:622:8693:fbb5) has joined #ceph
[18:14] <redhot> iggy: ok, thank you for that info
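Summarizing the thread above in command form (a sketch; 'data' is one of the default pools, and size is the whole-number count of copies):

    # keep 3 copies of every object in the pool; usable space is roughly raw capacity / 3
    ceph osd pool set data size 3
    # 'ceph osd dump' shows the replication size of each pool
    ceph osd dump | grep 'rep size'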
[18:18] * sleinen (~Adium@2001:620:0:26:3900:77f4:d0f5:3f49) Quit (Ping timeout: 480 seconds)
[18:26] <rlr219> sage or sjust: are you available to help me?
[18:26] <rlr219> Any inktank osd expert able to help?
[18:27] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[18:27] <scuttlemonkey> rlr219: it's early yet for west-coasters
[18:28] <scuttlemonkey> but you could toss your question out there and we could chew on it a bit
[18:28] <rlr219> man. I get up at 4:45 CST. LOL
[18:29] <scuttlemonkey> hehe
[18:30] <rlr219> I upgraded my ceph cluster from 0.48.2 to 0.48.3 using the ceph docs about 4:30 CST yesterday. All went well. Cluster was stable. Then last night about 12:02 AM CST, 6 of my 20 OSDs crashed.
[18:30] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[18:31] <rlr219> now my cluster is stuck with unfound PGs.
[18:32] <rlr219> OSDs will not stay running on restart.
[18:32] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[18:33] <rlr219> and thats what i woke up to today. :-S
[18:33] * loicd (~loic@AMontsouris-651-1-194-53.w83-202.abo.wanadoo.fr) Quit (Read error: Connection reset by peer)
[18:33] * loicd (~loic@AMontsouris-651-1-194-53.w83-202.abo.wanadoo.fr) has joined #ceph
[18:33] <scuttlemonkey> man, that's never a fun way to wake up
[18:33] <scuttlemonkey> what kinds of errors are the OSDs throwing when they try to start?
[18:34] <rlr219> I know. I am seeing assert fails or unfound directory errors. It differs from OSD to OSD
[18:34] <scuttlemonkey> anything like this?
[18:34] <scuttlemonkey> http://tracker.newdream.net/issues/3615
[18:35] * stxShadow (~jens@jump.filoo.de) Quit (Quit: Ex-Chat)
[18:37] * match (~mrichar1@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:38] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[18:39] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[18:39] * agh (~agh@www.nowhere-else.org) has joined #ceph
[18:40] <rlr219> kinda, but different: http://pastebin.com/AA2xfUAU
[18:41] * zK4k7g (~zK4k7g@digilicious.com) has joined #ceph
[18:41] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[18:42] <rlr219> for one thing, there was no power down or HD crash. and the OSD ran for 7 hours or so before the errors.
[18:44] <scuttlemonkey> and you have debug settings turned up in your conf?
[18:44] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[18:44] <redhot> Seems like you are in the wrong window, man :)
[18:45] <redhot> I haven't asked about OSD crash
[18:45] <rlr219> I do now. The log i posted here was pre-debug.
[18:45] <rlr219> trust me, you don't want to either.
[18:47] <scuttlemonkey> guessing the devs are just dealing w/ standup and whatnot, but should have someone far more technical to poke at it soon
[18:47] <scuttlemonkey> lemme prod a bit and see what's up
[18:47] <rlr219> thanks.
[18:52] <noob2> does anyone have objections to upgrading one node at a time from 0.56 to 0.56.1? i can't shut down the entire cluster because i have a lot of clients connected to it. I used the dev repo to install 0.56 and now that 0.56.1 is in the stable repo i switched the apt sources to it.
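A rough sketch of the one-node-at-a-time approach noob2 describes, assuming the apt sources already point at the 0.56.1 repo; the usual advice for point releases is to upgrade the monitors first and to let the cluster settle before moving on:

    # on each node in turn
    apt-get update && apt-get install -y ceph
    service ceph restart        # restarts only the daemons on this node
    ceph -s                     # wait for HEALTH_OK before touching the next node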
[18:55] <noob2> as an aside: does anyone have instructions for getting lighttpd working with the gateway?
[18:56] <tnt> noob2: it's fairly straightforward ...
[18:56] <noob2> tnt: i think i remember you saying you did it before
[18:57] * MarcoA (~aroldi@81.88.224.110) Quit (Quit: Sto andando via)
[18:58] * redhot (~kvirc@195.238.92.36) has left #ceph
[18:58] <tnt> noob2: http://pastebin.com/rGQJztpJ
[18:58] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[18:58] <noob2> you the man!
[18:58] <tnt> that's my lighttpd config for radosgw, then just start radosgw as part of the system startup on that socket.
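(Not tnt's actual pastebin, but a rough sketch of the shape such a config usually takes: lighttpd's mod_fastcgi pointed at an already-running radosgw FastCGI socket. The file path and socket path are assumptions and must match 'rgw socket path' in ceph.conf.)

    # e.g. /etc/lighttpd/conf-enabled/10-radosgw.conf (location is distro-dependent)
    server.modules += ( "mod_fastcgi" )
    fastcgi.server = ( "/" => ((
        "socket"      => "/var/run/ceph/radosgw.sock",
        "check-local" => "disable"
    )) )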
[18:58] <noob2> ok
[18:59] <noob2> my coworker said he upgraded to 0.56.1 on the gateway and radosgw-admin segfaults now
[18:59] <noob2> so i'm not sure what happened exactly yet
[18:59] * ScOut3R (~ScOut3R@dsl5401A397.pool.t-online.hu) has joined #ceph
[18:59] <tnt> yes ... there is a bug.
[18:59] <noob2> ok
[18:59] <noob2> so argonaut is the one to stick with for the gateway for now?
[19:01] <tnt> http://tracker.newdream.net/issues/3735
[19:01] <tnt> There is a patch
[19:01] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:02] <noob2> awesome
[19:02] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[19:02] <noob2> tnt: what do you think about my upgrade procedure above? i'm a little nervous to upgrade to 56.1 because 56.0 is working fine so far in production
[19:02] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:03] <iggy> I don't think you gain a lot by upgrading
[19:03] <iggy> except connectivity by old clients
[19:03] <noob2> ok
[19:03] <tnt> AFAIU 0.56 is 'buggy' and has an incompatibility in the wire protocol so ...
[19:03] <noob2> it's proving stable so i think i'll leave it alone
[19:05] <noob2> so tnt: you're saying upgrade as soon as possible?
[19:07] <iggy> noob2: https://github.com/ceph/ceph/blob/72674ad4470a2fb4347610d9aeba651f4e79b72b/doc/release-notes.rst
[19:07] <iggy> might help you decide for yourself
[19:08] <noob2> ok i might have an issue then
[19:08] <noob2> i have rbd clients on 0.55
[19:08] * loicd (~loic@AMontsouris-651-1-194-53.w83-202.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[19:10] <noob2> oh it says if they're running 0.55 they can be left alone
[19:11] <tnt> no ... you can leave them if your server is 0.56.1 ...
[19:12] <tnt> basically 0.56 can only talk correctly to itself AFAIU. Trying to interoperate it with anything else can cause issues.
[19:14] <noob2> ok
[19:14] <noob2> my servers are 0.56.0
[19:14] <noob2> my clients are 0.55
[19:14] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) has joined #ceph
[19:14] * korgon (~Peto@isp-korex-21.167.61.37.korex.sk) Quit (Quit: Leaving.)
[19:15] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) Quit (Remote host closed the connection)
[19:16] <scuttlemonkey> rlr219: sam will be available in ~20 mins and poke his head in here
[19:16] <rlr219> scuttlemonkey Thanks!
[19:16] <scuttlemonkey> np, just sorry it went beyond me :P
[19:17] <rlr219> I am sure when he heard it was me, he groaned a little. ;-)
[19:21] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[19:29] <sjustlaptop> catching up on irc logs, one moment
[19:29] <noob2> ok, on my dev ceph cluster running 0.54-1 on quantal, when I upgrade the first node to 0.56.1 the OSDs won't start back up :(
[19:30] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[19:30] <noob2> 2013-01-09 13:28:28.319324 7fb5e3297780 1 journal close /mnt/ssd/ceph-0/journal
[19:30] <noob2> 2013-01-09 13:28:28.319712 7fb5e3297780 -1 ** ERROR: osd init failed: (95) Operation not supported
[19:31] <PerlStalker> Can some one point me at docs for using qemu+kvm with a remote ceph/rbd cluster?
[19:31] <PerlStalker> I'm, obviously, missing something.
[19:36] * BManojlovic (~steki@85.222.223.220) has joined #ceph
[19:36] <sjustlaptop> rlr219: how many osds did this happen to?
[19:38] <rlr219> 6 out of 20
[19:38] <sjustlaptop> all with that backtrace?
[19:39] <rlr219> no. it seems some had "no such directory" errors and some had "assert" errors.
[19:39] <rlr219> The logs are fairly small on all but one, if you want me to email them.
[19:40] <rlr219> Failed assert errors, I should say
[19:40] <sjustlaptop> you have disk corruption
[19:40] <sjustlaptop> were these all on the same machine?
[19:41] <rlr219> 6 different disks on 3 separate servers.
[19:41] <sjustlaptop> can you tar up and upload the 6 logs to cephdrop@ceph.com?
[19:42] <rlr219> sure.
[19:42] <noob2> ok got my osd's to start. my journal was screwy
[19:43] <mikedawson> PerlStalker: I've had luck starting here and filling in some gaps http://ceph.com/docs/master/rbd/rbd-openstack/
[19:43] <PerlStalker> mikedawson: Thanks.
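For reference, a minimal sketch of what the qemu side can look like once the linked doc's setup is done (pool and image names are made up, and the exact drive options vary by qemu version):

    # create an image in the cluster, then boot a guest straight off it
    qemu-img create -f rbd rbd:rbd/vm1-disk 10G
    qemu-system-x86_64 -m 1024 \
        -drive file=rbd:rbd/vm1-disk:conf=/etc/ceph/ceph.conf,if=virtio,cache=none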
[19:47] <noob2> when ceph is recovering does it generally wait for low times in I/O to remap blocks?
[19:47] <noob2> i noticed if writes are going on my backfill slows down a lot
[19:48] * joao (~JL@89.181.159.29) has joined #ceph
[19:48] * ChanServ sets mode +o joao
[19:49] <sjustlaptop> rlr219: 5/6 of the osds have inconsistent leveldb stores
[19:49] <sjustlaptop> that is very strange
[19:50] <sjustlaptop> what mount options are you giving to xfs?
[19:50] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[19:53] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:02] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Remote host closed the connection)
[20:11] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[20:12] * ScOut3R (~ScOut3R@dsl5401A397.pool.t-online.hu) Quit (Remote host closed the connection)
[20:13] * nwat (~Adium@soenat3.cse.ucsc.edu) has joined #ceph
[20:17] <sjustlaptop> noob2: with 0.55+ it prioritizes client IO over recovery IO
[20:20] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[20:25] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[20:28] * rlr219 (43c87e04@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[20:34] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:35] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[20:36] * mattbenjamin (~matt@65.160.16.60) has joined #ceph
[20:39] * korgon (~Peto@isp-korex-15.164.61.37.korex.sk) has joined #ceph
[20:39] * Oliver1 (~oliver1@ip-178-203-175-61.unitymediagroup.de) has joined #ceph
[20:41] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[20:41] * Cube (~Cube@12.248.40.138) Quit (Read error: Connection reset by peer)
[20:42] * dok (~dok@static-50-53-68-158.bvtn.or.frontiernet.net) Quit (Ping timeout: 480 seconds)
[20:42] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[20:46] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Ping timeout: 480 seconds)
[20:46] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[20:47] * dok (~dok@static-50-53-68-158.bvtn.or.frontiernet.net) has joined #ceph
[20:47] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[21:01] * miroslav (~miroslav@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[21:01] * schlitzer (~schlitzer@ip-109-90-143-216.unitymediagroup.de) Quit (Remote host closed the connection)
[21:02] * jluis (~JL@89.181.159.29) has joined #ceph
[21:03] <jluis> well, this is weird
[21:03] <jluis> I must have an irc session going on somewhere
[21:04] * joao (~JL@89.181.159.29) Quit (Remote host closed the connection)
[21:04] * jluis is now known as joao
[21:06] <joao> if someone has the time, a review of the topmost commit on wip-3629 would be greatly appreciated ;)
[21:06] <joao> sagewk, gregaf ^
[21:08] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[21:11] * ScOut3R (~ScOut3R@catv-86-101-215-1.catv.broadband.hu) has joined #ceph
[21:12] * ScOut3R (~ScOut3R@catv-86-101-215-1.catv.broadband.hu) Quit (Remote host closed the connection)
[21:29] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:33] <noob2> sjustlaptop: yeah i noticed that as well :)
[21:35] * miroslav (~miroslav@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[21:51] * dpippenger (~riven@216.103.134.250) has joined #ceph
[22:00] <mikedawson> sjustlaptop: 0.56.1 with no client IO to speak of. SSD Journals. 4 nodes with 1 7200 SATA OSD each. Just added 4 SSD OSDs. 4672 PGs total. Rebalance going very slow. iostat shows very low disk usage. Is there a way to make rebalancing more parallel or higher priority?
[22:00] <sjustlaptop> osd_recovery_active and osd_recovery_max_chunk
[22:01] <sjustlaptop> increasing osd_recovery_active will increase the number of pushes in flight at a time
[22:01] <sjustlaptop> osd_recovery_max_chunk will allow larger pushes
[22:01] <sjustlaptop> are you using rbd?
[22:02] <mikedawson> sjustlaptop: yes to rbd
[22:02] * andreask1 (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[22:02] <sjustlaptop> then you can probably ignore osd_recovery_max_chunk as long as it's set to 8MB
[22:04] <mikedawson> "osd_recovery_max_active": "5", "osd_recovery_max_chunk": "8388608" right now
[22:06] <Kioob> is there a way to tune read performance ? I'm at 20MB/s when data are not in cache on OSD (130MB/s if in cache), on a 10Gb/s network
[22:06] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[22:12] <mikedawson> sjustlaptop: osd_recovery_max_active at 100 now. Getting better performance. Seems quite "bursty". Most of the time most drives and journals are idle, then a quick hit of traffic. Not getting sustained work
[22:12] <sjustlaptop> one sec
[22:13] <sjustlaptop> try osd_max_backfills = 20
[22:15] <sjustlaptop> and osd_backfill_scan_max = 1024
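A sketch of pushing those two values into the running OSDs without a restart, using the flag names discussed here (editing ceph.conf and restarting, as comes up below, works as well):

    ceph osd tell \* injectargs '--osd-max-backfills 20 --osd-backfill-scan-max 1024'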
[22:16] <sjustlaptop> mikedawson: where did you add the 4 new osds?
[22:16] <mikedawson> on the original 4 nodes
[22:17] <sjustlaptop> k
[22:17] <mikedawson> had 8 spindles. removed 4 spindles. adding 4 ssds. will swap 4 remaining spindles for SSDs next
[22:19] <sjustlaptop> k
[22:21] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:22] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:22] * andreask1 (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[22:23] <mikedawson> sjustlaptop: moving a bit faster now, thx
[22:23] <sjustlaptop> no problem, how fast would you estimate?
[22:23] <mikedawson> 2013-01-09 16:23:25.533589 mon.0 [INF] pgmap v185289: 4672 pgs: 2901 active+clean, 1412 active+remapped+wait_backfill, 336 active+recovery_wait, 12 active+remapped+backfilling, 1 active+recovery_wait+remapped, 10 active+recovering; 106 GB data, 196 GB used, 12881 GB / 13077 GB avail; 20504/107791 degraded (19.022%)
[22:24] <mikedawson> .001% a sec
[22:25] <mikedawson> now 20179/107665 degraded (18.742%)
[22:26] <sjustlaptop> oh, did you restart the osds?
[22:26] <mikedawson> yeah. should I inject instead?
[22:26] <sjustlaptop> no, hmm
[22:27] <sjustlaptop> 12 active+remapped+backfilling is odd
[22:27] <sjustlaptop> should be larger
[22:27] <mikedawson> I've been editing ceph.conf, copying it around, then service ceph -a restart osd
[22:27] <sjustlaptop> you set osd_max_backfills = 20?
[22:27] <sjustlaptop> yeah, that's right
[22:28] <mikedawson> typo'ed that one. fixed now
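For the record, the persistent form of those settings in the ceph.conf being copied around would look roughly like this (the values are the ones from this conversation, not general recommendations):

    [osd]
        osd recovery max active = 100
        osd max backfills = 20
        osd backfill scan max = 1024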
[22:29] * agh (~agh@www.nowhere-else.org) has joined #ceph
[22:29] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[22:31] <mikedawson> 2013-01-09 16:30:58.792494 mon.0 [INF] pgmap v185634: 4672 pgs: 3077 active+clean, 1332 active+remapped+wait_backfill, 189 active+recovery_wait, 40 active+remapped+backfilling, 1 active+recovery_wait+remapped, 33 active+recovering; 106 GB data, 198 GB used, 12878 GB / 13077 GB avail; 19099/107123 degraded (17.829%)
[22:31] <sjustlaptop> much better
[22:31] <sjustlaptop> any faster?
[22:31] <mikedawson> yes
[22:32] <sjustlaptop> ok, that's a tunable we are still trying to nail down
[22:32] <sjustlaptop> we may want to up the default
[22:32] <mikedawson> from gaining 0.001% to 0.01% per second
[22:32] <mikedawson> first order of magnitude accomplished
[22:32] <sjustlaptop> you might try kicking it up to 40
[22:33] <sjustlaptop> there is a fair amount of overhead in backfilling a single pg, not sure how many you need to progress concurrently to cover the overhead
[22:33] <mikedawson> how high would be lunacy?
[22:33] <sjustlaptop> well, argonaut had no limit, so that would be around 400
[22:33] <sjustlaptop> in your case
[22:33] <sjustlaptop> I wouldn't go that high
[22:34] <mikedawson> i heeded your advice. 200 it is ;)
[22:36] <sjustlaptop> you might see memory use balloon
[22:37] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[22:37] <mikedawson> 4988 root 20 0 1617m 1.2g 5564 S 97 2.5 0:33.28 ceph-osd
[22:37] <mikedawson> 3709 root 20 0 950m 548m 5360 S 45 1.1 0:22.26 ceph-osd
[22:39] * korgon (~Peto@isp-korex-15.164.61.37.korex.sk) Quit (Quit: Leaving.)
[22:43] <sjustlaptop> is that much higher than usual?
[22:43] <mikedawson> sjustlaptop: does the amount of used space make any difference in the process? I don't have much (106GB data). CPU is 99% idle on all nodes. Lots of free RAM. Network is 99% idle. Disks seem to be largely idle.
[22:44] <sjustlaptop> not really
[22:44] <sjustlaptop> is it going faster?
[22:44] <sjustlaptop> how many in +backfilling?
[22:44] <mikedawson> sjustlaptop: I plan on 1.5GB of RAM per OSD, so it doesn't bother me
[22:45] <mikedawson> 2013-01-09 16:44:59.059740 mon.0 [INF] pgmap v186375: 4672 pgs: 3371 active+clean, 508 active+remapped+wait_backfill, 30 active+recovery_wait, 722 active+remapped+backfilling, 1 active+recovery_wait+remapped, 40 active+recovering; 106 GB data, 214 GB used, 12863 GB / 13077 GB avail; 15615/105249 degraded (14.836%)
[22:45] <sjustlaptop> k
[22:45] <mikedawson> sjustlaptop: i feel like there are some sleep(1000)s in here somewhere
[22:49] * vata (~vata@208.88.110.46) Quit (Ping timeout: 480 seconds)
[22:50] <mikedawson> sjustlaptop: I've done this with up to 12 nodes and 36 osds and very little data. even replacing one osd seems to take hours. I just can't find the bottleneck in cpu, ram, disk utilization, network utilization
[22:50] * Cube1 (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[22:50] <sjustlaptop> how quickly is it recovering now?
[22:52] <mikedawson> 2013-01-09 16:52:47.148923 mon.0 [INF] pgmap v186818: 4672 pgs: 3521 active+clean, 338 active+remapped+wait_backfill, 14 active+recovery_wait, 1 active+recovering+remapped, 1 active+remapped, 769 active+remapped+backfilling, 28 active+recovering; 106 GB data, 227 GB used, 12850 GB / 13077 GB avail; 12752/103724 degraded (12.294%)
[22:53] <mikedawson> so 14.836% -> 12.294% in 8 minutes or so
[22:54] <mikedawson> definitely better
[22:57] * Cube (~Cube@12.248.40.138) has joined #ceph
[23:00] * vata (~vata@2607:fad8:4:6:221:5aff:fe2a:d1dd) has joined #ceph
[23:00] * benner (~benner@193.200.124.63) Quit (Read error: Connection reset by peer)
[23:04] <sjustlaptop> mikedawson: actually, try increasing osd_disk_threads to 5
[23:05] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[23:06] * benner (~benner@193.200.124.63) has joined #ceph
[23:10] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[23:10] * buck (~buck@bender.soe.ucsc.edu) Quit (Quit: Leaving.)
[23:11] <mikedawson> changed osd_disk_threads to 5 and everything seems to have stopped moving. ceph -w is no longer spitting out a line per second; it just waits 5 or 10 seconds between prints.
[23:11] <mikedawson> and I'm getting 2013-01-09 17:10:36.348132 osd.7 [WRN] slow request 184.449889 seconds old, received at 2013-01-09 17:07:31.898188: osd_op(client.10940.0:67 rbd_header.2ab32ae8944a [assert-version 0~1560,watch 1~0
[23:12] <sjustlaptop> oh, sounds like we have found a bug
[23:12] <sjustlaptop> I'll file a bug, you probably want to put it back to 1
[23:14] <mikedawson> sjustlaptop: thanks! have to head out, but will offer any more details I can
[23:14] <sjustlaptop> k
[23:14] <sjustlaptop> should be easy to reproduce
[23:14] * Oliver1 (~oliver1@ip-178-203-175-61.unitymediagroup.de) has left #ceph
[23:16] * nwat (~Adium@soenat3.cse.ucsc.edu) Quit (Quit: Leaving.)
[23:23] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:28] * noob2 (~noob2@ext.cscinfo.com) has left #ceph
[23:42] * dpippenger (~riven@216.103.134.250) Quit (Remote host closed the connection)
[23:43] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[23:43] * agh (~agh@www.nowhere-else.org) has joined #ceph
[23:49] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:49] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:59] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:59] * loicd (~loic@magenta.dachary.org) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.