#ceph IRC Log

IRC Log for 2014-08-06

Timestamps are in GMT/BST.

[0:00] <rweeks> yup
[0:00] <sgnut> And do you have an estimate of the number of monitor servers needed depending on OSD count and/or storage size?
[0:00] <rweeks> sgnut, the Ceph documentation does have a lot of system requirements and guidelines like that.
[0:02] <sgnut> OK thanx
[0:03] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[0:03] * baylight (~tbayly@69.169.150.21.provo.static.broadweavenetworks.net) Quit (Ping timeout: 480 seconds)
[0:03] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[0:03] * rweeks installs a Ceph sticker over the Dell logo on this awful ultrabook POS
[0:04] * sgnut (~holoirc@147.Red-83-61-86.dynamicIP.rima-tde.net) Quit (Quit: sgnut)
[0:05] * rturk is now known as rturk|afk
[0:07] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[0:12] * colinm (~colinm@71-223-134-17.phnx.qwest.net) Quit (Quit: colinm)
[0:14] * baylight (~tbayly@74-220-196-40.unifiedlayer.com) has joined #ceph
[0:19] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:20] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[0:21] * colinm (~colinm@71-223-134-17.phnx.qwest.net) has joined #ceph
[0:21] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[0:29] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: leaving)
[0:29] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[0:30] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:42] * bbutton (~bbutton@66.192.187.30) Quit (Quit: This computer has gone to sleep)
[0:43] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[0:48] * rendar (~I@host228-179-dynamic.1-87-r.retail.telecomitalia.it) Quit ()
[0:51] <flaf> Hi, if I follow this point of the doc http://ceph.com/docs/master/install/manual-deployment/#monitor-bootstrapping (just #monitor-bootstrapping), after a reboot (on Ubuntu 14.04), the ceph-mon daemon doesn't start. Must I create an empty "done" file in "/var/lib/ceph/mon/$cluster-$id"?
[0:51] <flaf> Is that correct?
[0:53] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[0:55] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[0:55] <flaf> And an empty "upstart" file too?
[0:57] * lpabon (~quassel@66-189-8-115.dhcp.oxfr.ma.charter.com) Quit (Ping timeout: 480 seconds)
[0:57] <dmick> flaf: that seems to be correct, looking at ceph-mon-all-starter.conf
[0:58] * jksM (~jks@3e6b5724.rev.stofanet.dk) Quit (Quit: jksM)
[0:59] <flaf> dmick: ok thank you. Maybe it could be mentioned in the doc (or maybe I missed something).
[0:59] <dmick> yeah...probably should be there
[0:59] <dmick> issues and pull requests gratefully accepted
[1:00] <flaf> Ok ;)
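A minimal sketch of what dmick is confirming here, assuming the default "ceph" cluster name and a monitor id of "node1" (both placeholders):

    sudo touch /var/lib/ceph/mon/ceph-node1/done
    sudo touch /var/lib/ceph/mon/ceph-node1/upstart     # marker checked by ceph-mon-all-starter.conf
    sudo start ceph-mon id=node1

With both marker files present, the upstart ceph-mon-all job should also bring the monitor back up after a reboot.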
[1:02] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[1:03] * jksM (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[1:03] * jksM (~jks@3e6b5724.rev.stofanet.dk) Quit ()
[1:04] * baylight (~tbayly@74-220-196-40.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[1:04] * fsimonce (~simon@host225-92-dynamic.21-87-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:05] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:08] * sarob (~sarob@2001:4998:effd:600:e8b0:bbcf:fa1c:e209) Quit (Remote host closed the connection)
[1:08] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:09] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:10] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[1:10] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit ()
[1:10] * dmsimard is now known as dmsimard_away
[1:11] * Cube (~Cube@65.115.107.67) has joined #ceph
[1:11] * garphy is now known as garphy`aw
[1:12] * oms101 (~oms101@p20030057EA2FF700EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:13] * Gnomethrower (~wings@97.65.103.250) has joined #ceph
[1:16] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[1:21] * oms101 (~oms101@p20030057EA24DB00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:28] * ircolle (~Adium@2601:1:a580:145a:824:35ed:6d6:bb2d) Quit (Quit: Leaving.)
[1:30] * bbutton (~bbutton@66.192.187.30) has joined #ceph
[1:30] * rturk|afk is now known as rturk
[1:30] * ircolle (~Adium@2601:1:a580:145a:1521:a6ec:445c:1933) has joined #ceph
[1:31] <seapasulli> I can't get my ceph cluster to just forget about these objects and move on. I have existing data in the pool that I need but that data is fine.
[1:31] <seapasulli> recovery 52/14024943 objects degraded (0.000%); 22/4674981 unfound (0.000%)
[1:31] * rturk is now known as rturk|afk
[1:31] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Thanks for everything! :-) see you :-))
[1:31] <seapasulli> anyone know how to go about making it just remove the broken objects?
[1:32] * sjustwork (~sam@2607:f298:a:607:d129:8cf9:5f9b:511e) Quit (Quit: Leaving.)
[1:34] * colinm (~colinm@71-223-134-17.phnx.qwest.net) Quit (Quit: colinm)
[1:41] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Ping timeout: 480 seconds)
[1:43] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[1:44] * baylight (~tbayly@204.15.85.169) has joined #ceph
[1:44] * bbutton (~bbutton@66.192.187.30) Quit (Quit: This computer has gone to sleep)
[1:54] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) has joined #ceph
[1:55] * zipwow (zipwow@b.clients.kiwiirc.com) has joined #ceph
[1:55] <zipwow> I need some help petting the cat backwards.
[1:56] <zipwow> I have a network mounted drive with lots of files, and I'd like to use ceph to expose it as an S3 API.
[1:56] <zipwow> The quirk here is that while I can make it read-only, I can't change its format.
[1:57] <zipwow> Is there some way to tell ceph how to follow a pattern to search in its data directory?
[1:57] <zipwow> eg the key "something/somethingelse" is in the folder "something"
[1:58] <dmick> zipwow: no, rgw has very detailed ideas about how it stores objects
[1:58] <dmick> it's not really a shim layer between a filesystem and an S3 system; it's its own storage system that happens to use a filesystem for its own needs
[1:58] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:59] <zipwow> Ok, that's what I expected. Looks like I'll probably dust off some of the S3 mock/fake servers and tweak them.
[1:59] <zipwow> Thanks!
[2:00] <seapasulli> stupid question that I should know the answer to :: is there a way to list the pgs in pool or figure out what pool a PG belongs to?
[2:02] * zipwow (zipwow@b.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[2:02] <dmick> it's not exactly clean, but the pg number is <poolindex>.<pg-within-pool>
[2:02] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) has joined #ceph
[2:03] <dmick> several commands will show you the pool index, including ceph osd dump
[2:04] <seapasulli> ah thanks!
[2:04] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[2:04] <seapasulli> I am trying to figure out how to get my cluster to forget some objects basically
[2:04] <seapasulli> recovery 52/14024943 objects degraded (0.000%); 22/4674981 unfound (0.000%)
[2:05] <seapasulli> and use the replications to just rebuild (I hope that's right).
[2:06] <seapasulli> Thanks dmick
[2:06] <dmick> gl
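A sketch of the mapping dmick describes, using pool index 3 purely as an example:

    ceph osd dump | grep '^pool'                 # maps pool index to pool name, e.g. "pool 3 'rbd' ..."
    ceph pg dump pgs_brief | awk '$1 ~ /^3\./'   # PG ids are <pool-index>.<pg>, so this lists pool 3's PGs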
[2:06] * lpabon (~quassel@66-189-8-115.dhcp.oxfr.ma.charter.com) has joined #ceph
[2:07] * lpabon (~quassel@66-189-8-115.dhcp.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[2:07] <seapasulli> yeah I don't know why but it says that 2 of the PGs are stuck since forever but are currently propagating. I am hoping I can just drop the pool and re-create it but if it's important and not my test pool I need to figure out where those objects are / what they are
[2:07] <seapasulli> don't know if this is the best way to go about it
[2:07] <seapasulli> but I hope so :)
[2:08] * yuriw1 (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:10] <seapasulli> nope part of the most important pool I have. Where is the startrek "no" gif
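For what seapasulli is trying to do, the documented (and destructive: it gives up on the unfound data) path is roughly the following; the pgid 2.5 is just a placeholder:

    ceph health detail | grep unfound         # shows which PGs hold the unfound objects
    ceph pg 2.5 list_missing                  # inspect what exactly is unfound in that PG
    ceph pg 2.5 mark_unfound_lost revert      # roll back to prior versions; 'delete' forgets the objects entirely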
[2:16] <dmick> :(
[2:16] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:17] * JC1 (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[2:19] * bbutton (~bbutton@66.192.187.30) has joined #ceph
[2:20] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[2:24] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[2:29] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) has joined #ceph
[2:30] * rweeks (~rweeks@pat.hitachigst.com) Quit (Quit: Leaving)
[2:30] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:30] * [fred] (fred@earthli.ng) Quit (Ping timeout: 480 seconds)
[2:34] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:35] * TiCPU (~jeromepou@12.160.0.155) Quit (Quit: Ex-Chat)
[2:37] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[2:39] * lucas1 (~Thunderbi@222.247.57.50) Quit ()
[2:41] * [fred] (fred@earthli.ng) has joined #ceph
[2:43] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[2:44] * bbutton (~bbutton@66.192.187.30) Quit (Quit: This computer has gone to sleep)
[2:45] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:50] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Ping timeout: 480 seconds)
[2:51] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Ping timeout: 480 seconds)
[3:00] <tchmnkyz> d
[3:05] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[3:05] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[3:05] * ircolle is now known as ircolle-afk
[3:11] * bazli (bazli@d.clients.kiwiirc.com) has joined #ceph
[3:12] <bazli> hi. anyone here?
[3:13] <Sysadmin88> people are here
[3:13] <bazli> i want to know... is there any ceph tool to list the pgs of a specific pool?
[3:13] <bazli> do you have any idea?
[3:17] <dmick> filtering the pg dump output is probably easiest
[3:18] <bazli> i see... there is no other choice right?
[3:19] <bazli> also, is it possible to move/relocate the pgs from osd A to osd B?
[3:19] <dmick> I don't think there's anything to select pgs from a specific pool
[3:19] <dmick> you could conceivably query the pool for pg_num and then write your own loop, but it's perhaps easier to get full json output and parse it
[3:20] <dmick> I don't understand the question for "move from OSD A to OSD B"; a pg is stored redundantly on a set of OSDs, on purpose
[3:20] <bazli> yeah.. no prob. just wonder if there is a tool. no prob, i could do the filtering
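A sketch of the JSON route dmick suggests, assuming the firefly-era pg_stats/pgid field names in the dump output and, again, pool index 1 as an example:

    ceph pg dump --format json 2>/dev/null | \
      python -c 'import json,sys; print "\n".join(p["pgid"] for p in json.load(sys.stdin)["pg_stats"] if p["pgid"].startswith("1."))'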
[3:21] <bazli> # ceph pg map 1.2
[3:21] <bazli> osdmap e3523 pg 1.2 (1.2) -> up [2,3,0] acting [2,3,0]
[3:22] <bazli> it tells me that pg 1.2 exist in osd.2 osd.3 and osd.0 right?
[3:23] <bazli> could i change it. say 2,3,1 ?
[3:23] <dmick> it tells you that crush says to replicate pg 1.2 onto 2,3,0; it's not a guarantee that 1.2 exists
[3:23] <dmick> and, no, you can't arbitrarily change the specific OSD mappings for a pg. Why would you want to, though? There might be a better way to that endpoint
[3:25] <bazli> yeah.. just curious and want to know actually...
[3:25] * b0e (~aledermue@x2f2a49c.dyn.telefonica.de) Quit (Quit: Leaving.)
[3:25] <dmick> basically placement is a function of the CRUSH meatgrinder
[3:26] <dmick> and it needs to satisfy a bunch of constraints, be reasonably well-distributed, etc.
[3:26] <dmick> that's kind of the meat of the fault-tolerance
[3:26] <dmick> if you have to evict things from an OSD for maintenance, you can do that
[3:26] <bazli> since i'm playing around with rep size 1, i would like to know if i can tell crush to change the map for certain pgs with rep size 1 to else where, so that i could do maintenance on certain osd server..
[3:27] <dmick> the way you do that is to either unweight the OSD gradually until it's at 0, or all at once with "down", and let the cluster move data off until it's empty
[3:27] <dmick> then you can do what you want and bring it back online when done
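A sketch of that drain-then-repair sequence for an example osd.3 on an Ubuntu/upstart box (the add-or-rm-osds doc linked below covers the full removal case):

    ceph osd out 3                     # or taper off with: ceph osd crush reweight osd.3 0
    ceph -w                            # wait until backfilling finishes and PGs are active+clean again
    sudo stop ceph-osd id=3            # now it is safe to do the maintenance
    sudo start ceph-osd id=3           # afterwards, bring it back ...
    ceph osd in 3                      # ... and let data flow back onto it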
[3:28] * bbutton (~bbutton@206.169.237.4) has joined #ceph
[3:28] <bazli> ah... i see
[3:28] <dmick> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual
[3:28] <bazli> correct. now i'm clear
[3:29] <bazli> thanks dmick!
[3:29] <dmick> I think of it as setting up a roadblock. you can stop incoming traffic but you still have to let the road drain before you repair the bridge
[3:29] <bazli> yep exactly..
[3:31] <bazli> you can't simply take it down quickly and do what you want straight away. i missed that part actually, because now i'm having pgs with incomplete status
[3:33] <bazli> nvm this is a test servers. so things were not so important
[3:35] <bazli> ok. i have to go now. thanks again for the explanation dmick!
[3:35] <dmick> you bet
[3:36] * bazli (bazli@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[3:38] * LeaChim (~LeaChim@host86-161-89-237.range86-161.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:38] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[3:39] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[3:40] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) has left #ceph
[3:41] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[3:42] * lupu (~lupu@86.107.101.214) has joined #ceph
[3:44] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[3:44] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[3:45] * zhaochao (~zhaochao@106.39.255.170) has joined #ceph
[3:50] * bazli (bazli@d.clients.kiwiirc.com) has joined #ceph
[3:54] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[3:56] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[3:57] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[4:02] * bazli (bazli@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[4:11] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[4:14] <Jakey> hey dmick
[4:14] <Jakey> [ceph@node7 m_cluster]$ ceph health
[4:14] <Jakey> HEALTH_OK
[4:15] <Jakey> i got that return instead of active+clean
[4:15] <Jakey> whats the diff
[4:16] <dmick> what?
[4:17] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:19] * bazli (bazli@d.clients.kiwiirc.com) has joined #ceph
[4:19] * bazli (bazli@d.clients.kiwiirc.com) Quit ()
[4:20] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[4:20] * haomaiwang (~haomaiwan@124.248.208.2) has joined #ceph
[4:21] <Jakey> dmick: is that the expected result
[4:21] <Jakey> dmick: is my cluster operable now
[4:22] * RameshN (~rnachimu@101.222.234.179) has joined #ceph
[4:22] <dmick> is OK a term you're not familiar with? It means "all is well"
[4:23] * bbutton_ (~bbutton@206.169.237.4) has joined #ceph
[4:23] <Jakey> lol
[4:23] * bbutton (~bbutton@206.169.237.4) Quit (Read error: Connection reset by peer)
[4:23] <Jakey> Your cluster should return an active + clean state when it has finished peering.
[4:23] <Jakey> thats what it said on the docs
[4:23] <Jakey> what does that mean
[4:26] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[4:26] <dmick> active+clean is a pg state
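In practice the two line up: HEALTH_OK plus every PG active+clean. The PG-level view Jakey is quoting from the docs can be checked with, for example:

    ceph pg stat        # e.g. "v1234: 192 pgs: 192 active+clean; ..."
    ceph -s             # the pgmap line shows the same per-state breakdown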
[4:26] <Jakey> and should i install the monitors on each osd nodes?
[4:28] * vz (~vz@122.167.89.39) has joined #ceph
[4:30] <dmick> http://ceph.com/docs/master/rados/operations/add-or-rm-mons/?highlight=odd%20number#adding-monitors
[4:34] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[4:35] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:36] <Jakey> dmick: i've just create new mons on node1 node8 node4 and this is the result of the quorum status
[4:36] <Jakey> https://www.irccloud.com/pastebin/WWWfWYAl
[4:36] <Jakey> what does it mean
[4:37] <dmick> I have to go; perhaps someone else in the channel can help
[4:38] <Jakey> okay be back soon :P
[4:39] * Gnomethrower (~wings@97.65.103.250) Quit (Ping timeout: 480 seconds)
[4:40] * Gnomethrower (~wings@97.65.103.250) has joined #ceph
[4:42] * cok (~chk@46.30.211.29) Quit (Quit: Leaving.)
[4:44] * bbutton__ (~bbutton@206.169.237.4) has joined #ceph
[4:44] * bbutton_ (~bbutton@206.169.237.4) Quit (Read error: Connection reset by peer)
[4:46] <bens> dmick you can't leave, there are #groanjokes
[4:49] <Jakey> he's already gone biatch
[4:52] <Jakey> hi i'm getting this message
[4:52] <Jakey> [node1][INFO ] monitor: mon.node1 is currently at the state of probing
[4:52] <Jakey> [node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[4:52] <Jakey> [node1][WARNIN] node1 is not defined in `mon initial members`
[4:52] <Jakey> [node1][WARNIN] monitor node1 does not exist in monmap
[4:53] <Jakey> why?
[4:53] <Jakey> and i can't see my mon on the mon_status
[4:55] <Jakey> ???
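Those two warnings usually mean the new monitors were bootstrapped as if they were listed in `mon initial members` when they are not. A sketch of adding a monitor to an already-running cluster (hostname and address are placeholders):

    ceph-deploy mon add node1                # "mon add" joins the existing quorum, unlike "mon create"
    # or by hand: register it in the monmap first, then start the daemon
    ceph mon add node1 10.0.0.11:6789
    sudo start ceph-mon id=node1

It also helps to list the new hosts under `mon host` (and, for fresh deployments, `mon initial members`) in ceph.conf.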
[5:07] * shang (~ShangWu@175.41.48.77) has joined #ceph
[5:09] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[5:11] * Cube1 (~Cube@66-87-79-91.pools.spcsdns.net) has joined #ceph
[5:19] * Cube (~Cube@65.115.107.67) Quit (Ping timeout: 480 seconds)
[5:24] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[5:24] * Vacum (~vovo@i59F79B73.versanet.de) has joined #ceph
[5:30] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[5:31] * Vacum_ (~vovo@i59F79BE3.versanet.de) Quit (Ping timeout: 480 seconds)
[5:32] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:33] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:35] * burley (~khemicals@cpe-98-28-233-158.woh.res.rr.com) Quit (Quit: burley)
[5:35] * burley (~khemicals@cpe-98-28-233-158.woh.res.rr.com) has joined #ceph
[5:38] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[5:38] * burley (~khemicals@cpe-98-28-233-158.woh.res.rr.com) Quit (Read error: Connection reset by peer)
[5:45] * haomaiwang (~haomaiwan@124.248.208.2) Quit (Ping timeout: 480 seconds)
[5:46] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[5:55] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[5:58] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[6:00] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:01] * capri_oner (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[6:01] * capri_oner (~capri@212.218.127.222) has joined #ceph
[6:02] * Elbandi_ (~ea333@elbandi.net) has joined #ceph
[6:02] * Vacum_ (~vovo@i59F79B73.versanet.de) has joined #ceph
[6:03] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:03] * nyerup (irc@jespernyerup.dk) Quit (Remote host closed the connection)
[6:03] * nyerup (irc@jespernyerup.dk) has joined #ceph
[6:04] * Elbandi (~ea333@elbandi.net) Quit (Ping timeout: 480 seconds)
[6:04] * Vacum (~vovo@i59F79B73.versanet.de) Quit (Ping timeout: 480 seconds)
[6:05] * flaf (~flaf@2001:41d0:1:7044::1) Quit (Ping timeout: 480 seconds)
[6:05] * flaf (~flaf@2001:41d0:1:7044::1) has joined #ceph
[6:06] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) has joined #ceph
[6:12] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[6:12] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[6:14] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Leaving)
[6:18] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[6:21] * bbutton__ (~bbutton@206.169.237.4) Quit (Quit: This computer has gone to sleep)
[6:27] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:28] * vbellur (~vijay@122.167.203.190) Quit (Ping timeout: 480 seconds)
[6:29] * swami (~swami@110.225.2.187) has joined #ceph
[6:33] * vz (~vz@122.167.89.39) Quit (Remote host closed the connection)
[6:35] <bloodice> I wonder if anyone has created a web based monitoring tool for ceph clusters.... or an add on for a monitoring system like nagios
[6:36] <bloodice> ahh there is a health check for nagios... but thats it... hrm could use nagios graph to graph that
[6:37] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Remote host closed the connection)
[6:38] <bloodice> i am thinking more like a graphical representation of pools/hosts osd status grid....
[6:39] <bloodice> throughput
[6:39] <bloodice> rados transactions
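A sketch of the sort of Nagios check being described — not the actual plugin mentioned below, just a hypothetical wrapper around `ceph health`:

    #!/bin/sh
    # exit codes follow the Nagios convention: 0=OK, 1=WARNING, 2=CRITICAL
    STATUS=$(ceph health 2>&1)
    case "$STATUS" in
        HEALTH_OK*)   echo "OK - $STATUS";       exit 0 ;;
        HEALTH_WARN*) echo "WARNING - $STATUS";  exit 1 ;;
        *)            echo "CRITICAL - $STATUS"; exit 2 ;;
    esac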
[6:39] <iggy> bloodice: kalamari?
[6:39] <iggy> wait, calamari
[6:41] <bloodice> lol, i just found that
[6:41] <bloodice> https://github.com/ceph/calamari-clients/blob/master/screenshots/screenshots.md
[6:43] * swami (~swami@110.225.2.187) Quit (Ping timeout: 480 seconds)
[6:43] <bloodice> wonder how much effort that is to setup
[6:43] * lucas1 (~Thunderbi@222.247.57.50) Quit (Quit: lucas1)
[6:43] * rdas (~rdas@121.244.87.115) has joined #ceph
[6:43] <bloodice> cant be that bad since ceph does all of its own monitoring
[6:44] <bloodice> Hahah "However, early adopters are welcome to try getting Calamari up and running, and feedback to the mailing list will certainly be appreciated."
[6:48] <bloodice> took me two days to get graphs to work with nagios.... then i found out it cant graph service uptime...grr
[6:49] <bloodice> Then there was the lilac addon for nagios
[6:49] <bloodice> which required editing to work 100%
[6:50] <bloodice> linux is definitely not for the lazy admin
[6:50] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:53] * aknapp (~aknapp@64.202.160.233) has joined #ceph
[6:53] <bloodice> iggy: i had actually seen that package last year, but it was only available with their enterprise product, glad they took it open source!
[6:55] * swami (~swami@110.225.2.187) has joined #ceph
[6:56] <iggy> it happened when RH bought them
[6:56] <iggy> so thank RH
[6:58] * vbellur (~vijay@121.244.87.117) has joined #ceph
[6:59] * aknapp (~aknapp@64.202.160.233) Quit (Remote host closed the connection)
[7:04] * bandrus (~oddo@216.57.72.205) Quit (Quit: Leaving.)
[7:04] <bloodice> yay red hat
[7:04] <bloodice> i came really close to moving our entire vmware setup to redhat kvm... but the pricing makes it more expensive on redhat than to just stay with vmware
[7:05] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[7:05] * shang (~ShangWu@175.41.48.77) has joined #ceph
[7:07] <bloodice> Technically, being a non-profit, we are better off switching to hyperv given ms pricing....
[7:11] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[7:13] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[7:13] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[7:14] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[7:14] * KevinPerks (~Adium@2606:a000:80a1:1b00:42:a31c:3b07:d5e1) Quit (Quit: Leaving.)
[7:14] * swami1 (~swami@49.32.0.106) has joined #ceph
[7:18] * ashishchandra (~ashish@49.32.0.102) has joined #ceph
[7:20] * swami (~swami@110.225.2.187) Quit (Ping timeout: 480 seconds)
[7:48] * swami (~swami@110.225.2.187) has joined #ceph
[7:51] * bkopilov (~bkopilov@213.57.16.224) Quit (Ping timeout: 480 seconds)
[7:51] * swami1 (~swami@49.32.0.106) Quit (Ping timeout: 480 seconds)
[7:52] * ashishchandra (~ashish@49.32.0.102) Quit (Ping timeout: 480 seconds)
[7:55] * michalefty (~micha@p20030071CE6394500D9AE333A7E85DF3.dip0.t-ipconnect.de) has joined #ceph
[7:59] * thb (~me@2a02:2028:131:fc40:6060:d2d3:ad02:67f2) has joined #ceph
[8:02] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[8:07] * vz (~vshankar@121.244.87.117) has joined #ceph
[8:15] * kanagaraj_ (~kanagaraj@nat-pool-blr-t.redhat.com) has joined #ceph
[8:16] * lalatenduM_ (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[8:17] * rdas_ (~rdas@121.244.87.115) has joined #ceph
[8:18] * ashishchandra (~ashish@49.32.0.102) has joined #ceph
[8:20] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[8:21] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:21] * lala__ (~lalatendu@121.244.87.117) has joined #ceph
[8:21] * kanagaraj__ (~kanagaraj@121.244.87.117) has joined #ceph
[8:21] * rdas (~rdas@121.244.87.115) Quit (Ping timeout: 480 seconds)
[8:21] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:21] * lalatenduM (~lalatendu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:22] * swami (~swami@110.225.2.187) Quit (Quit: Leaving.)
[8:24] * kanagaraj_ (~kanagaraj@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[8:24] * lalatenduM_ (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[8:31] * vbellur (~vijay@121.244.87.117) has joined #ceph
[8:31] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has joined #ceph
[8:36] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[8:39] * lupu (~lupu@86.107.101.214) has joined #ceph
[8:45] * rendar (~I@host163-182-dynamic.3-79-r.retail.telecomitalia.it) has joined #ceph
[8:50] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:53] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:53] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[8:54] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[8:54] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has joined #ceph
[8:55] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[8:56] <longguang> what is inc\uosdmap.99__0_F4E987D3__none?
[8:56] <longguang> in directory current/meta/
[9:00] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[9:02] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[9:05] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:05] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) Quit (Read error: Connection reset by peer)
[9:06] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) has joined #ceph
[9:07] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:08] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[9:10] * bbutton__ (~bbutton@206.169.237.4) has joined #ceph
[9:13] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[9:14] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) has joined #ceph
[9:20] * bbutton__ (~bbutton@206.169.237.4) Quit (Quit: This computer has gone to sleep)
[9:23] * ashishchandra (~ashish@49.32.0.102) Quit (Ping timeout: 480 seconds)
[9:26] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) Quit (Read error: Connection reset by peer)
[9:27] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) has joined #ceph
[9:27] * ikrstic (~ikrstic@109-93-162-27.dynamic.isp.telekom.rs) has joined #ceph
[9:28] * analbeard (~shw@support.memset.com) has joined #ceph
[9:28] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:28] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:29] * baylight (~tbayly@204.15.85.169) Quit (Ping timeout: 480 seconds)
[9:39] * Sysadmin88 (~IceChat77@2.218.9.98) Quit (Quit: Friends help you move. Real friends help you move bodies.)
[9:41] <morfair> What is bootstrap-osd dir?
[9:41] * fsimonce (~simon@host225-92-dynamic.21-87-r.retail.telecomitalia.it) has joined #ceph
[9:43] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[9:52] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[9:58] * ksk (~ksk@im.knubz.de) Quit (Quit: leaving)
[10:01] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[10:11] * ashishchandra (~ashish@49.32.0.66) has joined #ceph
[10:11] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[10:15] * zack_dol_ (~textual@e0109-114-22-0-42.uqwimax.jp) has joined #ceph
[10:15] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) Quit (Read error: Connection reset by peer)
[10:16] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[10:18] * b0e (~aledermue@x2f277dd.dyn.telefonica.de) has joined #ceph
[10:22] <kippi> morning
[10:22] <kippi> I have ceph up and running, looking amazing
[10:22] <kippi> What is the best way to get calamari installed?
[10:22] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Read error: No route to host)
[10:23] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[10:28] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[10:30] * b0e (~aledermue@x2f277dd.dyn.telefonica.de) Quit (Quit: Leaving.)
[10:30] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Read error: Operation timed out)
[10:33] * ninkotech (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[10:34] <kippi> will calamari-server only run with a host with vbox on it?
[10:35] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[10:40] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:40] * lupu (~lupu@86.107.101.214) has joined #ceph
[10:40] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[10:41] * cok (~chk@2a02:2350:18:1012:15ea:4de6:d0cf:429f) has joined #ceph
[10:42] * LeaChim (~LeaChim@host86-159-115-162.range86-159.btcentralplus.com) has joined #ceph
[10:48] <Jakey> https://www.irccloud.com/pastebin/FmL5g9ur
[10:48] <Jakey> why am i just getting rank -1 ????? ^
[10:48] <Jakey> please help
[10:51] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[10:53] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[10:54] * jksM (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[10:57] * kapil (~ksharma@2620:113:80c0:5::2222) has joined #ceph
[10:58] * ade (~abradshaw@a181.wifi.FSV.CVUT.CZ) has joined #ceph
[11:01] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[11:03] * drankis (~drankis__@89.111.13.198) Quit (Quit: Leaving)
[11:03] * drankis (~drankis__@89.111.13.198) has joined #ceph
[11:07] * ade (~abradshaw@a181.wifi.FSV.CVUT.CZ) Quit (Ping timeout: 480 seconds)
[11:08] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[11:08] <steveeJ> is there a way to retrieve stripe_unit and stripe_count of an image? rbd info only shows the order
[10:10] * zack_dol_ (~textual@e0109-114-22-0-42.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:11] * swami (~swami@49.32.0.94) has joined #ceph
[11:11] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit (Quit: Leaving.)
[11:12] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) has joined #ceph
[11:14] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[11:15] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Quit: Leaving...)
[11:22] * root________ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[11:22] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) has joined #ceph
[11:22] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit ()
[11:23] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) has joined #ceph
[11:25] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) has joined #ceph
[11:25] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:29] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[11:30] * lucas1 (~Thunderbi@222.247.57.50) Quit (Quit: lucas1)
[11:31] * madkiss (~madkiss@46.114.22.67) has joined #ceph
[11:33] <longguang> ceph osd dump, does it get osdmap?
[11:37] * zhangdongmao (~zhangdong@203.192.156.9) Quit (Ping timeout: 480 seconds)
[11:38] * root________ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Read error: No route to host)
[11:38] * root________ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[11:40] * swami1 (~swami@106.216.136.18) has joined #ceph
[11:43] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[11:43] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit (Quit: Leaving.)
[11:43] * ninkotech (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Read error: Connection reset by peer)
[11:45] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[11:47] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Remote host closed the connection)
[11:47] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[11:47] * swami (~swami@49.32.0.94) Quit (Ping timeout: 480 seconds)
[11:51] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Remote host closed the connection)
[11:51] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[11:54] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[11:56] * dis (~dis@109.110.66.143) Quit (Remote host closed the connection)
[11:57] * dis (~dis@109.110.66.143) has joined #ceph
[12:02] * Cube1 (~Cube@66-87-79-91.pools.spcsdns.net) Quit (Quit: Leaving.)
[12:02] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[12:11] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[12:14] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[12:14] <kippi> I have installed and vagrant is running
[12:14] <kippi> however I can't find any packages built
[12:16] * dis (~dis@109.110.66.143) Quit (Remote host closed the connection)
[12:16] * dis (~dis@109.110.66.143) has joined #ceph
[12:18] * swami (~swami@49.32.0.94) has joined #ceph
[12:19] * madkiss1 (~madkiss@46.115.135.54) has joined #ceph
[12:24] * swami1 (~swami@106.216.136.18) Quit (Ping timeout: 480 seconds)
[12:24] * madkiss (~madkiss@46.114.22.67) Quit (Ping timeout: 480 seconds)
[12:34] * lupu (~lupu@86.107.101.214) has joined #ceph
[12:36] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[12:41] * zhaochao (~zhaochao@106.39.255.170) has left #ceph
[12:42] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has joined #ceph
[12:44] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Ping timeout: 480 seconds)
[12:45] <morfair> Guys, help please. I have an /etc/ceph/ceph.keyring file with my keys, but `ceph auth list` shows me different keys!!! `service ceph restart` doesn't change it
[12:45] <morfair> it's on mon
[12:52] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[12:53] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[12:55] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[12:56] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[12:58] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[12:58] * lucas1 (~Thunderbi@222.240.148.154) Quit ()
[13:03] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[13:03] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[13:06] * ade (~abradshaw@natu.fit.cvut.cz) has joined #ceph
[13:06] * allsystemsarego (~allsystem@79.115.170.35) has joined #ceph
[13:07] * joao|lap (~JL@78.29.191.247) has joined #ceph
[13:07] * ChanServ sets mode +o joao|lap
[13:08] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:09] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[13:15] * ade (~abradshaw@natu.fit.cvut.cz) Quit (Ping timeout: 480 seconds)
[13:16] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[13:17] * CAPSLOCK2000 (~oftc@2001:610:748:1::8) Quit (Quit: ZNC - http://znc.in)
[13:18] * CAPSLOCK2000 (~oftc@2001:610:748:1::8) has joined #ceph
[13:22] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[13:22] * drankis (~drankis__@89.111.13.198) Quit (Ping timeout: 480 seconds)
[13:23] * ade (~abradshaw@h31-3-227-203.host.redstation.co.uk) has joined #ceph
[13:23] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Read error: Connection reset by peer)
[13:24] * ninkotech (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[13:24] * ninkotech (~duplo@217-112-170-132.adsl.avonet.cz) Quit ()
[13:24] * ninkotech (~duplo@cst-prg-81-4.cust.vodafone.cz) has joined #ceph
[13:25] * ade (~abradshaw@h31-3-227-203.host.redstation.co.uk) Quit ()
[13:25] * ade (~abradshaw@h31-3-227-203.host.redstation.co.uk) has joined #ceph
[13:31] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[13:43] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[13:49] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[13:50] * kippi (~oftc-webi@host-4.dxi.eu) Quit (Remote host closed the connection)
[13:51] * bbutton__ (~bbutton@206.169.237.4) has joined #ceph
[13:54] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[13:56] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[14:03] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:04] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[14:07] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[14:16] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has left #ceph
[14:18] * ade (~abradshaw@h31-3-227-203.host.redstation.co.uk) Quit (Ping timeout: 480 seconds)
[14:19] * madkiss1 (~madkiss@46.115.135.54) Quit (Ping timeout: 480 seconds)
[14:19] * vbellur (~vijay@121.244.87.117) has joined #ceph
[14:19] * cok (~chk@2a02:2350:18:1012:15ea:4de6:d0cf:429f) Quit (Quit: Leaving.)
[14:20] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:26] * kanagaraj__ (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:28] * vbellur (~vijay@121.244.87.117) Quit (Quit: Leaving.)
[14:28] * burley (~khemicals@cpe-98-28-233-158.woh.res.rr.com) has joined #ceph
[14:32] <burley> narurien: Switching to Centos 7 from Ubuntu solved those soft lockups
[14:36] * KevinPerks (~Adium@2606:a000:80a1:1b00:70ba:1d8d:d355:8182) has joined #ceph
[14:39] * rdas_ (~rdas@121.244.87.115) Quit (Quit: Leaving)
[14:49] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) has joined #ceph
[14:52] * pvsa (~pvsa@89.204.138.22) has joined #ceph
[14:56] * pvsa (~pvsa@89.204.138.22) Quit (Remote host closed the connection)
[14:57] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[14:59] * bbutton__ (~bbutton@206.169.237.4) Quit (Ping timeout: 480 seconds)
[14:59] * ade (~abradshaw@h31-3-227-203.host.redstation.co.uk) has joined #ceph
[15:01] * bbutton (~bbutton@206.169.237.4) has joined #ceph
[15:04] * bbutton_ (~bbutton@206.169.237.4) has joined #ceph
[15:04] * bbutton (~bbutton@206.169.237.4) Quit (Read error: Connection reset by peer)
[15:05] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[15:06] * ade (~abradshaw@h31-3-227-203.host.redstation.co.uk) Quit (Quit: Too sexy for his shirt)
[15:06] * Cube (~Cube@66-87-79-91.pools.spcsdns.net) has joined #ceph
[15:09] <steveeJ> which context does the local in choose_local_tries refer to?
[15:11] * stein (~stein@91.247.228.48) Quit (Ping timeout: 480 seconds)
[15:14] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[15:14] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:15] * Cube (~Cube@66-87-79-91.pools.spcsdns.net) has left #ceph
[15:19] * ade (~abradshaw@a181.wifi.FSV.CVUT.CZ) has joined #ceph
[15:19] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[15:21] * michalefty (~micha@p20030071CE6394500D9AE333A7E85DF3.dip0.t-ipconnect.de) has left #ceph
[15:23] * kippi (~oftc-webi@host-4.dxi.eu) has joined #ceph
[15:26] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[15:27] * bbutton_ (~bbutton@206.169.237.4) Quit (Ping timeout: 480 seconds)
[15:30] * ninkotech (~duplo@cst-prg-81-4.cust.vodafone.cz) Quit (Ping timeout: 480 seconds)
[15:30] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) Quit (Quit: Leaving.)
[15:31] * i_m (~ivan.miro@gbibp9ph1--blueice1n1.emea.ibm.com) has joined #ceph
[15:31] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:34] * ashishchandra (~ashish@49.32.0.66) Quit (Quit: Leaving)
[15:36] * ade (~abradshaw@a181.wifi.FSV.CVUT.CZ) Quit (Ping timeout: 480 seconds)
[15:38] * ade (~abradshaw@a181.wifi.FSV.CVUT.CZ) has joined #ceph
[15:38] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[15:38] * jeff-YF (~jeffyf@67.23.117.122) Quit ()
[15:38] * ade (~abradshaw@a181.wifi.FSV.CVUT.CZ) Quit ()
[15:39] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[15:40] <morfair> Can I create an MDS on OpenVZ? Does the Metadata Server require any kernel modules?
[15:40] <absynth> you don't want to install any ceph components on container virtualized hosts
[15:40] <absynth> really, you don't want to.
[15:42] <morfair> hm, i have deployed a mon on openvz (the openvz storage is not the ceph storage cluster)
[15:45] <morfair> What are the problems with multiple MDS? How do I solve the single-point-of-failure issue with a single MDS?
[15:46] <pressureman> run multiple MDS servers
[15:47] <pressureman> it's not a single point of failure... but it's just more of an active-passive failover model
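A sketch of that active/standby setup with ceph-deploy (hostnames are placeholders):

    ceph-deploy mds create node1     # first MDS becomes active
    ceph-deploy mds create node2     # second comes up as a standby and takes over if node1's MDS fails
    ceph mds stat                    # e.g. "e10: 1/1/1 up {0=node1=up:active}, 1 up:standby"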
[15:47] * drankis (~drankis__@91.90.247.98) has joined #ceph
[15:47] <morfair> I read http://ceph.com/docs/master/rados/deployment/ceph-deploy-mds/
[15:48] <morfair> Important
[15:48] <morfair> You must deploy at least one metadata server to use CephFS. There is experimental support for running multiple metadata servers. Do not run multiple metadata servers in production.
[15:48] <pressureman> cephfs is still experimental and not recommended for production ;-)
[15:48] <morfair> wow
[15:48] <steveeJ> is there a way to manipulate which OSDs an rbd client chooses out of the acting sets of the PGs?
[15:49] <pressureman> so you can run an experimental fileystem, and play with an experimental feature of said experimental filesystem
[15:50] <pressureman> steveeJ, not yet... but it's being considered - https://wiki.ceph.com/Planning/Blueprints/Firefly/osdmap%3A_primary_role_affinity
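Firefly does ship the primary-affinity knob from that blueprint; it only steers which replica acts as primary (and therefore serves reads), it does not fan one read out to several OSDs. A sketch, with osd.0 as an example:

    # must be permitted first, e.g. in ceph.conf: mon osd allow primary affinity = true
    ceph osd primary-affinity osd.0 0.5    # osd.0 now becomes primary for fewer of the PGs it holds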
[15:50] <morfair> cephfs is my goal!! i don't need the block layer (i have iscsi), i need HA and scalable file storage
[15:50] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:51] <pressureman> morfair, some people are running it in production, and it's working for them... but like all open source software, it comes without any warranty
[15:52] <morfair> pressureman, do you know the main problems with cephfs?
[15:52] <pressureman> nope, i don't run it
[15:52] <morfair> pressureman, so, in production i can use ceph only like iscsi?
[15:53] <steveeJ> pressureman: nice feature. i've read about this before. but it's not exactly what i'm looking for, since the client does I/O with multiple OSDs right?
[15:53] <pressureman> you can use it however you like. but the developers don't consider cephfs production-ready yet
[15:54] <pressureman> steveeJ, object writes always go to the primary OSD for the PG, but iirc, reads can be requested from any OSD that has a copy of the PG
[15:55] <steveeJ> pressureman: i'd like to have a say in which OSD the client decides to request data from
[15:56] <flaf> pressureman: is "Ceph block device" production-ready?
[15:56] * drankis (~drankis__@91.90.247.98) Quit (Ping timeout: 480 seconds)
[15:57] <pressureman> flaf, it better be... i've been using it for 18 months now ;-)
[15:57] <flaf> Ah ok. :)
[15:58] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[15:59] <pressureman> steveeJ, i was mistaken - the parallel reads is still a blueprint too - http://wiki.ceph.com/Planning/Blueprints/Giant/librados%3A_support_parallel_reads
[16:00] <pressureman> steveeJ, this is also perhaps relevant (albeit a little dated) - https://www.mail-archive.com/ceph-devel@vger.kernel.org/msg12440.html
[16:01] <steveeJ> pressureman: oh, so i was wrong in the first place thinking it already does distributed reads. someone in here told me that reads are distributed, but to be honest i never verified that through reading
[16:01] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[16:02] <pressureman> steveeJ, some of the more technical guys in the US come in here a bit later in the day. maybe you could ask again then.
[16:02] <steveeJ> that request-all-accept-first idea sounds really bandwidth hungry though :D
[16:02] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[16:02] * swami (~swami@49.32.0.94) Quit (Quit: Leaving.)
[16:04] <pressureman> yeah it's the kind of thing where you'd almost want the client to occasionally throw the request out to all, and take note of which OSD responded fastest, and use that OSD for the next 10 minutes or so... then repeat the process
[16:04] <kapil> Hi... a question on OSD creation. Is it a good idea to create an OSD on an already partitioned drive ? for example -
[16:04] <kapil> ceph-deploy osd prepare node1:sdb2
[16:04] <kapil> ceph-deploy osd activate node1:sdb2
[16:06] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[16:08] * gaud (~gaud@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:16] * dmsimard_away is now known as dmsimard
[16:17] * root________ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[16:19] * rwheeler (~rwheeler@173.48.207.57) Quit (Quit: Leaving)
[16:20] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:32] * vz (~vshankar@121.244.87.117) Quit (Quit: Leaving)
[16:32] <flaf> I don't understand when I must put [mon.id] section in the conf file of a Ceph cluster.
[16:33] <flaf> For example, in the quick start installation, in the osd node there is no [mon.id] section in the ceph.conf.
[16:33] <flaf> But it works.
[16:34] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Ping timeout: 480 seconds)
[16:35] * bbutton_ (~bbutton@66.192.187.30) has joined #ceph
[16:37] <Gugge-47527> flaf: you should make a [mon.something] section in the config, if you want to configure something specific on mon.something :)
[16:37] <Gugge-47527> by default, you dont need to
[16:41] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:41] <dmsimard> Anyone know if there's been any development for using Ceph with Xenserver ?
[16:41] <dmsimard> Last I heard it was a proof of concept and work in progress like a year ago.
[16:43] <flaf> Gugge-47527: ok so for example "[mon.1] \n host = node1" isn't necessary?
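For context, the kind of section being discussed looks like this (addresses are made up); with `mon host`/`mon initial members` set in [global], the per-daemon block is only needed when something must be overridden for that one monitor:

    [global]
        mon initial members = node1
        mon host = 10.0.0.11

    [mon.node1]
        host = node1
        mon addr = 10.0.0.11:6789
        # per-daemon overrides go here, e.g. debug mon = 10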
[16:47] <flaf> So, in fact, to create an OSD node, I just create and populate (and mount) /var/lib/ceph/osd/$cluster-$id/. Is that correct?
[16:51] <flaf> In fact, I'm reading the doc and I would like to install a cluster via Puppet. And It's difficult for me to understand the difference between plain-text-persistent configuration and the configuration via command line.
[16:53] * lupu (~lupu@86.107.101.214) has joined #ceph
[16:57] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[16:58] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[16:59] * saaby (~as@mail.saaby.com) Quit (Remote host closed the connection)
[17:01] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:02] * baylight (~tbayly@204.15.85.169) has joined #ceph
[17:03] <seapasulli> flaf: you do not need mon.id but it can help. When you want to query mons directly later for example. I couldn't until I added the config equivalent even though my mons/cluster are somewhat working.
[17:04] <seapasulli> For osds you just need a path. I couldn't get encryption working so I used luks directly with the drives instead and specified ::
[17:04] <seapasulli> ceph-deploy osd prepare ${host}:/var/lib/ceph/osd/ceph-osd${i}:${drive}${increment}
[17:04] <seapasulli> ceph-deploy osd activate ${host}:/var/lib/ceph/osd/ceph-osd${i}:${drive}${increment}
[17:04] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[17:04] <seapasulli> and that worked for me.
[17:05] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[17:05] <seapasulli> The osds will hold their config values within files in /var/lib/ceph/osd/cluster-id/ so you do not need to explicitly define them either but again it helped me later to do so .
[17:06] * vbellur (~vijay@122.166.147.17) has joined #ceph
[17:06] <flaf> seapasulli: ok, I see.
[17:08] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[17:08] <flaf> But when you prepare and activate a disk
[17:08] <flaf> you must import some conf before.
[17:08] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:09] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[17:09] <flaf> For example, I must import /var/lib/ceph/bootstrap-osd/$cluster.keyring of the mon node.
[17:09] <seapasulli> oh yeah you need to import the auth for cephx to work.
[17:09] <seapasulli> but it's generated when you make the osd via the ceph auth add nonsense
[17:10] <seapasulli> and stored inside the path to the osd /var/lib/ceph/osd/ceph-osd${i}/keyring
[17:10] <seapasulli> as the command says in the osd deploy::
[17:11] <seapasulli> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
[17:11] <seapasulli> ceph auth add osd.{osd-num} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-{osd-num}/keyring
[17:11] <seapasulli> ah sorry here ceph-osd -i {osd-num} --mkfs --mkkey
[17:13] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[17:14] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:14] <seapasulli> so yeah it is https://github.com/enovance/puppet-ceph/blob/master/manifests/osd/device.pp - this guy seems to have a good manifest for osd deployment
[17:14] <seapasulli> seems to mkpart and add the key
[17:15] * dvanders (~dvanders@2001:1458:202:180::101:f6c7) has joined #ceph
[17:17] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[17:17] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[17:19] <bloodice> The documents talk about the ceph.conf needing mon.x and osd.x defined, but ceph-deploy doesn't do it and everything works... are these values now deprecated?
[17:22] <flaf> seapasulli: thx for the explanations (I'm thinking ;))
[17:23] * madkiss (~madkiss@46.115.8.72) has joined #ceph
[17:23] <flaf> (I'm thinking about the explanations of course))
[17:26] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[17:28] * TiCPU (~jeromepou@12.160.0.155) has joined #ceph
[17:29] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:29] * baylight (~tbayly@204.15.85.169) Quit (Ping timeout: 480 seconds)
[17:30] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[17:31] * i_m1 (~ivan.miro@gbibp9ph1--blueice1n1.emea.ibm.com) has joined #ceph
[17:31] * i_m (~ivan.miro@gbibp9ph1--blueice1n1.emea.ibm.com) Quit (Read error: Connection reset by peer)
[17:32] <seapasulli> bloodice: no I know some parts (I can't remember off the top of my head right now) do not seem to work unless osds and mons are defined. I am using ceph firefly btw don't know if that helps
[17:50] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[17:51] * root (~root@190.18.55.15) has joined #ceph
[17:51] * root is now known as ganders
[17:52] <bloodice> yea, i just upgraded to firefly. I have run into a few ceph commands that do not work, but i was able to use other commands to get around that issue. I would really like to avoid maintaining a list of OSDs, mainly because we will end up with 136 initially... monitors i dont mind at all.
[17:53] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:53] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:53] * lofejndif (~lsqavnbok@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[17:54] * vz (~vz@122.167.79.244) has joined #ceph
[17:54] <kapil> Hi... a question on OSD creation. Is it a good idea to create an OSD on an already partitioned drive ? for example -
[17:54] <kapil> ceph-deploy osd prepare node1:sdb2
[17:54] <kapil> ceph-deploy osd activate node1:sdb2
[17:56] <bloodice> ya know you can do those two actions in one command: ceph-deploy osd create <hostname>:/dev/<drivehere>
[17:57] <bloodice> as to the question.. last time i had a drive that was already partitioned, i had to destroy the partition before the create command would work.
[17:57] <bloodice> not an expert though...
[17:59] <kapil> Till ceph 0.80.5 I was able to prepare-activate an OSD on a partitioned drive. But now I see an issue
[17:59] * RameshN (~rnachimu@101.222.234.179) Quit (Ping timeout: 480 seconds)
[18:00] <kapil> the OSD gets created and activated actually, the problem is that in ceph osd tree, the osd is shown under the ceph-deploy-node and not under the actual node where osd disk exists
[18:01] * gaud (~gaud@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Ping timeout: 480 seconds)
[18:01] <seapasulli> kapil: do you have any data on the drive already?
[18:02] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:02] <seapasulli> what commands are you running?
[18:02] <kapil> nope, I use ceph-deploy purge and ceph-deploy purgedata before I start deploying a new cluster from scratch
[18:02] * primechu_ (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[18:03] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) Quit (Ping timeout: 480 seconds)
[18:03] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[18:03] <bloodice> last time i tried, i was on .72.1, i got an error on a partitioned drive... but that was with spinning WD Red 4TB drives
[18:04] <bloodice> We did have testing data on the drive this happened to...
[18:05] * bandrus (~oddo@216.57.72.205) has joined #ceph
[18:07] <kapil> seapasulli: ceph-deploy osd prepare node1:vdb1
[18:07] <kapil> ceph-deploy osd activate node1:vdb1
[18:08] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[18:09] * Sysadmin88 (~IceChat77@2.218.9.98) has joined #ceph
[18:10] <kapil> seapasulli: If I zap the disk first and prepare an OSD on unpartitioned disk, then it works fine
[18:10] <kapil> e.g after these commands I do not see the issue
[18:10] <kapil> ceph-deploy disk zap node1:vdb
[18:10] <kapil> ceph-deploy osd prepare node1:vdb
[18:10] <kapil> ceph-deploy osd activate node1:vdb1
[18:10] <seapasulli> When you zap it you wipe the drive so that makes a bit of sense.
[18:10] <seapasulli> but not the part where it's mapping the drive to your admin server
[18:11] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Ping timeout: 480 seconds)
[18:11] * vz (~vz@122.167.79.244) Quit (Remote host closed the connection)
[18:12] * b0e (~aledermue@x2f277dd.dyn.telefonica.de) has joined #ceph
[18:12] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[18:13] <kapil> seapasulli: The actual osd gets created in my node1 only. In node1 the /var/lib/ceph/osd/ceph-1 also gets mounted to /dev/vdb1. But in osd tree somehow it shows under admin_server
[18:14] <seapasulli> still confused, sorry i'm dumb. Pastebin?
[18:14] * madkiss (~madkiss@46.115.8.72) Quit (Ping timeout: 480 seconds)
[18:16] <kapil> seapasulli: here is a snippet of the logs - http://fpaste.org/123607/34174214/
[18:16] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[18:16] * ChanServ sets mode +o elder
[18:16] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[18:17] <kapil> in line1 you can see I am trying to activate the OSD:drive on hostname teuthida-4-3:vdb2
[18:18] <kapil> and look at line#13 .. the ceph-deploy is starting Ceph osd.0 on teuthida-4-0 and not on teuthida-4-3
[18:19] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[18:19] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Ping timeout: 480 seconds)
[18:20] <seapasulli> look over your ceph.conf in /etc/ceph/ and your hosts file? ensure you don't have any crazy maps? Haven't checked pastebin yet but will in a sec
[18:20] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Ping timeout: 480 seconds)
[18:21] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:23] * i_m1 (~ivan.miro@gbibp9ph1--blueice1n1.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[18:24] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[18:24] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[18:26] <kapil> ceph.conf now has very minimal information. mostly about mons. Nothing related to OSDs
[18:26] * b0e (~aledermue@x2f277dd.dyn.telefonica.de) Quit (Quit: Leaving.)
[18:27] <kapil> this is my ceph.conf - http://fpaste.org/123612/34239814/ .. very basic
[18:29] * joef (~Adium@2620:79:0:131:e8a4:2a19:e991:1273) has joined #ceph
[18:30] * gaud (~gaud@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[18:33] <kapil> seapasulli: Is it a valid use case to create an OSD on an already-partitioned drive, or should we always create an OSD on a zapped/wiped disk?
[18:36] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[18:38] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[18:39] * lala__ (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:46] * aknapp (~aknapp@64.202.160.225) has joined #ceph
[18:48] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:49] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[18:51] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[18:53] * bbutton_ (~bbutton@66.192.187.30) Quit (Quit: This computer has gone to sleep)
[18:56] * ircolle-afk is now known as ircolle
[18:57] * bbutton_ (~bbutton@66.192.187.30) has joined #ceph
[18:58] * swami (~swami@223.227.92.57) has joined #ceph
[18:59] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[19:00] * patcable (sid11336@id-11336.highgate.irccloud.com) has joined #ceph
[19:00] <patcable> I'm looking for a good way to debug 'ceph -s' just hanging; I'm not seeing anything particularly useful to debug with
[19:00] <patcable> strace isn't giving me any hints, though usually that's my utility-knife debug tool
[19:01] * Clabbe (~oftc-webi@alv-global.tietoenator.com) Quit (Remote host closed the connection)
[19:02] <patcable> nothing particularly gross looking in ceph-mon.log
[19:02] <kraken> http://i.imgur.com/XEEI0Rn.gif
[19:04] * bkopilov (~bkopilov@213.57.18.214) has joined #ceph
[19:08] <ganders> in ceph osd tree, how can we remove a host without osd's? is there any command for that?
[19:09] <patcable> answer: it's always a DNS problem. nevermind.
[19:10] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) has joined #ceph
[19:12] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) has joined #ceph
[19:15] <ganders> nevermind..ceph osd crush remove <hostname>
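For completeness, a sketch of what ganders describes, assuming the host bucket has already been emptied of OSDs (the hostname is a placeholder):

    # confirm the host bucket no longer contains any osds
    ceph osd tree
    # remove the empty host bucket from the CRUSH hierarchy
    ceph osd crush remove node1
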
[19:18] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[19:19] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:21] * zerick (~eocrospom@190.187.21.53) Quit (Max SendQ exceeded)
[19:21] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[19:21] * bbutton_ (~bbutton@66.192.187.30) Quit (Quit: Leaving)
[19:22] * andreask (~andreask@91.224.48.154) has joined #ceph
[19:22] * ChanServ sets mode +v andreask
[19:23] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:25] * TiCPU (~jeromepou@12.160.0.155) Quit (Ping timeout: 480 seconds)
[19:25] * via_ (~via@smtp2.matthewvia.info) Quit (Quit: brb)
[19:29] * baylight (~tbayly@204.15.85.169) has joined #ceph
[19:30] <joao|lap> patcable, add '--debug-monc 10 --debug-ms 1' and you should see plenty of stuff
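Applied to patcable's hanging 'ceph -s', joao|lap's suggestion would look roughly like this (a sketch; the flags only raise client-side monitor-client and messenger logging):

    # re-run the status command with verbose client-side logging
    ceph -s --debug-monc 10 --debug-ms 1
    # the messenger lines show which monitor address the client is trying to
    # reach, which usually exposes DNS or 'mon host' misconfiguration quickly
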
[19:32] * via (~via@smtp2.matthewvia.info) has joined #ceph
[19:35] <flaf> 1. The keyring must be exactly the same for all monitor daemons. Is it the same for all osd daemons (the same keyring for all osds)?
[19:37] <flaf> 2. When I created my initial monitor, the file /var/lib/ceph/bootstrap-osd/$cluster.keyring was created. What is the purpose of this file? Must I use it for all my future osds?
[19:41] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[19:43] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[19:45] * andreask (~andreask@91.224.48.154) Quit (Quit: Leaving.)
[19:50] * swami (~swami@223.227.92.57) Quit (Quit: Leaving.)
[19:51] <seapasulli> kapil: I've always run zap against the osds I am adding unless I am using encryption. Then I specify the full path and mount the osds manually. May I ask why you do not want to have ceph zap the disks? are you using them for something else as well?
[19:52] <seapasulli> flaf: /var/lib/ceph/osd$ sudo md5sum ceph-0/keyring
[19:52] <seapasulli> 6bffeb90a8c0021ff3e2f2c6d5a33ec8 ceph-0/keyring
[19:52] <seapasulli> sudo md5sum ceph-1/keyring
[19:52] <seapasulli> bb74437a4fde8f4c220d463025d91e59 ceph-1/keyring
[19:53] <seapasulli> gross (checking krakens response.. didn't know this keyword)
[19:53] <kraken> http://i.imgur.com/XEEI0Rn.gif
[19:53] <seapasulli> wee!
[19:53] * gaud (~gaud@office-mtl1-nat-146-218-70-69.gtcomm.net) has left #ceph
[19:54] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) has joined #ceph
[19:55] * sjm (~sjm@108.53.250.33) has joined #ceph
[19:59] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[19:59] <flaf> seapasulli: Ok, if I understand correctly, one keyring per osd daemon (unlike the mon daemons -> one keyring shared by all mons).
[20:01] * ircolle is now known as ircolle-afk
[20:02] <flaf> And what is the purpose of the file /var/lib/ceph/bootstrap-osd/$cluster.keyring in a fresh install of the initial mon? Can I discard this file when I install a new osd?
[20:09] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) Quit (Quit: ZNC - http://znc.in)
[20:09] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) has joined #ceph
[20:10] <seapasulli> I think they are required for the monitor
[20:10] <seapasulli> I see this line: "You may repeat this procedure. If it fails, check to see if the /var/lib/ceph/bootstrap-{osd}|{mds} directories on the server node have keyrings. If they do not have keyrings, try adding the monitor again; then, return to this step."
[20:10] <seapasulli> but I have no idea tbh
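For flaf's question above: the bootstrap-osd keyring holds the client.bootstrap-osd key, which has just enough capability to register new OSD keys, so OSD hosts do not need the full admin keyring. A hedged sketch of how it is typically used (default cluster name 'ceph'; osd.0 and the paths are placeholders):

    # the key created on the initial monitor
    cat /var/lib/ceph/bootstrap-osd/ceph.keyring
    # copy it to the same path on each new OSD host; ceph-disk / ceph-deploy
    # then use it to register the new osd key, roughly equivalent to:
    ceph --name client.bootstrap-osd \
         --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
         auth add osd.0 -i /var/lib/ceph/osd/ceph-0/keyring \
         osd 'allow *' mon 'allow profile osd'
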
[20:13] <seapasulli> how can I make ceph forget about these objects? I tried mark_unfound_lost but it doesn't seem to work
[20:13] <seapasulli> http://pastebin.com/HU8yZ1ae
[20:14] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[20:20] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) has joined #ceph
[20:20] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) has left #ceph
[20:26] * dvanders_ (~dvanders@2001:1458:202:180::101:f6c7) has joined #ceph
[20:32] * dvanders (~dvanders@2001:1458:202:180::101:f6c7) Quit (Ping timeout: 480 seconds)
[20:35] <stj> is anyone here familiar with the ceph puppet modules at github.com/ceph/puppet-ceph ?
[20:35] <stj> just evaluating them before I deploy my cluster here, and wondering if they're recommended for use yet
[20:35] <stj> seems like they are still under somewhat heavy development
[20:41] <gchristensen> stj: have you managed a cluster of ceph before?
[20:41] * tupper (~chatzilla@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[20:41] <stj> only a small test cluster
[20:42] <stj> which I set up with ceph-deploy
[20:42] <stj> since we use puppet here, I'd like to use that to deploy the production cluster
[20:42] <gchristensen> I'm hesitant to deploy ceph with chef because of handling the what-if case and what happens when stuff goes wrong
[20:42] <stj> my biggest question about the automated stuff is surrounding crush map management
[20:43] <stj> seems like when it adds a new OSD, it updates the crush map... but doesn't necessarily put the new osd/host/bucket where I want it
[20:44] <stj> ....maybe I need to look into making sure the crush location stuff is in place before ceph is installed on the host... or at least before the osd's are in place
[20:45] <stj> in any case, ceph-deploy and chef/puppet modules don't really make it clear how you're supposed to manage the crush map
[20:45] <stj> I only see the ceph docs mentioning that the default crush map ceph-deploy sets up is not suitable for production ;)
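One way to address stj's placement worry (hedged; option names are as documented around the firefly release): pin the location each OSD reports on startup, or move the OSD by hand after creation.

    # /etc/ceph/ceph.conf on the OSD host: location the osd reports when it starts
    [osd]
    osd crush location = root=default rack=rack1 host=node1

    # or place/move the osd explicitly afterwards (weight and location are examples)
    ceph osd crush create-or-move osd.12 1.0 root=default rack=rack1 host=node1
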
[20:46] * rendar (~I@host163-182-dynamic.3-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:48] * rendar (~I@host163-182-dynamic.3-79-r.retail.telecomitalia.it) has joined #ceph
[20:54] * aknapp (~aknapp@64.202.160.225) Quit (Remote host closed the connection)
[20:55] * aknapp (~aknapp@64.202.160.225) has joined #ceph
[21:03] * aknapp (~aknapp@64.202.160.225) Quit (Ping timeout: 480 seconds)
[21:09] * rwheeler (~rwheeler@173.48.207.57) has joined #ceph
[21:13] <ganders> is there any procedure to uninstall ceph completely? i would like to uninstall it and then installed again fresh
[21:14] <seapasulli> ganders: I am not sure what it should be but I just do ceph-deploy uninstall ${host}; then log in and dpkg --get-selections | grep -iE "ceph|rbd|rados" and remove them as well. The auth keys and /etc/ceph should stay though and then you can re-install and restart ceph
[21:15] <seapasulli> otherwise there is purgedata which should wipe all of that stuff as well
[21:17] <Sysadmin88> ganders, format the drives works :)
[21:18] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[21:20] <ganders> yeah but i don't want to reinstall the os :D
[21:21] <Sysadmin88> uninstallers don't always remove everything
[21:21] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[21:34] <seapasulli> Has anyone set the radosgw maxbuckets settings via quota and have it work?
[21:35] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[21:35] <dmick> ganders: apt-get purge or yum remove
[21:36] <dmick> followed by removing /etc/ceph, /var/lib/ceph, /var/run/ceph
[21:36] <dmick> should do it
[21:36] <dmick> (obviously those last steps lose your data, but presumably you don't care anymore)
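Spelled out for a Debian/Ubuntu host, dmick's suggestion looks roughly like this (destructive; package names vary, so check 'dpkg -l | grep ceph' first):

    # remove the packages together with their configuration files
    sudo apt-get purge ceph ceph-common ceph-mds radosgw
    # remove what the packages leave behind: config, daemon state, runtime sockets
    sudo rm -rf /etc/ceph /var/lib/ceph /var/run/ceph
    # or drive the same thing from the admin node with ceph-deploy
    ceph-deploy purge node1
    ceph-deploy purgedata node1
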
[21:37] <seapasulli> hey dmick sorry to bug you again but how can I remove the "unfound" objects from a cluster? I tried mark unfound lost revert but they are still listed.
[21:37] <dmick> seapasulli: it's not trivial and I'm not sure of the best way to go about it
[21:38] <dmick> did you troll ceph.com/docs?
[21:38] <dmick> (using the original fishing meaning, not the Internet-monster meaning)
[21:38] <dmick> (sorry)
[21:39] <seapasulli> both work.
[21:39] <seapasulli> hehe
[21:40] <seapasulli> I got the command (mark_unfound_lost) from http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
[21:40] <seapasulli> that's what I've been trying to use as my "what do I do now" glossary. I didn't see any other commands to use.
[21:44] <seapasulli> the examples in the docs say "if the osd is down", but I have 245 osds up and in.
[21:44] <seapasulli> oh and total 245
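The sequence from that troubleshooting page, as it is being used here (pg 5.27f is taken from the discussion below; pool and pg ids are otherwise placeholders):

    # list unhealthy pgs and any that report unfound objects
    ceph health detail
    # show the unfound objects and which osds the pg has already queried
    ceph pg 5.27f list_missing
    ceph pg 5.27f query
    # give up on the unfound objects, reverting to older versions where possible
    ceph pg 5.27f mark_unfound_lost revert
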
[21:50] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[21:51] * tracphil (~tracphil@130.14.71.217) Quit (Quit: leaving)
[21:53] * b0e (~aledermue@x2f277dd.dyn.telefonica.de) has joined #ceph
[21:57] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[22:01] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) has joined #ceph
[22:04] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[22:07] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[22:12] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[22:14] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit ()
[22:14] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[22:15] * xarses (~andreww@12.164.168.117) has joined #ceph
[22:15] <seapasulli> rados -p volumes rm ${object} on the missing objects is just sitting too :-( darnet
[22:15] <seapasulli> darnit*
[22:15] * TiCPU (~jeromepou@12.160.0.155) has joined #ceph
[22:16] * ircolle-afk is now known as ircolle
[22:16] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:23] <dmick> seapasulli: yeah, I'm not sure, sorry.
[22:23] <dmick> there may well have been mailing list threads about this too because I'm certain it's been discussed
[22:23] <seapasulli> I think I definitely have some kind of split weirdness ::
[22:23] <seapasulli> lacadmin@kg37-5:~/CephPDC$ ceph osd map rbd_data.134b16cc60615.0000000000001cf0 --pool=volumes
[22:23] <bens> dmick: did you hear that mysql walked into a bar
[22:23] <seapasulli> Error ENOENT: pool rbd_data.134b16cc60615.0000000000001cf0 does not exist
[22:24] <seapasulli> lacadmin@kg37-5:~/CephPDC$ rados rm rbd_data.134b16cc60615.0000000000001cf0 --pool=volumes
[22:24] <seapasulli> ^C
[22:24] <seapasulli> lacadmin@kg37-5:~/CephPDC$ rados rm jkflehkjsleufsehjfkeles98237983279823IDONTEXIST --pool=volumes
[22:24] <seapasulli> error removing volumes/jkflehkjsleufsehjfkeles98237983279823IDONTEXIST: (2) No such file or directory
[22:24] <bens> he sees two tables and asks "mind if i join you?"
[22:24] <dmick> drop bens;
[22:25] <seapasulli> hahahaha
[22:25] <bens> this is what happens when you laugh at the first one
[22:25] <seapasulli> :-(
[22:25] <bens> 7 months later, and we are still here
[22:25] <dmick> heh.
[22:26] <dmick> so seapasulli: one possibly-confusing thing is that osd map just does the calculation; it doesn't verify that hte object actually exists
[22:26] <seapasulli> http://static.spiceworks.com/attachments/post/0004/8112/SQL.jpg
[22:26] <dmick> but I'm not getting what you were trying to demonstrate with those commands; being a bit thick today
[22:26] <seapasulli> dmick: Indeed, I read that ceph just assumes the object exists
[22:27] <dmick> more like "osd map shows where that object would be placed, whether it exists or not"
[22:27] <seapasulli> no no, it's totally me. So when you try to map the object, it says it doesn't exist. When you try to delete it, it hangs indefinitely (strace shows a timeout going on, but I am too dumb to dig any further). Then when I try to delete an object I know doesn't exist, I get "no such file or directory"
[22:28] <dmick> osd map is returning ENOENT? rly?
[22:28] <dmick> that's interesting
[22:28] <dmick> I didn't think it would
[22:29] <dmick> oh duh it says the pool doesn't exist
[22:29] <dmick> syntax error
[22:29] <seapasulli> ah i'm dumb super dumb
[22:29] <dmick> osd map <pool> <obj>
[22:29] <dmick> and no, just a mistake
[22:30] <seapasulli> same deal it says the object doesn't exist
[22:30] <seapasulli> error removing volumes/jkflehkjsleufsehjfkeles98237983279823IDONTEXIST: (2) No such file or directory
[22:30] <dmick> right. but at least osd map works, I assume
[22:30] <dmick> and tells you a pg and a set of osds
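For reference, the argument order that caused the confusion above, with the object name from the paste (osd map only computes placement; it does not check that the object exists):

    # correct order: pool first, then object
    ceph osd map volumes rbd_data.134b16cc60615.0000000000001cf0
    # prints something like:
    #   osdmap eNNN pool 'volumes' (5) object 'rbd_data....' -> pg 5.xxxx -> up [a,b,c] acting [a,b,c]
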
[22:31] <seapasulli> yup. I got the list of osds etc from ceph pg list_missing 5.27f
[22:32] <seapasulli> (freakout) yeah total syntax error as now it reports correctly. Damnit. I don't know how to remove these damn objects or tell ceph to forget them. The cluster is just reporting "health warn" forever
[22:32] <dmick> yeah. so the remove hangs because there are unfound objects
[22:32] <dmick> so you need a crowbar
[22:37] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[22:38] <seapasulli> ah crowbar?
[22:38] <seapasulli> like a "nuke the pool" and rebuild it type
[22:38] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[22:41] <dmick> some kind of external force, yes, I mean. It may be that you can fix this by hacking at the OSDs filestores directly, maybe
[22:41] <seapasulli> Ah crap that sounds scary but feasible
[22:45] <seapasulli> Thank you for all of your help dmick. I can't really return the favor yet but I'll try to bug you less ^_^
[22:45] <dmick> not really helping; wish I could more
[22:45] <dmick> but I don't want to advise you to do something that will break your cluster
[22:46] * sarob (~sarob@ip-64-134-225-62.public.wayport.net) has joined #ceph
[22:46] * sarob (~sarob@ip-64-134-225-62.public.wayport.net) Quit (Remote host closed the connection)
[22:47] * sarob (~sarob@2001:4998:effd:7801::1034) has joined #ceph
[22:47] <seapasulli> haha I agree but I don't know how else to get it working.
[22:49] <seapasulli> it's been rebuilding and peering for almost 2 weeks now with no change.
[22:49] <seapasulli> I tried deep scrubs etc. and no change.
[22:53] <seapasulli> Right now I am just running a find across /var/lib/ceph/osd/ceph-id/current/ trying to find anything with 12e4e4a053b74 in the name.
[22:56] * ganders (~root@190.18.55.15) Quit (Quit: WeeChat 0.4.2)
[22:57] <dmick> so mark_unfound_lost revert succeeded, but you still can't delete the unfound objects
[22:57] <dmick> what's the recovery state for the pgs?
[22:58] <seapasulli> nope doesn't succeed. It says that the pg has no unfound objects
[22:59] * Defcon_102KALI_LINUX (~Defcon_10@77.79.156.55.dynamic.ufanet.ru) has joined #ceph
[23:00] <seapasulli> ceph health detail
[23:00] <seapasulli> HEALTH_WARN 1 pgs peering; 2 pgs recovering; 1 pgs stuck inactive; 3 pgs stuck unclean; 7 requests are blocked > 32 sec; 2 osds have slow requests; recovery 306/15723657 objects degraded (0.002%); 22/5241219 unfound (0.000%)
[23:00] <seapasulli> pg 5.f4f is stuck unclean since forever, current state active+recovering, last acting [279,115,78]
[23:00] <seapasulli> pg 5.27f is stuck unclean since forever, current state active+recovering, last acting [213,0,258]
[23:00] <seapasulli> lacadmin@kg37-5:~/CephPDC$ ceph pg 5.27f mark_unfound_lost revert
[23:00] <seapasulli> 2014-08-06 15:59:14.729680 7fbd3adfb700 0 -- 10.16.0.117:0/1012658 >> 10.16.64.26:6829/33535 pipe(0x7fbd2c05c170 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fbd2c05c400).fault
[23:01] <seapasulli> 2014-08-06 15:59:24.417119 7fbd3aaf8700 0 -- 10.16.0.117:0/1012658 >> 10.16.64.26:6823/13117 pipe(0x7fbd2c05fcc0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fbd2c05c2f0).fault
[23:01] <seapasulli> 2014-08-06 15:59:34.540112 7fbd3abf9700 0 -- 10.16.0.117:0/1012658 >> 10.16.64.26:6835/531 pipe(0x7fbd2c060b70 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fbd2c05eb10).fault
[23:01] * Defcon_102KALI_LINUX (~Defcon_10@77.79.156.55.dynamic.ufanet.ru) Quit ()
[23:01] <seapasulli> 2014-08-06 15:59:44.440016 7fbd3aaf8700 0 -- 10.16.0.117:0/1012658 >> 10.16.64.26:6839/20291 pipe(0x7fbd2c0610a0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fbd2c061330).fault
[23:01] <seapasulli> pg has no unfound objects
[23:01] <seapasulli> when I try to mark them as missing
[23:01] <seapasulli> It faults but the osd is active and listening and I can connect to it
[23:01] <seapasulli> So not sure what that is about
[23:02] * steveeJ (~junky@client156.amh.kn.studentenwohnheim-bw.de) has joined #ceph
[23:08] * sarob_ (~sarob@ip-64-134-225-62.public.wayport.net) has joined #ceph
[23:11] * lupu (~lupu@86.107.101.214) has joined #ceph
[23:12] * joao|lap (~JL@78.29.191.247) Quit (Remote host closed the connection)
[23:12] <stj> has anyone else seen that "ceph-deploy prepare foo bar..." seems to also be activating disks?
[23:12] <seapasulli> is there a set of docs for osd file structure?
[23:13] <seapasulli> stj: it seemed to activate my disks initially as well. It doesn't when you specify already-mounted paths that ceph is supposed to use, though
[23:13] <stj> docs say that prepare is only supposed to partition the disks... but it seems to be activating the OSD and starting the service too.
[23:13] * sarob (~sarob@2001:4998:effd:7801::1034) Quit (Read error: Connection reset by peer)
[23:13] <seapasulli> indeed. I noticed that too :)
[23:13] <stj> i.e. it makes them Up and In
[23:13] <stj> yeah, I'm specifying host-name:sdc:/dev/sdb1
[23:14] <stj> where sdc is the entire data disk and sdb1 is the journal partition
[23:14] <seapasulli> indeed I did the same initially.
[23:14] * madkiss (~madkiss@46.114.16.152) has joined #ceph
[23:14] <seapasulli> I couldn't get on-disk encryption to work with ceph-deploy, so I had to use LUKS and specify mounted paths.
[23:14] <seapasulli> that doesn't activate them
[23:15] <stj> hmm. I'm just wondering where my window for putting the new OSDs into the right part of my crush map is :)
[23:16] <stj> I'd rather not bring the OSD up and have the cluster start putting data onto it until I've told the crush map where the osd actually is
[23:17] <stj> seapasulli: so when your OSDs are prepared, but not active, are they Down and Out? Or do they not yet exist in the crush map?
[23:18] <seapasulli> not yet exist
[23:18] <seapasulli> what version of ceph-deploy are you using?
[23:18] <seapasulli> root@kg37-5:~# ceph-deploy --version
[23:18] <seapasulli> 1.5.9
[23:18] <stj> 1.5.10
[23:18] <seapasulli> ah hrm. I wonder when that started
[23:19] <stj> i'm just very confused about the osd crush location stuff
[23:19] <stj> it seems vitally important that your crush location is correct before you start putting data on an OSD
[23:20] <stj> but most of the deployment tools don't give you a great opportunity to set that before they activate things
[23:20] <stj> will have to look at it more tomorrow
[23:20] <stj> quitting time :)
[23:26] <seapasulli> byeeeee GL GL
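Two knobs that are commonly pointed at for the window stj is asking about (hedged; both appear in the firefly-era docs): keep new OSDs from taking data until their CRUSH location is correct.

    # option 1: cluster-wide flag -- freshly booted osds are not marked 'in'
    ceph osd set noin
    # ...prepare/activate the osds, fix their crush location, then:
    ceph osd unset noin

    # option 2: per-host config -- new osds register with weight 0, so no data
    # is mapped to them until they are reweighted explicitly
    # in ceph.conf on the osd host:
    #   [osd]
    #   osd crush initial weight = 0
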
[23:28] * KevinPerks (~Adium@2606:a000:80a1:1b00:70ba:1d8d:d355:8182) Quit (Quit: Leaving.)
[23:29] * b0e (~aledermue@x2f277dd.dyn.telefonica.de) Quit (Quit: Leaving.)
[23:38] <steveeJ> whenever I make a big change to the ceph cluster, like increasing the size of many images, everything seems kind of stuck. I've already decreased the number of simultaneous recoveries. Most of the time "ceph -s" shows no recovery activity at all
[23:39] * KevinPerks (~Adium@2606:a000:80a1:1b00:8550:145d:4b79:1890) has joined #ceph
[23:41] <steveeJ> the load on the servers rises to the 20-30s, but there's no CPU or network activity. What are the disks doing here?
[23:42] * b0e (~aledermue@x2f277dd.dyn.telefonica.de) has joined #ceph
[23:45] <seapasulli> dmick: ok, I'm dumb, as I know I did this before, but I rebooted the 2nd osd in the array and now I'm down to 10 unfound from 22.
[23:45] <seapasulli> don't know why that happened but thanks for all of your help!
[23:45] <dmick> er, that's good!
[23:45] <dmick> so if you start again on an unfound object
[23:46] <seapasulli> didn't mess with the FS as I couldn't find any docs on how it's structured from a quick google.
[23:46] <dmick> try cranking up the debug on the primary for the object and do the mark unfound lost revert thing, and examine the logs
[23:46] <dmick> to see why it's not willing to clean up the object in question
[23:46] <dmick> because that really ought to fix this
[23:47] <erice> SydneyBridge
[23:47] <seapasulli> indeed. Will do once the cluster settles. (I didn't set nodown or noout on reboot)
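A sketch of what "cranking up the debug on the primary" usually means, using the acting primary osd.213 shown above (injectargs applies at runtime, no restart needed):

    # raise osd and messenger logging on the acting primary of the stuck pg
    ceph tell osd.213 injectargs '--debug-osd 20 --debug-ms 1'
    # retry the revert, then read /var/log/ceph/ceph-osd.213.log on that host
    ceph pg 5.27f mark_unfound_lost revert
    # drop the log levels back to their defaults afterwards
    ceph tell osd.213 injectargs '--debug-osd 0/5 --debug-ms 0/5'
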
[23:54] * b0e (~aledermue@x2f277dd.dyn.telefonica.de) Quit (Quit: Leaving.)
[23:56] * madkiss (~madkiss@46.114.16.152) Quit (Ping timeout: 480 seconds)
[23:57] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.