#ceph IRC Log


IRC Log for 2015-02-24

Timestamps are in GMT/BST.

[0:02] * agshew_ (~agshew@host-69-145-59-76.bln-mt.client.bresnan.net) Quit (Ping timeout: 480 seconds)
[0:06] <seapasul1i> anyone free to shed some light on a newbish question?
[0:07] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:08] <seapasul1i> I have a rack of 3 storage nodes with roughly the same number of disks in each. A few hard drives failed in the rack. I removed them from the cluster, and now it is not rebalancing properly. A lot of my pgs are undersized with only 2 of 3 osds. I checked a pg and it just says "recovery_progress": { "backfill_targets": [] }, which to me means that it is not selecting any new osds to replicate to.
[0:09] <seapasul1i> I am not sure why it is not selecting a third.
[0:09] * redf_ (~red@chello084112110034.11.11.vie.surfer.at) has joined #ceph
[0:09] * avozza (~avozza@83.162.204.36) Quit (Remote host closed the connection)
[0:13] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) has joined #ceph
[0:16] * redf (~red@chello084112110034.11.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[0:17] * peejayz (~peeejayz@cpc69055-oxfd26-2-0-cust848.4-3.cable.virginm.net) has joined #ceph
[0:17] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[0:17] * peejayz (~peeejayz@cpc69055-oxfd26-2-0-cust848.4-3.cable.virginm.net) Quit ()
[0:18] <lurbs> seapasul1i: Have you had a look at: http://ceph.com/docs/master/rados/operations/crush-map/#tunables
[0:18] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:19] <lurbs> Could very easily be something else, though. What does 'ceph osd tree' look like?
[0:19] * peeejayz (~peeejayz@vpn-2-034.rl.ac.uk) Quit (Ping timeout: 480 seconds)
[0:19] * togdon (~togdon@74.121.28.6) has joined #ceph
[0:21] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:22] <seapasul1i> lurbs ceph osd tree looks all up
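The checks lurbs is pointing at could be sketched like this (the pg id 3.5f is a placeholder; substitute one of the actual undersized pgs; commands require a live cluster):

```shell
# Is every OSD really up/in with a sane CRUSH weight?
ceph osd tree

# Inspect one undersized pg: compare "up", "acting" and
# "recovery_state" / "backfill_targets" in the output.
ceph pg 3.5f query | less

# Legacy CRUSH tunables can make CRUSH fail to pick a third
# replica on small clusters, leaving backfill_targets empty.
ceph osd crush show-tunables

# If tunables are legacy AND all clients/kernels are new enough,
# this can fix replica selection -- it triggers data movement,
# so read the tunables docs linked above first:
# ceph osd crush tunables optimal
```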
[0:22] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[0:24] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) has joined #ceph
[0:24] * sigsegv (~sigsegv@188.26.161.163) has joined #ceph
[0:24] * sigsegv (~sigsegv@188.26.161.163) Quit ()
[0:25] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) Quit (Remote host closed the connection)
[0:26] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[0:27] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[0:28] * moore (~moore@64.202.160.88) has joined #ceph
[0:29] * agshew_ (~agshew@host-69-145-59-76.bln-mt.client.bresnan.net) has joined #ceph
[0:35] * garphy is now known as garphy`aw
[0:36] * moore (~moore@64.202.160.88) Quit (Ping timeout: 480 seconds)
[0:37] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) has joined #ceph
[0:41] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:42] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:45] * Manshoon (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[0:46] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[0:49] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:51] * togdon (~togdon@74.121.28.6) has joined #ceph
[1:02] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) Quit (Quit: Thanks for everything! :-) see you later :-))
[1:03] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[1:05] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[1:05] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[1:06] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[1:06] * moore (~moore@97-124-123-201.phnx.qwest.net) has joined #ceph
[1:06] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Quit: Kirk out)
[1:09] * jks (~jks@178.155.151.121) Quit (Ping timeout: 480 seconds)
[1:13] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Exeunt dneary)
[1:14] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[1:15] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[1:15] * dupont-y (~dupont-y@2a01:e34:ec92:8070:fd55:b0c:ba98:d424) Quit (Quit: Ex-Chat)
[1:21] * nitti_ (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[1:24] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:24] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) has joined #ceph
[1:38] * togdon (~togdon@74.121.28.6) Quit (Quit: Textual IRC Client: www.textualapp.com)
[1:40] * bandrus (~brian@197.sub-70-211-68.myvzw.com) Quit (Quit: Leaving.)
[1:41] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) Quit (Remote host closed the connection)
[1:45] * moore (~moore@97-124-123-201.phnx.qwest.net) Quit (Remote host closed the connection)
[1:52] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[1:57] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[2:01] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) Quit (Ping timeout: 480 seconds)
[2:01] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[2:03] * agshew_ (~agshew@host-69-145-59-76.bln-mt.client.bresnan.net) Quit (Ping timeout: 480 seconds)
[2:10] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) has joined #ceph
[2:14] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:25] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) has joined #ceph
[2:26] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) Quit (Ping timeout: 480 seconds)
[2:28] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[2:29] * jclm (~jclm@209.49.224.62) Quit (Quit: Leaving.)
[2:31] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[2:31] * flaf (~flaf@2001:41d0:1:7044::1) Quit (Remote host closed the connection)
[2:31] * flaf (~flaf@2001:41d0:1:7044::1) has joined #ceph
[2:33] * olc (~olecam@93.184.35.82) Quit (Remote host closed the connection)
[2:35] * vasu (~vasu@c-50-185-132-102.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:36] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[2:37] * LeaChim (~LeaChim@host86-159-234-113.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:39] * olc (~olecam@93.184.35.82) has joined #ceph
[2:46] <epf> is there a proper way to prevent radosgw (via mod_fastcgi) from logging every single operation to apache's error_log? ex: [warn] FastCGI: 127.0.0.1 PUT http://127.0.0.1......
[2:47] * benh57 (~benh57@sceapdsd43-30.989studios.com) has joined #ceph
[2:48] <benh57> basic RGW question: is every s3 object always only one ceph object?
[2:48] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[2:48] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[2:48] <benh57> or will they stripe if I, say, put a many-GB file in there
[2:48] <epf> I can answer that one: no
[2:48] <epf> it does its own striping
[2:48] <benh57> excellent
[2:50] <benh57> I suppose the 'rgw object stripe size' config setting would have answered that.
[2:50] <benh57> for me.
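For reference, the striping knob benh57 mentions lives in the RGW section of ceph.conf; a sketch of what that might look like (the section name is deployment-specific, the value shown is the usual 4 MB default, and the exact option spelling should be checked against your version's docs):

```ini
[client.radosgw.gateway]
; stripe unit used when RGW splits a large S3 object
; across multiple RADOS objects (bytes; 4194304 = 4 MB)
rgw obj stripe size = 4194304
```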
[2:50] * rljohnsn1 (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) has joined #ceph
[2:53] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:54] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:56] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[3:00] * cholcombe973 (~chris@7208-76ef-ff1f-ed2f-329a-f002-3420-2062.6rd.ip6.sonic.net) has left #ceph
[3:16] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[3:17] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[3:25] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Quit: Leaving.)
[3:29] * davidz (~davidz@2605:e000:1313:8003:d933:a608:cb77:4808) Quit (Quit: Leaving.)
[3:36] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Quit: Away)
[3:39] * ccheng (~ccheng@128.211.165.1) Quit (Remote host closed the connection)
[3:40] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) has joined #ceph
[3:40] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) Quit (Remote host closed the connection)
[3:41] * wkennington (~william@76.77.180.204) Quit (Remote host closed the connection)
[3:44] * wkennington (~william@76.77.180.204) has joined #ceph
[3:47] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:51] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:53] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[3:57] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[4:01] * macjack (~Thunderbi@123.51.160.200) Quit (Remote host closed the connection)
[4:01] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) has joined #ceph
[4:12] * macjack (~Thunderbi@123.51.160.200) has joined #ceph
[4:12] * Concubidated (~Adium@2607:f298:b:635:c8b5:72fa:13b4:3bce) Quit (Ping timeout: 480 seconds)
[4:16] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[4:16] * lcurtis (~lcurtis@47.19.105.250) Quit (Ping timeout: 480 seconds)
[4:16] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Ping timeout: 480 seconds)
[4:16] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) Quit (Quit: It's just that easy)
[4:24] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[4:24] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[4:25] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[4:39] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[4:39] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[4:42] * bkopilov (~bkopilov@bzq-79-182-164-80.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[4:44] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[4:48] * jclm (~jclm@172.56.41.203) has joined #ceph
[4:54] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[4:54] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:56] * zack_dolby (~textual@nfmv001079100.uqw.ppp.infoweb.ne.jp) has joined #ceph
[5:01] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[5:04] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) has joined #ceph
[5:07] * aszeszo (~aszeszo@adrx107.neoplus.adsl.tpnet.pl) Quit (Remote host closed the connection)
[5:07] * aszeszo (~aszeszo@adrj70.neoplus.adsl.tpnet.pl) has joined #ceph
[5:16] * lcurtis (~lcurtis@47.19.105.250) Quit (Ping timeout: 480 seconds)
[5:19] * Vacuum_ (~vovo@88.130.195.110) has joined #ceph
[5:20] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit (Quit: Konversation terminated!)
[5:20] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[5:25] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[5:25] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[5:26] * lcurtis (~lcurtis@ool-18bfec0b.dyn.optonline.net) has joined #ceph
[5:26] * Vacuum (~vovo@88.130.194.232) Quit (Ping timeout: 480 seconds)
[5:30] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[5:30] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[5:34] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[5:35] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[5:35] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[5:40] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[5:40] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[5:41] * fmanana (~fdmanana@bl5-5-68.dsl.telepac.pt) has joined #ceph
[5:45] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[5:45] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[5:45] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[5:48] * fdmanana (~fdmanana@bl13-157-248.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[5:50] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[5:50] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[5:52] * lcurtis (~lcurtis@ool-18bfec0b.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[5:55] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[5:55] * sputnik13 (~sputnik13@74.202.214.170) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[5:56] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[6:00] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[6:00] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[6:01] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[6:05] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[6:05] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[6:10] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[6:11] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[6:12] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:14] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:15] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[6:15] * Janardhan (~janardhan@216.207.42.137) has joined #ceph
[6:16] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[6:17] * Janardhan (~janardhan@216.207.42.137) Quit ()
[6:20] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[6:21] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[6:22] * jclm (~jclm@172.56.41.203) Quit (Ping timeout: 480 seconds)
[6:25] * aszeszo (~aszeszo@adrj70.neoplus.adsl.tpnet.pl) Quit (Read error: Connection reset by peer)
[6:25] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[6:26] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[6:26] * aszeszo (~aszeszo@adrx107.neoplus.adsl.tpnet.pl) has joined #ceph
[6:26] * jclm (~jclm@172.56.7.126) has joined #ceph
[6:28] * jclm1 (~jclm@172.56.7.126) has joined #ceph
[6:30] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[6:31] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[6:34] * jclm (~jclm@172.56.7.126) Quit (Ping timeout: 480 seconds)
[6:37] * Nats_ (~natscogs@114.31.195.238) has joined #ceph
[6:38] * Nats (~natscogs@114.31.195.238) Quit (Read error: Connection reset by peer)
[6:44] * krypto (~oftc-webi@hpm01cs005-ext.asiapac.hp.net) has joined #ceph
[6:44] <krypto> will there be any performance improvement using a disk partition over LVM?
[6:50] * vbellur (~vijay@121.244.87.117) has joined #ceph
[6:58] * overclk (~overclk@121.244.87.117) has joined #ceph
[7:01] <badone> krypto: why do you want LVM?
[7:02] <krypto> badone: I am not using LVM, but will there be any performance advantage if I switch to LVM?
[7:03] <badone> krypto: I don't believe so
[7:04] * rdas (~rdas@110.227.43.64) has joined #ceph
[7:04] <krypto> badone thanks
[7:05] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:05] <badone> krypto: np. I'd stick with the preferred arrangement unless you have a good reason to change
[7:08] * jclm1 (~jclm@172.56.7.126) Quit (Ping timeout: 480 seconds)
[7:19] * aszeszo (~aszeszo@adrx107.neoplus.adsl.tpnet.pl) Quit (Quit: Leaving.)
[7:24] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[7:25] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit (Quit: Konversation terminated!)
[7:25] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[7:26] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[7:30] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[7:30] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[7:33] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[7:37] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[7:45] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[7:46] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit (Quit: Konversation terminated!)
[7:46] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[7:50] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit (Quit: KVIrc 4.3.1 Aria http://www.kvirc.net/)
[7:51] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[7:51] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[7:52] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[7:56] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[7:56] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[8:01] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[8:01] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[8:04] * ohnomrbill (~ohnomrbil@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[8:05] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[8:06] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) Quit ()
[8:06] * jfunk (~jfunk@2001:470:b:44d:7e7a:91ff:fee8:e80b) has joined #ceph
[8:07] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[8:08] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[8:10] * rdas (~rdas@110.227.43.64) Quit (Ping timeout: 480 seconds)
[8:12] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[8:13] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Quit: Leaving.)
[8:13] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[8:17] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[8:19] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:22] * rdas (~rdas@110.227.47.41) has joined #ceph
[8:23] * linjan (~linjan@195.110.41.9) has joined #ceph
[8:23] * Nats_ (~natscogs@114.31.195.238) Quit (Read error: Connection reset by peer)
[8:24] * Nats_ (~natscogs@114.31.195.238) has joined #ceph
[8:27] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[8:28] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[8:32] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[8:32] * avozza (~avozza@83.162.204.36) has joined #ceph
[8:38] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[8:38] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) Quit (Quit: Away)
[8:51] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) has joined #ceph
[8:53] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) Quit ()
[8:54] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[8:58] * thb (~me@port-11419.pppoe.wtnet.de) has joined #ceph
[8:58] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:00] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[9:11] * analbeard (~shw@support.memset.com) has joined #ceph
[9:11] * dgurtner (~dgurtner@178.197.235.143) has joined #ceph
[9:12] * tdb_ (~tdb@myrtle.kent.ac.uk) has joined #ceph
[9:12] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Read error: Connection reset by peer)
[9:20] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[9:20] * tdb_ (~tdb@myrtle.kent.ac.uk) Quit (Remote host closed the connection)
[9:20] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[9:24] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[9:27] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:27] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[9:31] * lalatenduM (~lalatendu@121.244.87.124) has joined #ceph
[9:35] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[9:35] <Be-El> hi
[9:35] <liiwi> howdy
[9:45] * cok (~chk@2a02:2350:18:1010:ecfe:da04:730a:d3fa) has joined #ceph
[9:46] * kawa2014 (~kawa@90.216.134.197) has joined #ceph
[10:00] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[10:02] * thomnico (~thomnico@82.166.93.197) Quit (Quit: Ex-Chat)
[10:02] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[10:06] * zack_dolby (~textual@nfmv001079100.uqw.ppp.infoweb.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:09] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has left #ceph
[10:11] * cok (~chk@2a02:2350:18:1010:ecfe:da04:730a:d3fa) Quit (Quit: Leaving.)
[10:15] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:17] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[10:17] * nsantos (~Nelson@2001:690:2180:101:a8de:713e:8998:cd7f) has joined #ceph
[10:17] * nsantos (~Nelson@2001:690:2180:101:a8de:713e:8998:cd7f) Quit ()
[10:17] * nsantos (~Nelson@2001:690:2180:101:a8de:713e:8998:cd7f) has joined #ceph
[10:20] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[10:20] * branto (~borix@178-253-141-146.3pp.slovanet.sk) has joined #ceph
[10:21] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[10:22] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[10:24] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[10:26] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[10:28] * rdas (~rdas@110.227.47.41) Quit (Ping timeout: 480 seconds)
[10:31] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[10:36] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[10:36] * linjan (~linjan@195.110.41.9) has joined #ceph
[10:41] * thomnico (~thomnico@82.166.93.197) Quit (Ping timeout: 480 seconds)
[10:42] * kapil (~ksharma@2620:113:80c0:5::2222) Quit (Remote host closed the connection)
[10:42] * fghaas (~florian@zid-vpnn072.uibk.ac.at) has joined #ceph
[10:42] * rdas (~rdas@110.227.40.47) has joined #ceph
[10:50] * nsantos (~Nelson@2001:690:2180:101:a8de:713e:8998:cd7f) Quit (Ping timeout: 480 seconds)
[10:58] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[10:59] * kevinkevin-work (6dbebb8f@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[10:59] <krypto> in ceph-deploy is there any way to create an OSD without specifying a journal partition? tried "ceph-deploy osd create ceph1:sdc" and "ceph-deploy osd prepare ceph1:/dev/sdc", both show errors
[11:01] * nsantos (~Nelson@193.137.208.253) has joined #ceph
[11:01] * kevinkevin-work (6dbebb8f@107.161.19.109) has joined #ceph
[11:06] * analbeard (~shw@support.memset.com) has joined #ceph
[11:18] <krypto> this was working in firefly; in giant it's not working without specifying a separate partition for journaling
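For context, ceph-deploy's disk argument is HOST:DISK[:JOURNAL], and omitting JOURNAL is supposed to co-locate the journal on the data disk. A sketch (host name ceph1 from the question; the separate journal device is a hypothetical example, and exact behaviour varies between ceph-deploy versions):

```shell
# Journal co-located on the data disk (no journal component given):
ceph-deploy osd create ceph1:/dev/sdc

# Journal on a separate device/partition:
ceph-deploy osd create ceph1:/dev/sdc:/dev/sdd1

# When a version is stricter about the syntax, check what it accepts:
ceph-deploy osd create --help
```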
[11:18] * thomnico (~thomnico@82.166.93.197) Quit (Ping timeout: 480 seconds)
[11:21] * Vacuum (~vovo@88.130.209.121) has joined #ceph
[11:28] * Vacuum_ (~vovo@88.130.195.110) Quit (Ping timeout: 480 seconds)
[11:40] * tom (~tom@167.88.45.146) has joined #ceph
[11:41] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[11:42] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) has joined #ceph
[11:49] * yom007 (~yom007@evi74-1-78-213-114-207.fbx.proxad.net) has joined #ceph
[11:50] * yom007 (~yom007@evi74-1-78-213-114-207.fbx.proxad.net) has left #ceph
[11:51] * yom007 (~yom007@evi74-1-78-213-114-207.fbx.proxad.net) has joined #ceph
[11:51] * yom007 (~yom007@evi74-1-78-213-114-207.fbx.proxad.net) Quit ()
[11:54] * rdas (~rdas@110.227.40.47) Quit (Ping timeout: 480 seconds)
[12:15] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[12:19] * zack_dolby (~textual@p2104-ipbf6307marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[12:28] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[12:33] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[12:33] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:34] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[12:35] * fghaas (~florian@zid-vpnn072.uibk.ac.at) Quit (Read error: Connection reset by peer)
[12:36] * lalatenduM (~lalatendu@121.244.87.124) Quit (Quit: Leaving)
[12:37] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[12:39] * diegows (~diegows@190.190.5.238) has joined #ceph
[12:41] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[12:42] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[12:43] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[12:44] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[12:46] * krypto (~oftc-webi@hpm01cs005-ext.asiapac.hp.net) Quit (Quit: Page closed)
[12:47] * nsantos (~Nelson@193.137.208.253) Quit (Ping timeout: 480 seconds)
[12:47] * linjan (~linjan@195.110.41.9) has joined #ceph
[12:49] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:51] * fmanana (~fdmanana@bl5-5-68.dsl.telepac.pt) Quit (Quit: Leaving)
[12:52] * nsantos (~Nelson@193.137.208.253) has joined #ceph
[12:52] * nsantos (~Nelson@193.137.208.253) Quit ()
[12:52] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[12:52] * nsantos (~Nelson@193.137.208.253) has joined #ceph
[12:54] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) has joined #ceph
[12:55] * zack_dolby (~textual@p2104-ipbf6307marunouchi.tokyo.ocn.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[12:55] * dgurtner (~dgurtner@178.197.235.143) Quit (Ping timeout: 480 seconds)
[13:02] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) has joined #ceph
[13:02] * rwheeler (~rwheeler@173.48.208.246) has joined #ceph
[13:06] * thomnico (~thomnico@82.166.93.197) Quit (Quit: Ex-Chat)
[13:07] * cok (~chk@nat-cph5-sys.net.one.com) has joined #ceph
[13:07] * rljohnsn1 (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:14] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[13:18] * Miouge_ (~Miouge@94.136.92.20) has joined #ceph
[13:18] * Miouge (~Miouge@94.136.92.20) Quit (Read error: Connection reset by peer)
[13:18] * Miouge_ is now known as Miouge
[13:21] * ganders (~root@200.32.121.70) has joined #ceph
[13:22] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:25] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[13:25] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:26] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[13:33] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[13:41] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[13:58] * capri (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[13:59] * dgurtner (~dgurtner@178.197.235.143) has joined #ceph
[14:03] * rdas (~rdas@110.227.43.189) has joined #ceph
[14:03] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[14:04] * fghaas (~florian@zid-vpnn109.uibk.ac.at) has joined #ceph
[14:08] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:08] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[14:12] * fdmanana (~fdmanana@bl5-5-68.dsl.telepac.pt) has joined #ceph
[14:19] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[14:20] * ade (~abradshaw@193.202.255.218) has joined #ceph
[14:28] <loicd> nwat: while on vacation I played with stackoverflow and found that you've been active for Ceph there :-) I'm now subscribed to the "ceph" tag. I guess you are too ?
[14:30] * fghaas (~florian@zid-vpnn109.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[14:32] * rdas (~rdas@110.227.43.189) Quit (Quit: Leaving)
[14:35] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) has joined #ceph
[14:37] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[14:40] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[14:41] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) has joined #ceph
[14:43] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[14:46] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) Quit (Remote host closed the connection)
[14:47] * tupper (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[14:47] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[14:50] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[14:53] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:56] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[14:57] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has joined #ceph
[14:57] * Miouge_ (~Miouge@94.136.92.21) has joined #ceph
[14:58] * vbellur (~vijay@122.178.251.29) has joined #ceph
[14:59] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:00] * jabadia (~jabadia@194.90.7.244) has joined #ceph
[15:01] * Miouge (~Miouge@94.136.92.20) Quit (Ping timeout: 480 seconds)
[15:01] * Miouge_ is now known as Miouge
[15:02] <jabadia> ceph-disk prepare fails on RHEL7 with partx -a, can anyone help? (version 0.80.8 and also 0.87.x)
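A couple of generic first checks for a failing ceph-disk prepare (the device name /dev/sdb is a placeholder; whether partx is actually the root cause here is an assumption worth verifying against the full error output):

```shell
# What does ceph-disk think the disks currently look like?
ceph-disk list

# Re-run the step that failed, verbosely. Note that partx -a can exit
# non-zero when the kernel already knows about the partitions, which
# is harmless but may be reported as a failure by the wrapper.
partx -v -a /dev/sdb
```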
[15:04] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[15:05] * Miouge_ (~Miouge@94.136.92.20) has joined #ceph
[15:06] * tupper (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[15:08] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[15:09] * Miouge (~Miouge@94.136.92.21) Quit (Ping timeout: 480 seconds)
[15:09] * Miouge_ is now known as Miouge
[15:10] * nitti (~nitti@162.222.47.218) has joined #ceph
[15:14] * Manshoon (~Manshoon@199.16.199.4) Quit (Remote host closed the connection)
[15:15] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[15:16] * visualne (~oftc-webi@158-147-148-234.harris.com) has joined #ceph
[15:16] * tupper (~tcole@rtp-isp-nat-pool1-1.cisco.com) has joined #ceph
[15:16] <visualne> Hello, every time I try to an osd to my crush map using this command: ceph osd crush set 50 2.73 pool=default I keep getting the following error: (22) Invalid argument
[15:16] <visualne> anyone have any idea?
[15:20] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[15:22] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) Quit (Remote host closed the connection)
[15:24] * vata (~vata@208.88.110.46) has joined #ceph
[15:30] * bkopilov (~bkopilov@bzq-79-182-164-80.red.bezeqint.net) has joined #ceph
[15:34] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[15:36] <visualne> Anyone at all
[15:37] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[15:38] * dmsimard_away is now known as dmsimard
[15:40] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[15:43] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[15:51] <Be-El> visualne: you are trying to do...what? there's a word missing
[15:52] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[15:52] * cok (~chk@nat-cph5-sys.net.one.com) Quit (Quit: Leaving.)
[15:53] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[15:53] * linjan (~linjan@195.110.41.9) has joined #ceph
[15:56] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) has joined #ceph
[15:57] <visualne> Basically I want to simply add an osd to a crush map.
[15:58] * dgurtner (~dgurtner@178.197.235.143) Quit (Ping timeout: 480 seconds)
[15:58] <Be-El> does ceph already know about the osd?
[15:59] <visualne> well
[15:59] <visualne> if I do a ceph osd tree
[15:59] <visualne> it sees it
[15:59] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:59] * joef1 (~Adium@2601:9:280:f2e:2183:cb4a:d194:ef0c) has joined #ceph
[15:59] <visualne> So the command I was using was ceph osd crush set {id} 2.73 pool=default
[15:59] <visualne> I changed it to ceph osd crush set {id} 2.73 root=default
[15:59] <visualne> and that seemed to work
[15:59] <visualne> however
[16:00] <visualne> when I brought the osd up with ceph -a start osd.50
[16:00] <visualne> it starts however when I do a ceph osd tree
[16:00] <visualne> I dont see it listed as up
[16:01] <visualne> It is still listed as down
[16:01] <Be-El> first of all, the default startup script for osd updates the osd location to root=default, host=<hostname> etc.
[16:01] <Be-El> if you need another location you can configure it in ceph.conf
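The exchange above reduces to a couple of commands; a sketch, with the host name assumed (the osd id and weight come from visualne's earlier messages):

```shell
# "pool=default" is the old spelling of the top-level bucket and is
# rejected with (22) Invalid argument on recent releases; use root=:
ceph osd crush set osd.50 2.73 root=default host=node1

# To keep the startup script from relocating the OSD, the location can
# also be pinned in ceph.conf, as Be-El notes:
#   [osd.50]
#   osd crush location = "root=default host=node1"
```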
[16:01] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[16:01] * rwheeler (~rwheeler@173.48.208.246) Quit (Quit: Leaving)
[16:02] <Be-El> if the osd is not recognized as up after it has started, you should have a look at the osd log file (usually in /var/log/ceph)
[16:03] <visualne> http://pastebin.com/S0kuX2Pf
[16:03] <visualne> the last entry was almost 15 minutes ago
[16:03] <visualne> god damnit
[16:03] <visualne> that monitor is down
[16:03] <visualne> 192.168.75.105 is down
[16:04] <visualne> thats another issue I have to deal with
[16:04] <visualne> it cant talk to 6789 because the damn monitor is down
[16:04] <visualne> Why wont it talk to other monitors
[16:04] <visualne> saying Hey, I'm here
[16:04] <Be-El> how should the osd know about other monitors?
[16:05] <visualne> ceph.conf right?
[16:05] <Be-El> upon start it tries to contact the monitors listed in ceph.conf
[16:05] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[16:05] <visualne> ya tcpdump confirms that
[16:06] <Be-El> if you have several monitors you should list more than one in ceph.conf
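A minimal [global] fragment of the kind Be-El is describing (hostnames and addresses are assumptions based on the 192.168.75.x network mentioned above):

```ini
[global]
mon initial members = CEPH01, CEPH02, CEPH03
mon host = 192.168.75.103,192.168.75.104,192.168.75.105

# With several addresses in mon_host, a starting daemon can fall back to
# another monitor when one (here 75.105) is down.
```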
[16:06] <visualne> ya they are
[16:06] <visualne> each one is listed in ceph.conf
[16:07] <visualne> actually no its not talking to the monitor
[16:07] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) Quit (Remote host closed the connection)
[16:07] * joef1 (~Adium@2601:9:280:f2e:2183:cb4a:d194:ef0c) Quit (Ping timeout: 480 seconds)
[16:07] <visualne> if I do this
[16:07] <Be-El> as [mon] sections or in the global configuration?
[16:07] <visualne> yes
[16:07] * bitserker1 (~toni@178.139.176.225) has joined #ceph
[16:08] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[16:08] <Be-El> can you put the global section in pastebin?
[16:09] * moore (~moore@64.202.160.88) has joined #ceph
[16:09] <visualne> http://pastebin.com/4i8qf49E
[16:10] <visualne> wrong one
[16:10] <visualne> sorry
[16:10] <visualne> here you go
[16:10] <visualne> http://pastebin.com/6UtnHHgb
[16:12] <Be-El> looks ok as far as i know
[16:12] <visualne> so check this out
[16:12] <visualne> this is the output of ceph -w
[16:12] <Be-El> if you change the order of the mon hosts, does the osd try to connect to a different host first?
[16:14] <visualne> http://pastebin.com/Dw7HXVLi
[16:14] <visualne> thats a good idea I can try that
[16:14] <visualne> the monitor on 75.105 isnt starting
[16:14] <visualne> which is another problem I have to look at
[16:14] <visualne> so I think it's trying to talk to 75.105 over and over and over again
[16:14] <visualne> and cant
[16:14] <visualne> and it will just sit there
[16:14] <visualne> but i ran a capture
[16:15] <visualne> and dont see the source port associated with the new osd trying to talk to the monitor
[16:15] <visualne> at all
[16:15] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[16:15] <visualne> This is from the new osd's log
[16:15] <visualne> 0.0.0.0:6803/10452 >> 192.168.75.105:6789/0 pipe(0x27cc500 sd=28 :0 s=1 pgs=0 cs=0 l=1).fault
[16:16] <visualne> does that mean its using src port 6803 to talk to 75.105:6789?
[16:16] <Be-El> yes
[16:16] <visualne> because if I do tcpdump -i eth7 port 6803
[16:16] <visualne> I dont see anything
[16:16] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[16:16] <visualne> I can try changing the order
[16:17] <Be-El> since it is still trying to talk to 75.105, changing the order may resolve the problem
[16:17] <Be-El> i'm wondering why the osd is not trying the other mons from mon_host
[16:20] * ibuclaw (~ibuclaw@host81-150-190-145.in-addr.btopenworld.com) has joined #ceph
[16:21] <ibuclaw> Hi, I've just noticed that I'm getting timeout errors on my debian mirror of ceph.com/debian-firefly
[16:21] * thomnico (~thomnico@82.166.93.197) Quit (Ping timeout: 480 seconds)
[16:22] * fghaas (~florian@zid-vpnn072.uibk.ac.at) has joined #ceph
[16:22] <nwat> loic: yup. actually you can subscribe to tags across the entire stackexchange network, so you get notifications for serverfault which i think has more ceph related questions.
[16:23] <visualne> how can I turn on more verbose osd logging?
[16:23] <visualne> because this isnt telling me why this thing is not joining the cluster at all
[16:27] <volter> Can somebody help me understand bootstraping? When I try to add a new OSD, I need to have some kind of keyring. I don't fully understand where that keyring is supposed to come from and whether it is different for different nodes.
[16:34] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Remote host closed the connection)
[16:34] * hellertime (~Adium@a72-246-185-10.deploy.akamaitechnologies.com) has joined #ceph
[16:34] <Manshoon> @volter, the answer as I understand it is it can be, or you can share one
[16:34] <Manshoon> if you use ceph-deploy it handles the copying of the key
[16:34] <Manshoon> if not you can do it manually
[16:34] <Manshoon> one second ill look something up for you
[16:34] <ibuclaw> It seems to have been happening since 1st Jan. :\
[16:34] <ibuclaw> rsync: failed to connect to ceph.com (208.113.241.137): Connection timed out (110)
[16:34] <ibuclaw> rsync: failed to connect to ceph.com (2607:f298:4:147::b05:fe2a): Network is unreachable (101)
[16:34] <volter> Manshoon: And it's supposed to live in /var/lib/ceph/bootstrap-osd?
[16:34] <ibuclaw> Not a huge problem at the moment, because I'm not upgrading ceph anytime soon. But is there an alternative repo I can mirror off?
[16:34] <Manshoon> looking
[16:34] <Manshoon> here is my keyring that i use on my test cluster
[16:34] <Manshoon> http://pastebin.com/keJPtBp2
[16:34] * wido (~wido@92.63.168.213) has joined #ceph
[16:34] <Manshoon> there is one there, checking how its being used
[16:34] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[16:34] <volter> Is that really the keyring you use for that?
[16:34] <Manshoon> so you are right
[16:34] <Manshoon> there is a separate key there
[16:34] <Manshoon> in /var/lib/ceph/bootstrap-osd?
[16:34] <Manshoon> and its being used for
[16:34] <Manshoon> http://pastebin.com/NX1MZXgQ
[16:34] <Manshoon> which is output from 'ceph auth list'
[16:34] <Manshoon> so as long as your key is listed in that location with the correct perms and is listed correctly in ceph auth that part should work
[16:34] <Manshoon> for osd bootstrap auth
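Done by hand instead of via ceph-deploy, the bootstrap Manshoon describes looks roughly like this (a sketch; the paths are the defaults discussed above, the device name is an example):

```shell
# Export the cluster's bootstrap-osd key onto the new node; it must match
# the client.bootstrap-osd entry shown by 'ceph auth list':
mkdir -p /var/lib/ceph/bootstrap-osd
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring

# ceph-disk (and ceph-deploy under the hood) then use that keyring to
# register new OSDs with the cluster:
ceph-disk prepare /dev/sdb
ceph-disk activate /dev/sdb1
```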
[16:34] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[16:34] * fghaas (~florian@zid-vpnn072.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[16:34] * wido (~wido@92.63.168.213) Quit (Remote host closed the connection)
[16:34] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[16:35] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:35] <Manshoon> im reviewing the auth-config-ref for any details
[16:35] <Manshoon> http://ceph.com/docs/master/rados/configuration/auth-config-ref/
[16:37] <Manshoon> my cluster was setup with ceph-deploy and checking i have the same bootstrap key on every node
[16:37] * alram (~alram@38.122.20.226) has joined #ceph
[16:37] <Manshoon> so it copied it during osd creation and add
[16:37] <volter> Thank you for verifying that!
[16:38] <Manshoon> yep, im still digging, but it looks straightforward
[16:38] <Manshoon> not sure on the logging question to see it in action however
[16:38] <Manshoon> i used this to setup the osd
[16:38] <Manshoon> http://pastebin.com/Tm8hR8ud
[16:38] <Manshoon> so i cheated
[16:38] <Manshoon> i have not tackled the puppet creation of osd that is about a week away
[16:39] <Manshoon> i can post my manifest when i do
[16:39] <Manshoon> im sure i have to answer this question since i used the magic of ceph-deploy to work around the details
[16:39] * itamarl (~itamar@194.90.7.244) has joined #ceph
[16:40] <Manshoon> check out this link for debug
[16:40] <Manshoon> http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/
[16:40] <Manshoon> that should at least give you more than you have now
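The page linked above boils down to something like this for an OSD that won't join (a sketch; the osd id is taken from the earlier discussion):

```shell
# Raise logging on a running OSD (needs a working monitor connection):
ceph tell osd.50 injectargs '--debug-osd 20 --debug-ms 1'

# If the daemon can't reach the monitors, set it in ceph.conf instead
# and restart the OSD:
#   [osd]
#   debug osd = 20
#   debug ms = 1
```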
[16:40] <itamarl> Hi, anyone had trouble using ceph-disk under rhel7? I can't seem to be able to create an OSD.
[16:41] <itamarl> it keeps nagging me about partx getting wrong params
[16:41] <Manshoon> not yet, but some time later in the week i have to get it done
[16:41] <Manshoon> i can report back
[16:41] <itamarl> Same thing with ceph-deploy, which seems to use ceph-disk in the same way
[16:41] <Manshoon> at least it's consistent
[16:41] <itamarl> true
[16:43] <jabadia> that does not make sense, no one installs ceph on RHEL7?
[16:44] <terje__> yea same problem here
[16:44] * terje__ is now known as terje
[16:45] <terje> so, yea does that mean ceph isn't working on RH7?
[16:45] <burley_> we used parted to partition our drives manually first
[16:46] <burley_> well, it wasn't manual -- but it wasn't done by ceph-disk
[16:46] <itamarl> I happily used ceph-disk up until rhel6.5
[16:46] <itamarl> need to start doing it manually? it doesn't make sense!
[16:47] <burley_> we wanted the partitioning done a bit special, so that might have been why
[16:47] * fghaas (~florian@zid-vpnn082.uibk.ac.at) has joined #ceph
[16:47] <itamarl> so, parted to create the partition and ceph-disk prepare pointed at the partition and not the block device?
[16:47] <burley_> but I do recall a warning or error when we let ceph-disk do it, but iirc it was ignorable
[16:49] * bitserker1 (~toni@178.139.176.225) Quit (Ping timeout: 480 seconds)
[16:49] <burley_> itamarl: here's what we do, sans our logic: https://pastee.org/tk5a8
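The gist of that script, for anyone who can't reach the paste (a sketch; device names are examples, and this assumes the partx complaints are the only blocker):

```shell
# Partition with parted up front, then point ceph-disk at the finished
# partition instead of the raw disk:
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary xfs 1MiB 100%
ceph-disk prepare --fs-type xfs /dev/sdb1
ceph-disk activate /dev/sdb1
```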
[16:49] <itamarl> thanks, looking..
[16:49] <wintamut1> using librados with python, do i need to do manual striping? the documentation seems to disagree with the release notes
[16:51] * _prime_ (~oftc-webi@199.168.44.192) has joined #ceph
[16:51] <ibuclaw> Is there a reason why you can't mirror from the debian-ceph repo anymore?
[16:51] <itamarl> thanks burley_ .. will give this a go
[16:52] * wintamut1 is now known as wintamute
[16:54] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[16:56] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[16:57] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) has joined #ceph
[16:57] * agshew_ (~agshew@host-69-145-59-76.bln-mt.client.bresnan.net) has joined #ceph
[16:58] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:58] * itamarl (~itamar@194.90.7.244) Quit (Quit: leaving)
[16:59] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[17:00] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has joined #ceph
[17:02] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[17:04] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:06] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) Quit (Quit: Away)
[17:07] * togdon (~togdon@74.121.28.6) has joined #ceph
[17:09] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) has joined #ceph
[17:11] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[17:11] * fghaas (~florian@zid-vpnn082.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[17:16] <flaf> Hi, if I use a block device (instead of a path) for the "osd journal" parameter in ceph.conf, must I also provide the "osd journal size" parameter?
[17:16] <flaf> Or will the journal size automatically be the size of the block device?
[17:17] <haomaiwa_> flaf: Yes, if you set "osd journal size=0", the size will be automatically calculated
[17:18] <flaf> haomaiwa_: ok, thx. Sorry it was in the documentation.
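As a config fragment, what flaf and haomaiwa_ settled on (the osd id and device path are examples):

```ini
[osd.0]
# Journal on a raw block device instead of a file path:
osd journal = /dev/disk/by-partlabel/osd-0-journal
# Per the answer above, 0 makes ceph size the journal from the device:
osd journal size = 0
```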
[17:19] * jclm (~jclm@209.49.224.62) has joined #ceph
[17:19] * gregmark (~Adium@68.87.42.115) has joined #ceph
[17:20] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[17:21] * Vacuum_ (~vovo@i59F7A16C.versanet.de) has joined #ceph
[17:24] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:24] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[17:24] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:25] * ccheng (~ccheng@128.211.165.1) has joined #ceph
[17:28] * Vacuum (~vovo@88.130.209.121) Quit (Ping timeout: 480 seconds)
[17:28] * joshd1 (~jdurgin@24-205-54-236.dhcp.gldl.ca.charter.com) has joined #ceph
[17:35] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[17:36] * fghaas (~florian@zid-vpnn072.uibk.ac.at) has joined #ceph
[17:37] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[17:42] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[17:42] * evilrob0_ (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (Read error: No route to host)
[17:43] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[17:49] * ircolle (~ircolle@38.122.20.226) has joined #ceph
[17:57] * bandrus (~brian@197.sub-70-211-68.myvzw.com) has joined #ceph
[17:57] * thomnico (~thomnico@bzq-218-90-50.red.bezeqint.net) has joined #ceph
[17:58] * cholcombe973 (~chris@pool-108-42-144-175.snfcca.fios.verizon.net) has joined #ceph
[17:58] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[18:01] <visualne> apparently I have 362 unclean placement groups
[18:01] <visualne> that I am unable to repair
[18:01] * madkiss (~madkiss@2001:6f8:12c3:f00f:d863:71cb:876d:f4ee) has joined #ceph
[18:01] * ade (~abradshaw@193.202.255.218) Quit (Ping timeout: 480 seconds)
[18:04] * gfidente (~gfidente@0001ef4b.user.oftc.net) has joined #ceph
[18:04] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:04] * gfidente (~gfidente@0001ef4b.user.oftc.net) Quit ()
[18:06] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[18:07] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[18:09] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[18:10] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[18:15] * bandrus1 (~brian@50.23.113.232) has joined #ceph
[18:16] * ibuclaw (~ibuclaw@host81-150-190-145.in-addr.btopenworld.com) Quit (Quit: Leaving)
[18:20] * bandrus (~brian@197.sub-70-211-68.myvzw.com) Quit (Ping timeout: 480 seconds)
[18:21] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:22] * BManojlovic (~steki@cable-89-216-240-92.dynamic.sbb.rs) has joined #ceph
[18:23] * Vacuum (~vovo@88.130.215.148) has joined #ceph
[18:24] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[18:25] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[18:25] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[18:25] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:25] * Nacer (~Nacer@176.31.89.99) has joined #ceph
[18:27] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[18:27] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[18:30] * Vacuum_ (~vovo@i59F7A16C.versanet.de) Quit (Ping timeout: 480 seconds)
[18:34] * linjan (~linjan@213.8.240.146) has joined #ceph
[18:36] * hellertime1 (~Adium@pool-173-48-56-84.bstnma.fios.verizon.net) has joined #ceph
[18:36] * hellertime (~Adium@a72-246-185-10.deploy.akamaitechnologies.com) Quit (Read error: Connection reset by peer)
[18:37] <fghaas> quick question about a semi-obvious limitation in ceph-deploy that is not so obviously documented:
[18:38] * jabadia (~jabadia@194.90.7.244) Quit (Remote host closed the connection)
[18:38] <fghaas> is it fair to say that ceph-deploy's general assumption is that all hosts will be on the same public_network, so in case you're deploying a mon to a remote network, you will have to manually hack the generated ceph.conf before you run ceph mon create?
[18:39] <fghaas> or is there an easier way of overriding that in ceph-deploy?
[18:40] <fghaas> side note: come to think of it: http://ceph.com/docs/master/rados/deployment/ceph-deploy-new/ doesn't really mention that if you do want to tweak your initial ceph.conf, right after ceph-deploy new would be the right time
[18:41] * Nacer (~Nacer@176.31.89.99) Quit (Remote host closed the connection)
[18:41] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:43] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[18:49] * Nacer (~Nacer@2001:41d0:fe82:7200:80c6:4b7:8064:49ee) has joined #ceph
[18:49] * thomnico (~thomnico@bzq-218-90-50.red.bezeqint.net) Quit (Quit: Ex-Chat)
[18:49] * thomnico (~thomnico@bzq-218-90-50.red.bezeqint.net) has joined #ceph
[18:49] <fghaas> nevermind; grepping the doc tree yielded that it's in http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
[18:50] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[18:53] * agshew_ (~agshew@host-69-145-59-76.bln-mt.client.bresnan.net) Quit (Ping timeout: 480 seconds)
[18:53] * fghaas1 (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[18:54] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[18:54] <jcsp> fghaas: the --public-network setting to "ceph-deploy new" might be what you're looking for? Saves you the step of manually setting the net in the ceph.conf
[18:55] <jcsp> alfredo heroically added it a while back :-)
[18:56] <fghaas1> jcsp: I did see that, but that's *during* ceph-deploy new... that doesn't really cover the use case where I have 3 mons on initial config and then a 4th and 5th one *on a different network* added later
[18:57] <fghaas1> jcsp: I thought ceph-deploy mon create <host> --address <address> would do the trick, but apparently that still pushes the ceph.conf unchanged
[18:58] <fghaas1> so I guess it's take the old ceph.conf, modify the public_network, and then push that out to the new mon
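fghaas's conclusion, spelled out as a sketch (the subnets, hostname and address are assumptions; --address is the flag he mentions above):

```shell
# 1. Widen public_network in ceph.conf so it also covers the remote
#    mon's subnet (comma-separated networks are accepted), e.g.:
#      public network = 192.168.75.0/24, 10.20.30.0/24
# 2. Push the edited conf together with the new mon:
ceph-deploy --overwrite-conf mon create mon4 --address 10.20.30.5
```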
[18:58] <jcsp> so these are mons that are on different subnets but can route to each other?
[18:58] <jcsp> exotic!
[18:58] <fghaas1> jcsp: yeah, that's the scenario
[18:58] <fghaas1> and yeah I know it's exotic
[18:59] <fghaas1> but given the fact that all mon comms are just tcp, it shouldn't be a hindrance
[19:00] <fghaas1> now of course I do realize that that makes the router itself a point of failure, but if there's only one mon in each remote network (and the network paths to multiple networks don't share a single router), then at least it's not a SPOF that can bring the whole cluster down
[19:00] <fghaas1> or am I missing something obvious here?
[19:00] <devicenull> lol
[19:00] <devicenull> client io 925 GB/s rd, 198 GB/s wr, 1063 kop/s
[19:01] <devicenull> all that on a gigabit network!
[19:01] * fghaas (~florian@zid-vpnn072.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[19:01] * fghaas1 is now known as fghaas
[19:03] <jcsp> fghaas1: yeah I guess it's legal, I wouldn't hold your breath for explicit support in ceph-deploy though
[19:03] <jcsp> *shrug*
[19:04] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[19:05] <cholcombe973> devicenull: you sure that's not 925Gb/s? that seems awfully high
[19:05] <devicenull> that's what ceph status says... I'm pretty sure it's wrong
[19:05] <gleam> 925gbps would seem awfully high too
[19:05] * thomnico (~thomnico@bzq-218-90-50.red.bezeqint.net) Quit (Quit: Ex-Chat)
[19:05] <cholcombe973> lol
[19:06] <devicenull> it only was showing gbps intermittently
[19:06] <cholcombe973> yeah i'm pretty sure it's wrong also :D
[19:06] * vasu (~vasu@38.122.20.226) has joined #ceph
[19:06] <cholcombe973> i see
[19:11] <fghaas> jcsp: no I wasn't asking for support in ceph-deploy, just checking whether there was any cleverer way than I had thought of
[19:11] * jwilkins (~jwilkins@95.sub-70-211-134.myvzw.com) has joined #ceph
[19:12] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[19:12] * Nacer (~Nacer@2001:41d0:fe82:7200:80c6:4b7:8064:49ee) Quit (Remote host closed the connection)
[19:14] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[19:16] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[19:16] * mykola (~Mikolaj@91.225.201.255) has joined #ceph
[19:16] * nsantos (~Nelson@193.137.208.253) Quit (Ping timeout: 480 seconds)
[19:18] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[19:19] * Nacer (~Nacer@176.31.89.99) has joined #ceph
[19:19] * cdelatte (~cdelatte@2606:a000:dd42:9e00:3e15:c2ff:feb8:dff8) has joined #ceph
[19:21] <fghaas> but thanks, jscp :)
[19:23] <fghaas> s/jscp/jcsp/
[19:23] <kraken> fghaas meant to say: but thanks, jcsp :)
[19:29] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[19:29] * vbellur (~vijay@122.178.251.29) Quit (Read error: Connection reset by peer)
[19:31] * diegows (~diegows@190.190.5.238) Quit (Read error: Connection reset by peer)
[19:38] <visualne> does a ceph cluster NEED 3 monitors to run? For example I have one monitor currently out of quorum
[19:38] * lalatenduM (~lalatendu@122.172.32.115) has joined #ceph
[19:38] <devicenull> no
[19:38] <visualne> And this may or may not be the root cause to the issues I am seeing right now in this cluster
[19:39] <devicenull> but if you lose another monitor, your cluster will break (assuming you had 3 monitors initially, and 1 is down)
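devicenull's point is just majority math: the cluster stays up while more than half of the defined monitors are in quorum. A quick sanity check in plain shell arithmetic:

```shell
# Monitors needed for quorum is a strict majority: floor(n/2) + 1.
for n in 1 2 3 4 5; do
  echo "mons=$n quorum_needs=$(( n / 2 + 1 ))"
done
# So with 3 mons, losing 1 is fine (2 still up, 2 needed);
# losing 2 breaks the cluster.
```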
[19:41] <visualne> do OSDs need to be explicitly listed in a ceph.conf file?
[19:42] * joshd1 (~jdurgin@24-205-54-236.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[19:43] * sputnik13 (~sputnik13@74.202.214.170) has joined #ceph
[19:43] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:43] * Nacer (~Nacer@176.31.89.99) Quit (Remote host closed the connection)
[19:44] * Nacer (~Nacer@176.31.89.99) has joined #ceph
[19:44] <devicenull> no
[19:45] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:46] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[19:48] * togdon (~togdon@74.121.28.6) has joined #ceph
[19:48] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[19:52] * Nacer (~Nacer@176.31.89.99) Quit (Ping timeout: 480 seconds)
[19:53] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[19:56] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) Quit (Quit: WeeChat 1.1.1)
[19:57] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[19:58] * branto (~borix@178-253-141-146.3pp.slovanet.sk) has left #ceph
[20:01] * debian112 (~bcolbert@24.126.201.64) has left #ceph
[20:03] * xarses (~andreww@12.164.168.117) has joined #ceph
[20:04] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[20:06] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[20:11] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[20:11] * kawa2014 (~kawa@90.216.134.197) Quit (Quit: Leaving)
[20:13] <visualne> when trying to start my monitor I keep getting this: unable to open monitor store at /var/lib/ceph/mon/ceph-admin
[20:18] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[20:21] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[20:22] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[20:26] * fattaneh (~fattaneh@31.59.50.129) has joined #ceph
[20:28] <visualne> I have a question
[20:28] <visualne> this is a good question
[20:30] <visualne> when I talk to admin-socket for a monitor. With a command like this: ceph --admin-daemon /var/run/ceph/ceph-mon.CEPH01.asok log dump
[20:30] <visualne> It is currently returning to me an empty json string
[20:30] <visualne> {}
[20:30] <visualne> also when I try other commands like
[20:30] * cholcombe973 (~chris@pool-108-42-144-175.snfcca.fios.verizon.net) Quit (Remote host closed the connection)
[20:31] * cholcombe973 (~chris@pool-108-42-144-175.snfcca.fios.verizon.net) has joined #ceph
[20:33] * davidzlap (~Adium@2605:e000:1313:8003:7456:1013:da06:fd6f) has joined #ceph
[20:35] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:36] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[20:37] <visualne> I try to start a monitor
[20:37] <visualne> it hangs and never binds to 6789
[20:37] <visualne> the create-keys process just hangs there for some reason
[20:44] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[20:45] <visualne> if I try to communicate with the admin-socket of the monitor with this command: ceph --admin-daemon /var/run/ceph/ceph-mon.CEPH01.asok mon_status. I receive this error: read only got 0 bytes of 4 expected for response length; invalid command?
[20:45] * togdon (~togdon@74.121.28.6) has joined #ceph
[20:46] <visualne> but if I do that on a ceph monitor that's working I get output. So my question is: where is the data that the command returns actually "kept"? I believe it's in some kind of local data store, yet I dont know where. I think the reason the monitor wont start is because it cant talk to that datastore for some reason
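For the record, the local store visualne is guessing at is the monitor's key/value store under its data directory; the asok itself only answers while ceph-mon is running. A sketch (the mon id comes from the commands above):

```shell
# The monitor's on-disk store (a leveldb database on recent releases):
ls /var/lib/ceph/mon/ceph-CEPH01/store.db

# The same mon_status query via the newer shorthand, once the daemon is up:
ceph daemon mon.CEPH01 mon_status
```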
[20:48] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[20:50] * skorgu (skorgu@pylon.skorgu.net) has joined #ceph
[20:56] * nitindo (~nitindo@49.248.200.184) has joined #ceph
[20:56] * lalatenduM (~lalatendu@122.172.32.115) Quit (Quit: Leaving)
[20:57] * lalatenduM (~lalatendu@122.172.32.115) has joined #ceph
[20:57] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:57] <nitindo> Hi all, I have a question, if osd pool default size = 3, then how many copies of a single object will be created? primary+3 or primary+2?
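nitindo's question went unanswered in channel; for the record, "size" is the total number of copies, so size=3 means a primary plus two replicas. It can be checked per pool (pool name is an example):

```shell
# "size" counts every copy, primary included:
ceph osd pool get rbd size
# size: 3  -> 1 primary + 2 replica copies
```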
[20:57] * fattaneh (~fattaneh@31.59.50.129) Quit (Remote host closed the connection)
[20:58] * lalatenduM (~lalatendu@122.172.32.115) Quit ()
[20:58] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:58] * anorak (~gabrielp1@x590c3a34.dyn.telefonica.de) has joined #ceph
[21:01] <anorak> hello all...question. I just setup a 4 node ceph cluster consisting of an admin node, a monitor node and two storage nodes. Inserting a new osd (third storage node) changes the health of my cluster from OK to WARN, along with a warning about too few pgs. Increasing the number of pgs solves the problem. Question is...what's the logic behind it? Is it warning me that my cluster is underutilized??
[21:01] <devicenull> anorak: with a low PG count, ceph can't spread the data around to all the nodes efficiently
[21:01] <devicenull> so it warns you to raise it
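The fix devicenull is pointing at (pool name and target count are examples):

```shell
# Raise the placement-group count, then the placement count to match:
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256
```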
[21:02] <anorak> ah ok. thanks devicenull! :)
[21:02] * froge (~froge@pool-108-24-72-145.cmdnnj.fios.verizon.net) has joined #ceph
[21:04] <anorak> also, i am intending to use my ceph cluster as a cephfs. now by default (using quick install), my cluster consists of 1 pool and 64 pgs. so far so good. as part of the first step to create a cephfs, we create a fs along with a number of PGs. Question: after I have created a cephFS along with PGs...are the SAME PGs (64) mapped to this FS or are additional PGs created?
[21:05] <devicenull> no, pgs are not shared
[21:05] <devicenull> each pool is made up of a number of pgs
[21:05] <devicenull> when you add a new pool, you get new pgs
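So for anorak's cephfs plan the usual pattern is two new pools, each with its own pgs (names and pg counts are examples):

```shell
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
# Both pools get their own pgs; the default pool's 64 are untouched.
ceph fs new cephfs cephfs_metadata cephfs_data
```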
[21:05] * hellertime1 (~Adium@pool-173-48-56-84.bstnma.fios.verizon.net) Quit (Read error: Connection reset by peer)
[21:05] <anorak> so this would imply that i should remove the default pool alongside with the pgs
[21:05] * hellertime (~Adium@pool-173-48-56-84.bstnma.fios.verizon.net) has joined #ceph
[21:06] <devicenull> I dont really think the default pool is going to hurt you
[21:06] <anorak> Ook
[21:07] <anorak> since cephFS is not production ready.... what would be a good approach? create a pool for FS with a N replication factor or an erasure coded pool for FS?
[21:07] <devicenull> I have no experience with cephfs
[21:09] <anorak> if the same question was in context of block storage...which one would be more advantageous? If i have gotten it correctly, an erasure coded pool is both reliable and uses less space whereas with N replication, some space is bound up in the replicated pool
[21:09] <devicenull> erasure coded has a cpu hit, iirc
[21:10] <devicenull> we just use normal pools, disk space is cheap enough for us that it doesnt matter
[21:10] <anorak> ah ok. did not know that (cpu hit)!
[21:11] <visualne> ever since a restart a ceph monitor will not come back up
[21:11] <anorak> any hints in the ceph logs?
[21:12] <skorgu> if I have size=3, min_size=2 and I nuke 2 journal+osd disks at the same time should I expect the cluster to be able to recover on its own?
[21:12] <skorgu> (i.e. journal and data on two partitions of the same physical disk)
[21:13] <visualne> I dont see anything of any value in the ceph monitor logs
[21:14] <anorak> @visualne...i had the same problem and i found a hint in the syslog ...assuming you are on linux. My problem had something to do with the system instead of ceph.
[21:14] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[21:15] <anorak> @skorgu...theoretically yes but I have not tried that myself unfortunately.
[21:17] <skorgu> I did. Ended up with 4 pgs incomplete that didn't recover until I brought one of the disks back online.
[21:17] * jwilkins (~jwilkins@95.sub-70-211-134.myvzw.com) Quit (Ping timeout: 480 seconds)
[21:17] <devicenull> skorgu: giant?
[21:17] <skorgu> so I'm not sure if that should work and something is wrong, if it shouldn't work and I don't understand the guarantees of size and min_size (most likely) or if my raid controllers are piles of shit
[21:17] <devicenull> giant is pretty bad at recovery
[21:18] <skorgu> 0.92
[21:18] <skorgu> we're aiming to go prod with hammer
[21:18] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[21:19] <anorak> @devicenull...that's news for me. which one would you recommend if not giant?
[21:19] <devicenull> we had no problems with firefly
[21:19] <devicenull> then we upgraded to giant, and we've been seeing pgs get stuck forever
[21:19] <devicenull> we've had to do entire cluster reboots multiple times to try and get them working
[21:20] <anorak> thanks. Ironic since it is my fav tv show :)
[21:20] <devicenull> I'm up to seeing this on three different clusters now
[21:21] <skorgu> yikes
[21:21] * debian1121 (~bcolbert@24.126.201.64) has joined #ceph
[21:22] <devicenull> we hit the point we were going to pay for a support contract... but they wanted a ton of money given we didn't need anything else
[21:23] <skorgu> did you just end up going back to firefly?
[21:23] <devicenull> no, we're just dealing with it
[21:23] <devicenull> afaik downgrades not well supported
[21:23] <skorgu> charming
[21:24] <anorak> great (sarcasm)
[21:25] * nitindo (~nitindo@49.248.200.184) Quit (Quit: Leaving)
[21:25] <skorgu> my plan of running raid 1 under OSDs is looking better and better
[21:26] <anorak> skorgu: that is also a waste of hard disks in my opinion. Provided firefly does not suffer from the same things as giant....
[21:26] * lcurtis (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[21:27] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) Quit (Quit: Connection closed for inactivity)
[21:27] <skorgu> cost(disk) <<<< cost(downtime)
[21:27] <anorak> true :)
[21:28] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[21:29] <skorgu> I'm not quite sure how to compare size=4 on bare disks with size=2 on raid1'd disks
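One way to frame the comparison skorgu raises is raw-capacity cost: both layouts end up consuming four disks' worth of space per object. The arithmetic below is only a back-of-envelope sketch and says nothing about recovery behaviour or failure domains:

```shell
# Raw disks consumed per logical object (illustrative arithmetic only).
bare_size=4
bare_disks=$((bare_size * 1))   # size=4 on bare disks: 4 copies
raid_size=2
raid_disks=$((raid_size * 2))   # size=2, each OSD on a RAID1 pair: 2 copies x 2 disks
echo "$bare_disks $raid_disks"  # same raw cost either way
```

The difference is therefore not capacity but where failures are handled: size=4 lets Ceph re-replicate across hosts, while RAID1 hides a single-disk failure from Ceph entirely.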
[21:29] * georgem (~Adium@fwnat.oicr.on.ca) has left #ceph
[21:29] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[21:30] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[21:30] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[21:31] * linjan (~linjan@213.8.240.146) has joined #ceph
[21:32] <anorak> skorgu: sorry, I didn't get you. how many slots does your physical server have for hard disks? sorry for the basic question, just trying to picture your setup. :)
[21:34] * L2SHO__ (~L2SHO@2001:19f0:1000:5123:f42b:da79:464:6ac9) has joined #ceph
[21:35] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[21:37] * ganders (~root@200.32.121.70) Quit (Quit: WeeChat 0.4.2)
[21:38] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[21:41] * shaunm (~shaunm@74.215.76.114) Quit (Read error: Connection timed out)
[21:41] * L2SHO_ (~L2SHO@2001:19f0:1000:5123:8c84:23f:8ca:f675) Quit (Ping timeout: 480 seconds)
[21:41] <visualne> Anyone know why, when trying to add another ceph monitor, ceph -w doesn't give me any output at all?
[21:42] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[21:42] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[21:44] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[21:45] <anorak> visualne: how many monitor nodes do you have?
[21:45] <visualne> well we just tried to add a fourth using this document http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
[21:46] * L2SHO__ is now known as L2SHO
[21:46] <anorak> i have three and had the same problem with only two monitor nodes. I assumed that it must be due to the quorum principle. after adding the third, my issue got solved
[21:47] <anorak> resolved*
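anorak's experience matches the usual guidance: run an odd number of monitors, since quorum needs a strict majority and an even count adds no extra failure tolerance. A sketch of a three-monitor ceph.conf (hostnames and addresses here are illustrative, not visualne's actual setup):

```ini
[global]
# Three monitors tolerate one failure (2 of 3 still form a majority).
# A fourth monitor would require 3 up for quorum, gaining nothing.
mon initial members = mon-a, mon-b, mon-c
mon host = 192.168.75.104, 192.168.75.105, 192.168.75.106
```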
[21:50] * nsantos (~Nelson@bl21-94-62.dsl.telepac.pt) has joined #ceph
[21:52] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[21:53] * Manshoon_ (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) has joined #ceph
[21:54] * rwheeler (~rwheeler@173.48.208.246) has joined #ceph
[21:54] * anorak (~gabrielp1@x590c3a34.dyn.telefonica.de) Quit (Quit: Leaving)
[21:56] * ggray (~gabrielp1@x590c3a34.dyn.telefonica.de) has joined #ceph
[21:57] * ggray (~gabrielp1@x590c3a34.dyn.telefonica.de) Quit ()
[21:58] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[21:58] * PaulC (~paul@122-60-36-115.jetstream.xtra.co.nz) has joined #ceph
[21:59] * Manshoon (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[22:01] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[22:02] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit ()
[22:02] * jskinner (~jskinner@host-95-2-129.infobunker.com) has joined #ceph
[22:03] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[22:04] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[22:05] * nsantos (~Nelson@bl21-94-62.dsl.telepac.pt) Quit (Quit: Leaving)
[22:05] * nsantos (~Nelson@bl21-94-62.dsl.telepac.pt) has joined #ceph
[22:05] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) has joined #ceph
[22:08] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[22:10] * nitti (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[22:11] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[22:11] * Manshoon_ (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) Quit (Ping timeout: 480 seconds)
[22:11] * shaunm (~shaunm@74.215.76.114) Quit (Read error: Connection timed out)
[22:12] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[22:12] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[22:14] * jwilkins (~jwilkins@38.122.20.226) has joined #ceph
[22:14] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[22:14] * xarses (~andreww@216-31-228-210.static-ip.telepacific.net) has joined #ceph
[22:18] <visualne> If I have a down monitor
[22:18] <visualne> why the hell does ceph still try to talk to it
[22:18] <visualne> ceph -s shows me this 192.168.75.106:0/4584 >> 192.168.75.105:6789/0
[22:19] <visualne> the monitor on 75.105 is down
[22:19] <visualne> yet the other monitors are always trying to talk to it
[22:19] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) Quit (Remote host closed the connection)
[22:19] <visualne> and I believe its preventing me from getting basic information from ceph -w
[22:21] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:21] * alram_ (~alram@38.122.20.226) has joined #ceph
[22:23] * diegows (~diegows@190.190.5.238) has joined #ceph
[22:25] * Sysadmin88 (~IceChat77@2.125.213.8) has joined #ceph
[22:25] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[22:28] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[22:33] <visualne> what exactly is being contacted when you do ceph -w
[22:33] <visualne> because we have a cluster we have no visibility into right now at all
[22:33] <visualne> we tried to add another monitor, and that monitor apparently has messed up the entire thing
[22:35] * mykola (~Mikolaj@91.225.201.255) Quit (Quit: away)
[22:37] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[22:44] * cdelatte (~cdelatte@2606:a000:dd42:9e00:3e15:c2ff:feb8:dff8) Quit (Quit: This computer has gone to sleep)
[22:45] * georgem (~Adium@184.151.190.234) has joined #ceph
[22:45] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[22:46] * diegows (~diegows@190.190.5.238) Quit (Remote host closed the connection)
[22:48] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) has joined #ceph
[22:48] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[22:51] * Doug (uid69720@id-69720.ealing.irccloud.com) has joined #ceph
[22:51] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) Quit ()
[22:53] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:55] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has left #ceph
[22:57] * mfa298 (~mfa298@gateway.yapd.net) Quit (Ping timeout: 480 seconds)
[22:58] * mfa298 (~mfa298@gateway.yapd.net) has joined #ceph
[23:00] * nitti (~nitti@162.222.47.218) has joined #ceph
[23:04] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[23:13] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[23:16] * segutier_ (~segutier@198.23.71.90-static.reverse.softlayer.com) has joined #ceph
[23:19] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[23:19] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:21] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[23:21] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[23:21] * thb (~me@2a02:2028:11c:2c51:d001:a868:f0dc:ab9c) has joined #ceph
[23:26] * segutier_ (~segutier@198.23.71.90-static.reverse.softlayer.com) Quit (Ping timeout: 480 seconds)
[23:26] <flaf> Hi, in terms of performance, is it equivalent to a) give an OSD a 10GB block device /dev/sdb1 as its journal (osd journal = /dev/sdb1, osd journal size = 0), or b) format /dev/sdb1 as xfs, mount the filesystem at /aa/bb/, and set osd journal = /aa/bb/journal, osd journal size = 10000?
[23:28] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[23:30] <flaf> I ask because I prefer using a file path: it's simpler, I can put just one line in ceph.conf, like "osd journal = /aa/$cluster-$id/journal" in the [osd] section.
[23:32] <flaf> If I directly use block device, unless I misunderstood, I must one [osd.$id] section per osd with "osd journal = <the block device>".
[23:33] <flaf> s/t o/t have o/
[23:33] <kraken> flaf meant to say: If I directly use block device, unless I misunderstood, I must have one [osd.$id] section per osd with "osd journal = <the block device>".
[23:33] * georgem (~Adium@184.151.190.234) Quit (Quit: Leaving.)
[23:35] <flaf> Furthermore, if I remove and reinstall an osd, the id will increase (I can't choose the id) and I must change the ceph.conf file, etc.
[23:35] * xarses (~andreww@216-31-228-210.static-ip.telepacific.net) Quit (Ping timeout: 480 seconds)
[23:35] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[23:36] <flaf> I wish to avoid [osd.$id] section in the conf.
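The two styles flaf is weighing look like this in ceph.conf (paths are the ones flaf quotes; the osd id is illustrative):

```ini
# Option (a): raw block device as journal -- needs one [osd.$id]
# section per OSD, which is what flaf wants to avoid.
[osd.0]
osd journal = /dev/ssdb1
osd journal size = 0

# Option (b): journal file on a mounted filesystem -- one line in
# [osd] covers every OSD via the $cluster/$id variables.
[osd]
osd journal = /aa/$cluster-$id/journal
osd journal size = 10000
```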
[23:36] * moore (~moore@64.202.160.88) has joined #ceph
[23:37] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[23:37] <fghaas> flaf, you couldn't be more wrong :)
[23:38] <flaf> ah :)
[23:38] <fghaas> you haven't needed these osd journal lines since ceph-deploy came around
[23:38] * BManojlovic (~steki@cable-89-216-240-92.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:39] <fghaas> in 99% of contemporary Ceph deployments, /var/lib/ceph/osd/ceph-<num>/journal (which is the default journal location) is simply a symlink to whatever you have directed ceph-deploy to point to
[23:40] <fghaas> so, if you do "ceph-deploy osd create foo:sdc:sdj1" it will initialize a journal on /dev/sdj1, create a filesystem on /dev/sdc, and within it create a "journal" symlink that points to /dev/sdj1
[23:40] <flaf> Ah ok, I can let "osd journal = /var/lib/ceph/osd/$cluster-$id/journal" and just make a symlink to a block device, is that correct?
[23:40] <fghaas> note, the above is actually an oversimplification, it will refer to the device by GPT partition uuid, but you get the idea
[23:41] <fghaas> no, you just let ceph-deploy make your symlink
[23:41] <fghaas> and you can drop that line from your ceph.conf altogether because it's the built-in default
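The symlink layout fghaas describes can be mimicked by hand to see why no ceph.conf line is needed: the default journal path inside the OSD data directory simply points at the block device. This is only a sketch (a temporary directory stands in for /var/lib/ceph/osd/ceph-<num>, and /dev/sdj1 need not exist for the symlink itself):

```shell
# Mimic the OSD data dir layout; ceph-deploy creates this symlink for you.
osd_dir=$(mktemp -d)
ln -s /dev/sdj1 "$osd_dir/journal"

# The OSD just opens <data dir>/journal and follows the link.
readlink "$osd_dir/journal"
```

As fghaas notes, real deployments link to the GPT partition uuid under /dev/disk/by-partuuid/ rather than the raw /dev/sdX name, so the link survives device reordering.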
[23:41] <flaf> ok, but I don't use ceph-deploy, I use puppet in fact.
[23:41] <fghaas> which module, puppet-ceph or puppet-cephdeploy?
[23:42] <flaf> a personal module. I have not found a module for firefly on ubuntu trusty.
[23:43] <fghaas> okay, then build your personal module around ceph-deploy or else you'll be horribly reinventing the wheel
[23:44] <fghaas> even if you *do* build your module around ceph-deploy the better advice is probably to use https://github.com/dontalton/puppet-cephdeploy instead and fix what doesn't work ... but that's just a suggestion
[23:44] * moore (~moore@64.202.160.88) Quit (Ping timeout: 480 seconds)
[23:45] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[23:47] <flaf> In fact, my problem with ceph-deploy (it's personal) is that it's not a good way to learn how things work; it's too obscure for me.
[23:47] * nitti (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[23:48] <flaf> But thx fghaas for the answer. I didn't think a symlink was possible.
[23:49] <fghaas> good luck :)
[23:49] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[23:49] <flaf> thx. :)
[23:50] * scuttlemonkey is now known as scuttle|afk
[23:52] <flaf> I didn't know about this puppet module. I had only seen https://github.com/ceph/puppet-ceph
[23:54] * scuttle|afk is now known as scuttlemonkey
[23:54] * jskinner (~jskinner@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[23:55] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[23:57] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.