#ceph IRC Log


IRC Log for 2015-05-05

Timestamps are in GMT/BST.

[0:00] * Stevec (~oftc-webi@zeppo.clusters.umaine.edu) Quit (Quit: Page closed)
[0:01] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[0:01] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[0:08] * puffy (~puffy@216.207.42.129) has joined #ceph
[0:09] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Ping timeout: 480 seconds)
[0:10] * cok (~chk@nat-cph1-sys.net.one.com) has joined #ceph
[0:14] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:16] * loganb (~loganb@vpngac.ccur.com) Quit (Quit: Leaving)
[0:16] * Bobby (~Guest1390@thoreau.gtor.org) has joined #ceph
[0:17] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[0:24] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[0:25] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[0:28] * linjan (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[0:28] * yanzheng (~zhyan@171.216.95.141) Quit (Quit: This computer has gone to sleep)
[0:29] * daniel2_ (~daniel@209.163.140.194) Quit (Ping timeout: 480 seconds)
[0:37] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[0:38] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:40] * clayb (~clayb@c-98-245-91-148.hsd1.co.comcast.net) has joined #ceph
[0:45] * shaunm (~shaunm@74.215.76.114) Quit (Ping timeout: 480 seconds)
[0:46] * Bobby (~Guest1390@7R2AAAC2L.tor-irc.dnsbl.oftc.net) Quit ()
[0:46] * narthollis (~Solvius@185.36.100.145) has joined #ceph
[0:47] * fsimonce (~simon@host11-35-dynamic.32-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:52] * rendar (~I@host248-178-dynamic.17-87-r.retail.telecomitalia.it) Quit ()
[0:57] * gsilvis_ (~andovan@c-75-69-162-72.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[1:00] * gsilvis (~andovan@c-75-69-162-72.hsd1.ma.comcast.net) has joined #ceph
[1:01] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:03] * evanjfraser (~quassel@122.252.188.1) Quit (Quit: No Ping reply in 180 seconds.)
[1:16] * narthollis (~Solvius@789AAAII8.tor-irc.dnsbl.oftc.net) Quit ()
[1:16] * ItsCriminalAFK (~toast@176.10.99.208) has joined #ceph
[1:20] * ngoswami (~ngoswami@1.39.97.75) Quit (Quit: Leaving)
[1:22] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[1:26] * dopesong_ (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[1:27] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[1:29] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[1:31] * alram (~alram@206.169.83.146) Quit (Ping timeout: 480 seconds)
[1:32] * dopesong (~dopesong@lb1.mailer.data.lt) Quit (Ping timeout: 480 seconds)
[1:33] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) has joined #ceph
[1:33] * nhm (~nhm@mff2336d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[1:37] * cok (~chk@nat-cph1-sys.net.one.com) has left #ceph
[1:42] * dopesong_ (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[1:46] * ItsCriminalAFK (~toast@53IAAADEO.tor-irc.dnsbl.oftc.net) Quit ()
[1:46] * Quatroking (~isaxi@176.10.99.202) has joined #ceph
[1:55] * dopesong (~dopesong@78-56-228-178.static.zebra.lt) has joined #ceph
[1:58] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:03] * georgem (~Adium@69-196-174-91.dsl.teksavvy.com) has joined #ceph
[2:04] * evanjfraser (~quassel@122.252.188.1) has joined #ceph
[2:04] * oms101 (~oms101@p20030057EA0BFC00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:04] <cmdrk> flaf: Scientific Linux 6 with a custom build of 3.18 on the clients and the OSD servers, although I'm not sure if there's any benefit to a newer kernel on the OSDs
[2:05] <cmdrk> running firefly (0.80.9).
[2:09] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[2:09] * georgem1 (~Adium@69-196-174-91.dsl.teksavvy.com) has joined #ceph
[2:09] * georgem (~Adium@69-196-174-91.dsl.teksavvy.com) Quit (Read error: Connection reset by peer)
[2:13] * JV (~chatzilla@204.14.239.105) Quit (Ping timeout: 480 seconds)
[2:13] * oms101 (~oms101@p20030057EA0A2300EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[2:16] * Quatroking (~isaxi@7R2AAAC7J.tor-irc.dnsbl.oftc.net) Quit ()
[2:16] * Ian2128 (~skney@spftor1e1.privacyfoundation.ch) has joined #ceph
[2:19] * chutwig (~textual@pool-173-63-230-184.nwrknj.fios.verizon.net) has joined #ceph
[2:21] * tupper (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[2:22] * georgem1 (~Adium@69-196-174-91.dsl.teksavvy.com) Quit (Quit: Leaving.)
[2:24] * nhm (~nhm@184-97-175-198.mpls.qwest.net) has joined #ceph
[2:24] * ChanServ sets mode +o nhm
[2:25] * danieagle (~Daniel@177.138.223.195) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[2:28] * puffy (~puffy@216.207.42.129) Quit (Ping timeout: 480 seconds)
[2:29] * tupper (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[2:36] * nsoffer (~nsoffer@bzq-79-176-255-3.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[2:36] * derjohn_mob (~aj@tmo-113-39.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[2:37] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:38] * amote (~amote@121.244.87.116) has joined #ceph
[2:38] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) Quit (Remote host closed the connection)
[2:41] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[2:45] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has left #ceph
[2:46] * Ian2128 (~skney@0SGAAAFND.tor-irc.dnsbl.oftc.net) Quit ()
[2:46] * fauxhawk (~Kalado@94.198.98.67) has joined #ceph
[2:49] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[2:50] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[2:52] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) has joined #ceph
[2:53] * Elwell (~elwell@106-68-22-72.dyn.iinet.net.au) has joined #ceph
[2:55] * Elwell_ (~elwell@124-148-228-106.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[2:55] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) Quit (Read error: Connection reset by peer)
[2:57] * clayb (~clayb@c-98-245-91-148.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[2:58] * chutwig (~textual@pool-173-63-230-184.nwrknj.fios.verizon.net) Quit (Read error: Connection reset by peer)
[3:03] * Elwell_ (~elwell@58-7-69-115.dyn.iinet.net.au) has joined #ceph
[3:05] * Elwell (~elwell@106-68-22-72.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[3:07] * chutwig_ (~textual@pool-173-63-230-184.nwrknj.fios.verizon.net) has joined #ceph
[3:07] * chutwig_ is now known as chutwig
[3:11] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[3:16] * fauxhawk (~Kalado@7R2AAADAE.tor-irc.dnsbl.oftc.net) Quit ()
[3:16] * Jaska (~AluAlu@TerokNor.tor-exit.network) has joined #ceph
[3:25] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[3:25] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[3:26] * Elwell (~elwell@58-7-71-53.dyn.iinet.net.au) has joined #ceph
[3:28] * Elwell_ (~elwell@58-7-69-115.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[3:30] * fattaneh (~fattaneh@31.59.57.159) has joined #ceph
[3:30] * georgem (~Adium@69-196-174-91.dsl.teksavvy.com) has joined #ceph
[3:32] * fattaneh (~fattaneh@31.59.57.159) has left #ceph
[3:32] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) has joined #ceph
[3:37] <flaf> cmdrk: ok, thanks for your answer. ;)
[3:39] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[3:46] * Jaska (~AluAlu@8Q4AAAMRQ.tor-irc.dnsbl.oftc.net) Quit ()
[3:46] * mr_flea (~Chrissi_@tor-exit1.arbitrary.ch) has joined #ceph
[4:01] * jcsp1 (~Adium@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[4:03] * shohn1 (~shohn@dslb-178-008-196-072.178.008.pools.vodafone-ip.de) has joined #ceph
[4:03] * georgem (~Adium@69-196-174-91.dsl.teksavvy.com) Quit (Quit: Leaving.)
[4:04] * shohn (~shohn@dslb-178-002-076-215.178.002.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[4:05] * zhaochao (~zhaochao@124.202.190.2) has joined #ceph
[4:05] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:14] * Kupo2 (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[4:16] * mr_flea (~Chrissi_@5NZAACDDN.tor-irc.dnsbl.oftc.net) Quit ()
[4:16] * Arcturus (~oracular@exit2.telostor.ca) has joined #ceph
[4:27] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[4:36] * georgem (~Adium@69-196-174-91.dsl.teksavvy.com) has joined #ceph
[4:39] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[4:39] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[4:41] * lalatenduM (~lalatendu@122.171.102.40) has joined #ceph
[4:46] * Arcturus (~oracular@5NZAACDDY.tor-irc.dnsbl.oftc.net) Quit ()
[4:46] * Nephyrin (~tokie@89.105.194.72) has joined #ceph
[4:50] * dustinm` (~dustinm`@105.ip-167-114-152.net) Quit (Ping timeout: 480 seconds)
[5:04] * Vacuum__ (~vovo@88.130.200.0) has joined #ceph
[5:04] * dustinm` (~dustinm`@2607:5300:100:200::160d) has joined #ceph
[5:11] * Vacuum_ (~vovo@88.130.196.6) Quit (Ping timeout: 480 seconds)
[5:16] * Nephyrin (~tokie@789AAAIRK.tor-irc.dnsbl.oftc.net) Quit ()
[5:16] * airsoftglock (~redbeast1@spftor1e1.privacyfoundation.ch) has joined #ceph
[5:18] * amote (~amote@121.244.87.116) has joined #ceph
[5:22] * bandrus (~brian@36.sub-70-211-68.myvzw.com) Quit (Quit: Leaving.)
[5:23] * MentalRay (~MRay@107.171.161.165) has joined #ceph
[5:33] * puffy (~puffy@50.185.218.255) has joined #ceph
[5:36] * MentalRay (~MRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[5:39] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) has joined #ceph
[5:43] * MentalRay (~MRay@107.171.161.165) has joined #ceph
[5:43] * MentalRay (~MRay@107.171.161.165) Quit ()
[5:46] * airsoftglock (~redbeast1@789AAAISJ.tor-irc.dnsbl.oftc.net) Quit ()
[5:47] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) Quit (Ping timeout: 480 seconds)
[5:48] * lalatenduM (~lalatendu@122.171.102.40) Quit (Quit: Leaving)
[5:49] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[5:53] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) Quit (Quit: Bye)
[5:57] * chutwig (~textual@pool-173-63-230-184.nwrknj.fios.verizon.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:59] * karnan (~karnan@106.216.135.79) has joined #ceph
[6:03] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[6:14] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:15] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:16] * Arfed (~LRWerewol@tor-exit4-readme.dfri.se) has joined #ceph
[6:20] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:21] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:350d:2a96:42a:f304) has joined #ceph
[6:22] * rdas (~rdas@122.168.254.80) has joined #ceph
[6:22] * fam is now known as fam_away
[6:23] * yanzheng (~zhyan@171.216.95.141) has joined #ceph
[6:23] * fam_away is now known as fam
[6:26] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[6:26] * georgem (~Adium@69-196-174-91.dsl.teksavvy.com) Quit (Quit: Leaving.)
[6:26] * madkiss (~madkiss@vpn142.sys11.net) Quit (Ping timeout: 480 seconds)
[6:27] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[6:28] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[6:30] * deepsa (~Deependra@00013525.user.oftc.net) has joined #ceph
[6:32] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[6:33] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[6:34] * JV (~chatzilla@12.19.147.253) has joined #ceph
[6:40] * JV_ (~chatzilla@204.14.239.106) has joined #ceph
[6:45] * JV (~chatzilla@12.19.147.253) Quit (Ping timeout: 480 seconds)
[6:45] * JV_ is now known as JV
[6:46] * Arfed (~LRWerewol@7R2AAADLU.tor-irc.dnsbl.oftc.net) Quit ()
[6:46] * Architect (~rushworld@marcuse-1.nos-oignons.net) has joined #ceph
[6:48] * cholcombe (~chris@pool-108-42-124-94.snfcca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:50] * dopesong (~dopesong@78-56-228-178.static.zebra.lt) Quit (Remote host closed the connection)
[6:50] * karnan (~karnan@106.216.135.79) Quit (Ping timeout: 480 seconds)
[6:58] * karnan (~karnan@106.216.135.79) has joined #ceph
[7:08] * deepsa (~Deependra@00013525.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:16] * Architect (~rushworld@5NZAACDFU.tor-irc.dnsbl.oftc.net) Quit ()
[7:17] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:22] * fattaneh (~fattaneh@194.225.33.201) has joined #ceph
[7:22] * ircolle (~Adium@c-71-229-136-109.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[7:22] * fattaneh (~fattaneh@194.225.33.201) has left #ceph
[7:35] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[7:37] * derjohn_mob (~aj@tmo-108-150.customers.d1-online.com) has joined #ceph
[7:38] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[7:41] * puffy (~puffy@50.185.218.255) has joined #ceph
[7:42] * puffy (~puffy@50.185.218.255) Quit ()
[7:44] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[7:46] * Blueraven (~utugi____@176.10.99.205) has joined #ceph
[7:49] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[7:52] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[7:52] * jwilkins (~jwilkins@c-50-131-97-162.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[7:57] * badone_ is now known as badone
[7:58] * pvh_sa_ (~pvh@105-237-253-44.access.mtnbusiness.co.za) Quit (Ping timeout: 480 seconds)
[8:05] * puffy (~puffy@50.185.218.255) has joined #ceph
[8:08] * schamane (~schamane@barriere.frankfurter-softwarefabrik.de) has joined #ceph
[8:09] * smithfarm (~ncutler@nat1.scz.suse.com) has joined #ceph
[8:09] <schamane> hi guys, trying to install calamari, but there doesn't seem to be a build for CentOS 7
[8:09] * kefu (~kefu@114.92.102.232) has joined #ceph
[8:09] <schamane> or does anyone know a solution for Centos7?
[8:09] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[8:10] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Remote host closed the connection)
[8:11] * SamYaple_ is now known as SamYaple
[8:11] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[8:13] * karnan (~karnan@106.216.135.79) Quit (Ping timeout: 480 seconds)
[8:14] * puffy (~puffy@50.185.218.255) Quit (Ping timeout: 480 seconds)
[8:14] * rotbeard (~redbeard@x5f75137d.dyn.telefonica.de) has joined #ceph
[8:15] * JV (~chatzilla@204.14.239.106) Quit (Ping timeout: 480 seconds)
[8:16] * Blueraven (~utugi____@789AAAIYZ.tor-irc.dnsbl.oftc.net) Quit ()
[8:16] * cheese^ (~KUSmurf@7R2AAADSW.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:17] * Nacer (~Nacer@2001:41d0:fe82:7200:d0b1:4244:c299:960c) has joined #ceph
[8:19] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:22] * fxmulder (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) has joined #ceph
[8:23] * karnan (~karnan@171.76.14.103) has joined #ceph
[8:25] * fxmulder_ (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[8:33] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:38] * Nacer (~Nacer@2001:41d0:fe82:7200:d0b1:4244:c299:960c) Quit (Remote host closed the connection)
[8:38] * cephiroth (~oftc-webi@br167-098.ifremer.fr) Quit (Remote host closed the connection)
[8:39] * kefu_ (~kefu@114.86.209.84) has joined #ceph
[8:41] * kefu (~kefu@114.92.102.232) Quit (Ping timeout: 480 seconds)
[8:42] * cephiroth (~oftc-webi@br167-098.ifremer.fr) has joined #ceph
[8:46] * cheese^ (~KUSmurf@7R2AAADSW.tor-irc.dnsbl.oftc.net) Quit ()
[8:46] * Inuyasha (~PierreW@marcuse-1.nos-oignons.net) has joined #ceph
[8:48] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:49] <T1w> if I've got a lot of objects stored in Ceph's object storage - how can I take a backup of them without retrieving and handling metadata (attributes, key/value pairs etc) by hand?
[8:49] <T1w> just to be sure a catastrophic failure of the cluster won't erase all my data
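
T1w's question is about backing up RADOS objects together with their metadata. A rough per-object sketch using only the rados CLI is below; the pool name, destination path, and the assumption that object names contain no slashes are all hypothetical, and for anything large a pool snapshot or an application-level export would be more practical.

    POOL=mypool                 # hypothetical pool name
    DEST=/backup/$POOL          # hypothetical destination directory
    mkdir -p "$DEST"
    rados -p "$POOL" ls | while read -r obj; do
        rados -p "$POOL" get "$obj" "$DEST/$obj.data"             # object payload
        rados -p "$POOL" listxattr "$obj" | while read -r attr; do
            rados -p "$POOL" getxattr "$obj" "$attr" > "$DEST/$obj.xattr.$attr"
        done
        rados -p "$POOL" listomapvals "$obj" > "$DEST/$obj.omap"  # omap key/value pairs
    done
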
[8:54] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:54] <Be-El> hi
[8:55] * derjohn_mob (~aj@tmo-108-150.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[8:55] * fattaneh (~fattaneh@194.225.33.201) has joined #ceph
[8:57] <cephiroth> hi
[8:59] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:00] * derjohn_mob (~aj@tmo-108-150.customers.d1-online.com) has joined #ceph
[9:01] * overclk (~overclk@121.244.87.117) has joined #ceph
[9:02] * thomnico (~thomnico@2a01:e35:8b41:120:a082:1df3:a3e:6550) has joined #ceph
[9:02] * analbeard (~shw@support.memset.com) has joined #ceph
[9:02] * analbeard (~shw@support.memset.com) has left #ceph
[9:10] * ajazdzewski (~ajazdzews@p200300406E090700120BA9FFFE7A950C.dip0.t-ipconnect.de) has joined #ceph
[9:10] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:10] * rdas (~rdas@122.168.254.80) Quit (Read error: Connection reset by peer)
[9:12] * linjan (~linjan@195.110.41.9) has joined #ceph
[9:14] * derjohn_mob (~aj@tmo-108-150.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[9:16] * Inuyasha (~PierreW@789AAAI1Q.tor-irc.dnsbl.oftc.net) Quit ()
[9:16] * BlS (~Jamana@aurora.enn.lu) has joined #ceph
[9:20] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[9:22] * rdas (~rdas@122.168.160.218) has joined #ceph
[9:22] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:24] * kawa2014 (~kawa@212.77.3.87) has joined #ceph
[9:24] * ajazdzewski_ (~ajazdzews@p200300406E090700120BA9FFFE7A950C.dip0.t-ipconnect.de) has joined #ceph
[9:24] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[9:25] * ajazdzewski (~ajazdzews@p200300406E090700120BA9FFFE7A950C.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[9:26] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[9:28] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[9:29] * pvh_sa_ (~pvh@41.164.8.114) has joined #ceph
[9:29] * mwilcox_ (~mwilcox@116.251.192.71) has joined #ceph
[9:30] * bobrik (~bobrik@83.243.64.45) has joined #ceph
[9:30] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:30] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Remote host closed the connection)
[9:31] * fattaneh (~fattaneh@194.225.33.201) Quit (Ping timeout: 480 seconds)
[9:31] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[9:34] * fsimonce (~simon@host11-35-dynamic.32-79-r.retail.telecomitalia.it) has joined #ceph
[9:34] * rendar (~I@host207-176-dynamic.23-79-r.retail.telecomitalia.it) has joined #ceph
[9:35] * mwilcox (~mwilcox@116.251.192.71) Quit (Ping timeout: 480 seconds)
[9:36] * rdas (~rdas@122.168.160.218) Quit (Ping timeout: 480 seconds)
[9:36] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[9:39] * rdas (~rdas@122.168.178.223) has joined #ceph
[9:40] * thomnico (~thomnico@2a01:e35:8b41:120:a082:1df3:a3e:6550) Quit (Quit: Ex-Chat)
[9:40] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Read error: Connection reset by peer)
[9:40] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[9:42] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[9:44] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[9:45] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[9:46] * BlS (~Jamana@7R2AAADXP.tor-irc.dnsbl.oftc.net) Quit ()
[9:50] * floppyraid (~holoirc@202.161.23.74) has joined #ceph
[9:52] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[9:52] * mjeanson_ (~mjeanson@bell.multivax.ca) Quit (Remote host closed the connection)
[9:52] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[9:54] * dopesong (~dopesong@78-56-228-178.static.zebra.lt) has joined #ceph
[9:55] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:02] * OnTheRock (~overonthe@199.68.193.62) has joined #ceph
[10:02] * i_m (~ivan.miro@pool-109-191-92-175.is74.ru) has joined #ceph
[10:05] * Larsen (~andreas@larsen.pl) Quit (Quit: Larsen)
[10:07] * boredatwork (~overonthe@199.68.193.54) Quit (Ping timeout: 480 seconds)
[10:08] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[10:08] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[10:10] * jordanP (~jordan@213.215.2.194) has joined #ceph
[10:12] * nsoffer (~nsoffer@bzq-79-176-255-3.red.bezeqint.net) has joined #ceph
[10:13] <T1w> if I've got a lot of objects stored in Ceph's object storage - how can I take a backup of them without retrieving and handling metadata (attributes, key/value pairs etc) by hand?
[10:13] <T1w> just to be sure a catastrophic failure of the cluster won't erase all my data
[10:15] * dopesong (~dopesong@78-56-228-178.static.zebra.lt) Quit (Quit: Leaving...)
[10:18] * rdas (~rdas@122.168.178.223) Quit (Read error: Connection reset by peer)
[10:18] * kefu_ is now known as kefu
[10:19] * fattaneh (~fattaneh@194.225.33.201) has joined #ceph
[10:23] * Xiol (~Xiol@shrike.daneelwell.eu) has joined #ceph
[10:24] * analbeard (~shw@support.memset.com) has joined #ceph
[10:28] * kefu is now known as kefu|afk
[10:32] * jluis is now known as joao
[10:32] * floppyraid (~holoirc@202.161.23.74) Quit (Remote host closed the connection)
[10:32] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Ping timeout: 480 seconds)
[10:35] * ChrisNBlum (~ChrisNBlu@178.255.153.117) has joined #ceph
[10:38] * kefu|afk is now known as kefu
[10:38] * rdas (~rdas@122.168.75.126) has joined #ceph
[10:40] * brutusca_ (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[10:40] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[10:42] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[10:45] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:46] * bret1 (~Zombiekil@spftor1e1.privacyfoundation.ch) has joined #ceph
[10:46] * jeevan_ullas (~Deependra@114.143.38.200) has joined #ceph
[10:46] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[10:48] * fdmanana (~fdmanana@bl5-245-210.dsl.telepac.pt) has joined #ceph
[10:53] * brutusca_ (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[10:54] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[10:55] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[10:55] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:57] * stkim1_ (~oftc-webi@112.172.171.217) has joined #ceph
[10:58] * stkim1 (~almightyk@112.172.171.217) has joined #ceph
[10:59] * stkim1_ (~oftc-webi@112.172.171.217) Quit ()
[11:00] <stkim1> I'm wondering how I can access an object created by rados via cephfs?
[11:00] * rendar (~I@host207-176-dynamic.23-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[11:00] <stkim1> is there a way to access an object via filesystem?
[11:00] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[11:01] * sleinen (~Adium@130.59.94.119) has joined #ceph
[11:01] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[11:01] * sleinen (~Adium@130.59.94.119) Quit ()
[11:02] * floppyraid (~holoirc@202.161.23.74) has joined #ceph
[11:03] * nsoffer (~nsoffer@bzq-79-176-255-3.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[11:04] <stkim1> found an answer "You cannot write data to Ceph using RBD and access the same data via CephFS" gosh…
[11:10] <fattaneh> anybody know how i can list the files that are stored on each osd?
[11:10] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[11:12] * dustinm` (~dustinm`@2607:5300:100:200::160d) Quit (Ping timeout: 480 seconds)
[11:15] * dgurtner (~dgurtner@178.197.235.98) has joined #ceph
[11:15] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:16] * bret1 (~Zombiekil@789AAAI78.tor-irc.dnsbl.oftc.net) Quit ()
[11:17] <stkim1> fattaneh you prob want to obtain file extent first before getting list of files stored in an osd
[11:18] <stkim1> assuming you'd be using CephFS..
[11:18] <fattaneh> stkim1: my problem is that i want to put all files in each osd in a tree
[11:18] <fattaneh> stkim1: and then search in that tree
[11:19] <fattaneh> stkim1: to find files
[11:19] * rendar (~I@host143-177-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[11:19] <fattaneh> stkim1: but i looked in the current directory and i couldn't find any file there
[11:20] <fattaneh> stkim1: yes i use cephfs, and on the client i can see the files, but i need to see them on each osd
[11:20] * lalatenduM (~lalatendu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:21] * sleinen (~Adium@130.59.94.119) has joined #ceph
[11:21] <stkim1> fattaneh: I c. I'm kinda new to Ceph but I can tell you there should be something wrong with your configuration.
[11:21] * sleinen (~Adium@130.59.94.119) Quit ()
[11:22] <stkim1> fattaneh: otherwise, you should be able to see them whatever OSD you're on.
[11:22] * fvl (~fvl@ipjusup.net.tomline.ru) Quit (Remote host closed the connection)
[11:22] <fattaneh> stkim1: you mean that you can see your files there?
[11:23] <fattaneh> stkim1: in which directory are they stored?
[11:24] <stkim1> fattaneh: At least you see the root path and you can follow the sub path to find where to go. That should be a good start to debug.
[11:25] * thomnico (~thomnico@2a01:e35:8b41:120:a082:1df3:a3e:6550) has joined #ceph
[11:25] * stkim1 (~almightyk@112.172.171.217) Quit (Quit: stkim1)
[11:25] <fattaneh> stkim1: i can see all the files on the client, but i can't find them on the osds
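
Rather than digging through the OSDs' data directories, a CephFS file can be mapped to its backing RADOS objects and then to OSDs from the client side. A minimal sketch, assuming the default data pool is called "data", an example mount path, and looking only at the file's first object (larger files have further objects suffixed .00000001, .00000002, and so on):

    ino_hex=$(printf '%x' "$(stat -c %i /mnt/cephfs/somefile)")   # file's inode number in hex (path is an example)
    ceph osd map data "${ino_hex}.00000000"                       # prints the PG and the OSDs holding that object
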
[11:28] * jeevan_ullas (~Deependra@114.143.38.200) Quit (Max SendQ exceeded)
[11:28] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[11:29] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[11:31] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[11:32] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[11:32] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[11:34] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[11:37] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[11:37] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[11:39] * sleinen (~Adium@130.59.94.119) has joined #ceph
[11:40] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[11:40] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[11:40] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[11:41] <schamane> hi guys, trying to deploy giant with ceph-deploy install --release giant {node}, but it's not allowed to use --no-adjust-repos with --release, and i am behind a proxy
[11:42] <alfredodeza> schamane: you will probably need an internal mirror
[11:42] <alfredodeza> to make that work
[11:42] <schamane> when i change the release in the repo file, it doesn't work either
[11:42] <schamane> it still wants to install 0.80 (firefly)
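
One workaround in the spirit of alfredodeza's suggestion: rewrite the node's repo file to point at the wanted release (or an internal mirror reachable through the proxy), then run ceph-deploy with --no-adjust-repos so it leaves the repos alone. A sketch assuming an RPM-based node and the historical ceph.com rpm-giant repo layout (the sed pattern and paths are assumptions):

    # on each node: swap the firefly repo for giant, or point baseurl at an internal mirror
    sudo sed -i 's/rpm-firefly/rpm-giant/g' /etc/yum.repos.d/ceph.repo
    sudo yum clean all
    # from the admin host: install without letting ceph-deploy rewrite the repo files
    ceph-deploy install --no-adjust-repos {node}
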
[11:46] * dusti (~x303@ncc-1701-a.tor-exit.network) has joined #ceph
[11:47] * sleinen (~Adium@130.59.94.119) Quit (Ping timeout: 480 seconds)
[11:51] * brutusca_ (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[11:51] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[11:53] * Elwell (~elwell@58-7-71-53.dyn.iinet.net.au) Quit (Read error: Connection reset by peer)
[11:54] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[11:54] * dustinm` (~dustinm`@2607:5300:100:200::160d) has joined #ceph
[11:56] * Elwell (~elwell@106-68-29-103.dyn.iinet.net.au) has joined #ceph
[11:59] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[12:00] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[12:02] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[12:03] * brutusca_ (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:05] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[12:05] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[12:07] <loicd> fattaneh: oh, I see you already asked here ?
[12:07] <fattaneh> loicd: yes
[12:08] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[12:08] <loicd> fattaneh: in general you should not try to manually extract files from OSDs unless you're in disaster recovery mode.
[12:09] * puffy (~puffy@50.185.218.255) has joined #ceph
[12:09] <fattaneh> loicd: ok, but how can i do that?
[12:09] <fattaneh> loicd: i mean that i must do that
[12:09] * dgurtner_ (~dgurtner@178.197.231.81) has joined #ceph
[12:09] <fattaneh> loicd: i want to make a tree of them
[12:09] <loicd> you are trying to recover from a crashed cluster ?
[12:09] <fattaneh> loicd: no
[12:10] <fattaneh> loicd: i want to add files metadata to the tree(kdtree)
[12:10] * dusti (~x303@7R2AAAD8O.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[12:10] <fattaneh> then search in that tree to find files
[12:10] <loicd> what is kdtree ?
[12:11] * dgurtner (~dgurtner@178.197.235.98) Quit (Ping timeout: 480 seconds)
[12:12] <fattaneh> it's a k dimensional tree
[12:13] * sep (~sep@2a04:2740:1:0:52e5:49ff:feeb:32) has joined #ceph
[12:13] * fxmulder_ (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) has joined #ceph
[12:14] <fattaneh> my project is something similar to the spyglass project in the wafl file system
[12:14] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:15] * fxmulder (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[12:16] * darks (~yuastnav@37.187.129.166) has joined #ceph
[12:17] * puffy (~puffy@50.185.218.255) Quit (Ping timeout: 480 seconds)
[12:18] * bkopilov (~bkopilov@nat-pool-tlv-u.redhat.com) has joined #ceph
[12:18] <loicd> fattaneh: whatever you're trying to do, I'm quite sure manually exploring the files from the OSD is not what you need.
[12:19] <loicd> I don't know what a k dimensional tree, spyglass or a wafl file system are. Could you explain more about your use case ?
[12:19] * lalatenduM (~lalatendu@121.244.87.117) has left #ceph
[12:19] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[12:20] <loicd> I think we can find a solution if I understand the problem you're trying to solve ;-)
[12:24] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:25] <fattaneh> loicd: may I write up my problem in detail and send it to you later?
[12:29] <loicd> fattaneh: that would be great. Posting to the ceph-users mailing list will probably attract more comments.
[12:29] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:30] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[12:30] <loicd> (i.e. http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com/)
[12:30] <fattaneh> loicd: thanks a lot :)
[12:30] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[12:31] * karnan (~karnan@171.76.14.103) Quit (Ping timeout: 480 seconds)
[12:36] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[12:38] * mwilcox_ (~mwilcox@116.251.192.71) Quit (Ping timeout: 480 seconds)
[12:38] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[12:40] * karnan (~karnan@171.76.14.103) has joined #ceph
[12:46] * darks (~yuastnav@789AAAJC4.tor-irc.dnsbl.oftc.net) Quit ()
[12:52] * karnan (~karnan@171.76.14.103) Quit (Ping timeout: 480 seconds)
[12:56] * brutusca_ (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[12:56] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:57] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) Quit (Quit: Connection closed for inactivity)
[13:00] * kefu (~kefu@114.86.209.84) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:03] * kapil_ (~ksharma@2620:113:80c0:5::2222) Quit (Quit: Leaving)
[13:03] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[13:03] * kapil (~kapil@2620:113:80c0:5::2222) Quit (Quit: Konversation terminated!)
[13:04] * jclm1 (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Quit: Leaving.)
[13:07] * dustinm` (~dustinm`@2607:5300:100:200::160d) Quit (Ping timeout: 480 seconds)
[13:10] * kefu (~kefu@114.86.209.84) has joined #ceph
[13:12] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[13:12] * brutusca_ (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:15] * kapil (~ksharma@2620:113:80c0:5::2222) has joined #ceph
[13:15] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[13:16] * pvh_sa_ (~pvh@41.164.8.114) Quit (Ping timeout: 480 seconds)
[13:16] * BillyBobJohn (~KrimZon@exit-01d.noisetor.net) has joined #ceph
[13:16] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[13:18] * karnan (~karnan@171.76.14.103) has joined #ceph
[13:21] * overclk (~overclk@121.244.87.117) has joined #ceph
[13:21] * kefu (~kefu@114.86.209.84) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:22] * dustinm` (~dustinm`@2607:5300:100:200::160d) has joined #ceph
[13:23] * hellertime (~Adium@72.246.0.14) has joined #ceph
[13:23] * fattaneh1 (~fattaneh@194.225.33.201) has joined #ceph
[13:24] * fattaneh (~fattaneh@194.225.33.201) Quit (Read error: Connection reset by peer)
[13:29] * pvh_sa_ (~pvh@41.164.8.114) has joined #ceph
[13:32] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (Quit: ZNC - http://znc.in)
[13:33] * fdmanana (~fdmanana@bl5-245-210.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[13:34] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[13:37] * fattaneh1 (~fattaneh@194.225.33.201) Quit (Remote host closed the connection)
[13:38] * ajazdzewski_ (~ajazdzews@p200300406E090700120BA9FFFE7A950C.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[13:38] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[13:39] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[13:39] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[13:40] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[13:42] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[13:46] * BillyBobJohn (~KrimZon@53IAAAEG0.tor-irc.dnsbl.oftc.net) Quit ()
[13:47] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:47] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[13:50] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:51] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[13:53] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[13:57] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[14:01] * qstion_ (~qstion@37.157.144.44) has joined #ceph
[14:03] <joelm> hammer just been pushed as stable then?
[14:03] <joelm> rolled out one box - was 0.87.2
[14:03] * georgem (~Adium@184.151.178.173) has joined #ceph
[14:03] * georgem (~Adium@184.151.178.173) Quit ()
[14:03] <joelm> 10 mins later ceph-deploy now on 0.94
[14:03] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:03] <joelm> hope this works, got *really* burnt by hammer last time
[14:04] * hellerbarde2 (~quassel@sos-nw-client-1-35.ethz.ch) has joined #ceph
[14:04] <hellerbarde2> Hi everyone! Is there a way to add a journal to an OSD after it has been prepared and activated?
[14:05] <hellerbarde2> (oh, and I'm still on firefly, unfortunately)
[14:06] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[14:06] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[14:07] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[14:08] * treenerd (~treenerd@85.193.140.98) Quit (Ping timeout: 480 seconds)
[14:09] <Kingrat> hellerbarde2, i believe you would have to blow it away and recreate it
[14:10] <Kingrat> and i believe that because you can not replace a failed journal drive without recreating the osds using it
[14:10] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:12] <hellerbarde2> Kingrat: what about http://www.ceph.com/docs/master/man/8/ceph-osd/#cmdoption-ceph-osd--mkjournal ?
[14:12] * hellerbarde2 (~quassel@sos-nw-client-1-35.ethz.ch) Quit (Remote host closed the connection)
[14:12] * hellerbarde2 (~quassel@sos-nw-client-1-35.ethz.ch) has joined #ceph
[14:12] <hellerbarde2> woops. client segfaulted...
[14:14] * kefu (~kefu@114.86.209.84) has joined #ceph
[14:16] <Kingrat> well, it is still there in http://www.ceph.com/docs/firefly/man/8/ceph-osd/#cmdoption-ceph-osd--mkjournal but i dont know, ive never done it personally so maybe someone else will chime in
[14:16] * kefu (~kefu@114.86.209.84) Quit ()
[14:16] * Fapiko (~WedTM@178-175-128-50.ip.as43289.net) has joined #ceph
[14:16] <hellerbarde2> Kingrat: ok, thx. And thanks for the reminder that I was in the wrong docs! :)
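
For reference, the usual sequence for (re)creating an OSD journal on a new device without rebuilding the OSD looks roughly like the following. It is only a sketch: osd.3 and /dev/sdg1 are made-up examples, the service command assumes a sysvinit-style firefly install, and the cluster needs to tolerate the OSD being down for the duration.

    ID=3                                                   # example OSD id
    ceph osd set noout                                     # avoid rebalancing while it is down
    service ceph stop osd.$ID
    ceph-osd -i $ID --flush-journal                        # drain the old journal into the data store
    ln -sf /dev/sdg1 /var/lib/ceph/osd/ceph-$ID/journal    # point at the new journal partition
    ceph-osd -i $ID --mkjournal                            # initialize the new journal
    service ceph start osd.$ID
    ceph osd unset noout
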
[14:20] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[14:20] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[14:20] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[14:20] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[14:22] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) has joined #ceph
[14:27] * karnan (~karnan@171.76.14.103) Quit (Ping timeout: 480 seconds)
[14:29] * fdmanana (~fdmanana@bl5-245-210.dsl.telepac.pt) has joined #ceph
[14:33] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[14:34] * bkopilov (~bkopilov@nat-pool-tlv-u.redhat.com) Quit (Ping timeout: 480 seconds)
[14:36] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[14:36] * karnan (~karnan@106.51.241.21) has joined #ceph
[14:46] * Fapiko (~WedTM@7R2AAAEIU.tor-irc.dnsbl.oftc.net) Quit ()
[14:46] * PcJamesy (~Enikma@spftor1e1.privacyfoundation.ch) has joined #ceph
[14:51] * Elwell_ (~elwell@106-68-0-209.dyn.iinet.net.au) has joined #ceph
[14:51] * ajazdzewski_ (~ajazdzews@p4FC8F3D6.dip0.t-ipconnect.de) has joined #ceph
[14:52] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[14:53] * Elwell (~elwell@106-68-29-103.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[14:55] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[14:57] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[14:57] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:58] <joelm> hmm, all my rbd backup stuff going a bit awry now, Exporting image: 10% complete...rbd: error writing to destination image at offset 4194304
[14:58] <joelm> Exporting image: 20% complete...rbd: error writing to destination image at offset 8388608
[14:58] <joelm> etc
[15:00] * dyasny (~dyasny@198.251.61.137) has joined #ceph
[15:01] * cyberhide (~oftc-webi@c-73-189-169-128.hsd1.ca.comcast.net) has joined #ceph
[15:02] <joelm> rbd export one/$dev - | dd of=/backup/one/images/$dev.raw bs=64k
[15:02] <joelm> now failing for me
[15:02] * nhm (~nhm@184-97-175-198.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[15:02] <joelm> the dd is there as it allows for inserting pigz/pbzip2 etc
[15:03] <joelm> ahh perms
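
joelm's pipeline above streams the export straight into a raw file; dropping a compressor into the middle, as he describes, would look something like this (pool, image variable and paths are taken from his example, and pigz is interchangeable with pbzip2):

    # stream an RBD image out of the cluster, compressing on the fly
    rbd export one/$dev - | pigz -c | dd of=/backup/one/images/$dev.raw.gz bs=64k
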
[15:03] <cyberhide> hello folks. hope someone can help me out there. I have a 3 node ceph cluster that I am expanding to 6 nodes. And after applying a new crush map the osds continually get marked down/out and won't stay in the cluster. It's showing about 60% of the pgs being degraded (among other states of stuck/unclean/etc).
[15:04] <peeejayz> do they automatically come back in after a while?
[15:04] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[15:04] <cyberhide> eventually.. but then they fall back out again
[15:05] <peeejayz> I had the same problem when adding 4 new nodes.
[15:05] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[15:05] <joelm> cool, works now :)
[15:05] <peeejayz> I found disabling the firewall helped.
[15:05] <peeejayz> i don't know why. But I think it's the private network getting 'raped'
[15:05] <cyberhide> the firewall on these nodes are completely open
[15:06] <peeejayz> what privatenetwork do you have 1gb?
[15:06] <cyberhide> 10gb
[15:07] <Svedrin> on http://ceph.com/docs/master/release-notes/ there's a typo in "enable experimental unrecoverable data corrupting featuers" (should be featuREs)...
[15:07] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:08] * Elwell (~elwell@106-68-30-233.dyn.iinet.net.au) has joined #ceph
[15:08] <cyberhide> i just did a "ufw disable" to see if that may help
[15:09] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[15:09] <cyberhide> but load is low (cpu wise), but the boxes seem sluggish.. about 30% memory usage, as well
[15:10] * Elwell_ (~elwell@106-68-0-209.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:10] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[15:10] <peeejayz> sounds just like mine was, what does iotop say?
[15:11] <peeejayz> also if you look on the new nodes you have added and run df -h, the disks should start showing data going into them.
[15:12] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[15:12] <cyberhide> iotop doesn't show a huge amount of io flowing through.. total box io is 100k/sec for most of them
[15:13] <cyberhide> i do see data hitting the disks.. but they are about 5% of where they should be
[15:13] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:13] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:14] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[15:14] * kefu (~kefu@114.86.209.84) has joined #ceph
[15:15] <peeejayz> give it time, that will soon sort itself out
[15:15] <peeejayz> have you looked at network traffic?
[15:15] <cyberhide> i did do a noout, nodown just to keep things from flapping
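
The flags cyberhide mentions are cluster-wide and set on the monitors; for reference, and remembering to clear them once things settle, since nodown in particular hides genuinely dead OSDs:

    ceph osd set noout      # down OSDs are not marked out, so no data migration starts
    ceph osd set nodown     # failure reports are ignored, OSDs stay marked up
    # once the cluster has settled again:
    ceph osd unset nodown
    ceph osd unset noout
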
[15:16] * PcJamesy (~Enikma@7R2AAAEKV.tor-irc.dnsbl.oftc.net) Quit ()
[15:16] * Quatroking (~PappI@manning2.torservers.net) has joined #ceph
[15:17] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[15:17] <cyberhide> iftop shows the boxes doing around 100k/sec, each, on average
[15:17] <peeejayz> are they still going in and out?
[15:18] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:18] * thomnico (~thomnico@2a01:e35:8b41:120:a082:1df3:a3e:6550) Quit (Quit: Ex-Chat)
[15:18] <joelm> I think you may have other issues, shouldn't take an age to gain quorum
[15:18] <joelm> maybe post your maps
[15:18] <joelm> and how did you add the other hosts? via what tooling
[15:19] <cyberhide> i just created a new map and setcrushmap
[15:19] <joelm> not using ceph-deploy?
[15:19] * rdas (~rdas@122.168.75.126) Quit (Quit: Leaving)
[15:19] <cyberhide> i tested it before hand with crushtool
[15:19] <cyberhide> no.. doing stuff without ceph-deploy
[15:19] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[15:20] <joelm> ah, ok, good luck then :)
[15:20] <peeejayz> pastebin your crush map please
[15:20] <cyberhide> the disks were already in the cluster, just not part of any ruleset
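
The decompile/edit/test/inject loop cyberhide describes is roughly the following; the rule number and replica count in the test step are just example values:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt: add the new hosts and OSDs to the right buckets and rulesets
    crushtool -c crush.txt -o crush.new
    crushtool -i crush.new --test --show-statistics --rule 0 --num-rep 3   # sanity-check the mappings
    ceph osd setcrushmap -i crush.new
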
[15:20] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[15:20] <joelm> and you've set all the auth etc up?
[15:21] <joelm> what do the osd logs say etc
[15:21] <cyberhide> yes.. ceph auth list shows the same as the original 3 nodes for permissions.. and that 3 node was running fine for 4 months
[15:21] <cyberhide> http://pastebin.com/Eu5EgcDH
[15:22] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[15:22] <joelm> cyberhide: you've just added 3 other nodes though?
[15:22] <joelm> should have all the new stuff listed
[15:22] <joelm> ceph-deploy does this for you of course; when doing it manually there's much more to contend with - especially scaling
[15:22] <cyberhide> the new nodes are listed in the ceph crush map
[15:23] <joelm> yes, but that's not auth ;)
[15:23] <cyberhide> yes.. i checked ceph auth list and see all the disks
[15:24] <cyberhide> i had 3 nodes, I was adding 3 new nodes to the crush map when this happened
[15:24] <cyberhide> the 3 nodes were part of the cluster, and I could see them in ceph osd tree. but I had not used them in any ruleset
[15:24] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[15:24] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:25] <joelm> well, what do the osd logs say?
[15:25] <joelm> you should have logging I assume?
[15:25] <joelm> if you want to work out why they're flapping, I'd start there
[15:26] <cyberhide> yes.. and the logs for the osds are saying a bunch of different things. some say "transitioning to Stray" and walking through pg's
[15:27] * Elwell_ (~elwell@203-59-158-233.dyn.iinet.net.au) has joined #ceph
[15:27] <cyberhide> some say connect claims to be X.X.X.X not X.X.X.X - wrong node
[15:27] <joelm> yea, never seen that I'm afraid
[15:27] <joelm> oh, the latter I have
[15:27] <joelm> what are your public/cluster network definitions?
[15:27] <joelm> public/cluster
[15:28] <cyberhide> public network = 10.0.224.0/21 cluster network = 10.0.216.0/22
[15:28] <cyberhide> separate blocks, separate networks (vlans)
[15:28] <joelm> ok, cool
[15:28] <joelm> they all viewable network wise?
[15:28] <joelm> (on your new hosts)
[15:29] * Elwell (~elwell@106-68-30-233.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:30] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[15:30] * vbellur (~vijay@121.244.87.124) has joined #ceph
[15:30] <cyberhide> yep.. i can ping <hostname> of the other nodes, as well as ips
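
Those two subnets correspond to [global] settings that every daemon reads; "wrong node" messages generally mean a peer identified itself with an address other than the one expected from the map, so it is worth confirming that every host carries the same definitions and can actually reach both VLANs. The fragment below simply restates cyberhide's values:

    # /etc/ceph/ceph.conf, identical on all nodes
    [global]
        public network  = 10.0.224.0/21
        cluster network = 10.0.216.0/22
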
[15:30] * mwilcox_ (~mwilcox@116.251.192.71) has joined #ceph
[15:34] <cyberhide> I'm not really sure why they can't seem to stabilize talking to each other. They seem to intermittently talk to each other, but then they occasionally restart.. some showing "failed assert" errors, which I think is what causes them to drop out
[15:35] <cyberhide> failed assert, suicide timeout
[15:37] <cyberhide> but I don't see anything related to that for any recent version of ceph. I am running 0.80.9-1trusty
[15:38] * mwilcox_ (~mwilcox@116.251.192.71) Quit (Ping timeout: 480 seconds)
[15:39] <joelm> failed assert could be related to the journal but not sure tbh
[15:40] * kefu (~kefu@114.86.209.84) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:41] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[15:41] <cyberhide> my journals are sitting on ssds.. not sure how I can check on that
[15:42] * zhaochao (~zhaochao@124.202.190.2) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 31.6.0/20150331233809])
[15:44] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[15:44] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[15:44] * dyasny (~dyasny@198.251.61.137) Quit (Ping timeout: 480 seconds)
[15:45] * sleinen (~Adium@130.59.94.119) has joined #ceph
[15:46] * Quatroking (~PappI@0SGAAAGDY.tor-irc.dnsbl.oftc.net) Quit ()
[15:46] * nhm (~nhm@172.56.3.98) has joined #ceph
[15:46] * ChanServ sets mode +o nhm
[15:46] * Oddtwang (~raindog@hessel2.torservers.net) has joined #ceph
[15:46] * sleinen1 (~Adium@2001:620:0:82::107) has joined #ceph
[15:47] <cyberhide> trying to figure out next options... im trying to tweak network tunables via sysctl, see if that helps
[15:47] <joelm> I really don't think that will be the problem, but go for it
[15:50] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[15:53] <cyberhide> im grabbing at straws.. heh
[15:53] * sleinen (~Adium@130.59.94.119) Quit (Ping timeout: 480 seconds)
[15:53] * loganb (~loganb@vpngac.ccur.com) has joined #ceph
[15:54] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:54] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[15:55] <loganb> anyone in here have experience installing calamari on centos7?
[15:56] <loganb> I am not able to add my cluster to calamari. Calamari only shows that I have 3 clusters waiting to be attached, but won't let me add them
[15:57] <loganb> there are bug reports that describe a problem similar to mine, I have tried the steps listed in them, it did not help. The reports are a year old as well.
[15:57] <loganb> I really feel like it is some small config somewhere that I am missing.
[15:57] <loganb> I have 3 servers, node{1..3} and I am running calamari on node1
[15:58] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:59] * dgurtner (~dgurtner@178.197.235.98) has joined #ceph
[16:00] * bene2 (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[16:01] * dgurtner_ (~dgurtner@178.197.231.81) Quit (Ping timeout: 480 seconds)
[16:02] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[16:02] * hellerbarde2 (~quassel@sos-nw-client-1-35.ethz.ch) Quit (Remote host closed the connection)
[16:03] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:03] * dustinm` (~dustinm`@2607:5300:100:200::160d) Quit (Ping timeout: 480 seconds)
[16:04] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[16:04] * yanzheng (~zhyan@171.216.95.141) Quit (Read error: Connection reset by peer)
[16:05] * dgurtner (~dgurtner@178.197.235.98) Quit (Read error: No route to host)
[16:07] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:08] * kefu (~kefu@114.86.209.84) has joined #ceph
[16:09] * dgurtner (~dgurtner@178.197.235.98) has joined #ceph
[16:09] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[16:10] * dustinm` (~dustinm`@105.ip-167-114-152.net) has joined #ceph
[16:12] <m0zes> so, when creating a new cephfs filesystem, I apparently missed the mds parameter 'mds max file size' (which is set to 1TB by default). That is simply not large enough for us. according to the documentation, it is only used when creating a new pool. Is there any way to change it for a running cluster?
[16:12] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[16:14] * dneary (~dneary@66.201.52.99) has joined #ceph
[16:14] * treenerd (~treenerd@85.193.140.98) Quit (Ping timeout: 480 seconds)
[16:14] * pvh_sa_ (~pvh@41.164.8.114) Quit (Ping timeout: 480 seconds)
[16:15] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[16:15] <gregsfortytwo> m0zes: oooh, I was about to disappoint you, but there's a monitor command
[16:16] <gregsfortytwo> "ceph mds set max_file_size <size>"
[16:16] <gregsfortytwo> I think that's in bytes
[16:16] * Oddtwang (~raindog@0SGAAAGFI.tor-irc.dnsbl.oftc.net) Quit ()
[16:16] <m0zes> awesome. I must have missed that when just looking at the help output.
[16:16] <m0zes> thanks, gregsfortytwo
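
The value is in bytes (the 1TB default corresponds to 1099511627776), so raising it to, say, 4 TiB for a workload like m0zes's would look like this (the size is just an example):

    ceph mds set max_file_size $((4 * 1024 ** 4))   # 4 TiB, expressed in bytes
    ceph mds dump | grep max_file_size              # confirm the new limit took effect
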
[16:18] * clayb (~clayb@67-40-154-97.hlrn.qwest.net) has joined #ceph
[16:18] <m0zes> we work with genomics data, and some of our genome files are ~2TiB already. I can only imagine the file sizes getting larger from there.
[16:19] * ajazdzewski_ (~ajazdzews@p4FC8F3D6.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[16:19] <m0zes> you would think that the tools would have a way to split them up into smaller files (or to read/write gzipped files) but no, that would make too much sense
[16:20] <gregsfortytwo> I guess you never put anything on your desktop then :)
[16:20] <todin> I read a lot about the newstore on the ml, how do I activate the newstore to test it?
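
For todin's question: newstore is an experimental backend that has to be explicitly enabled in ceph.conf before the OSDs are created, roughly as sketched below on a throwaway test cluster. Both option values are assumptions to verify against the current master docs; the long flag name is the one quoted from the release notes earlier in this log.

    # test clusters only; this backend can and will eat data
    [osd]
        enable experimental unrecoverable data corrupting features = newstore
        osd objectstore = newstore
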
[16:22] * dyasny (~dyasny@mtl-pppoe-adsl2518.securenet.net) has joined #ceph
[16:22] * Manshoon_ (~Manshoon@208.184.50.130) has joined #ceph
[16:22] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:25] * raif (~holoirc@202.161.23.74) has joined #ceph
[16:25] * floppyraid (~holoirc@202.161.23.74) Quit (Read error: Connection reset by peer)
[16:25] * karnan (~karnan@106.51.241.21) Quit (Ping timeout: 480 seconds)
[16:26] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[16:28] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:28] <jcsp1> hmm, maybe the 1TB file size limit default is a bit out of date (ext4 is 16TB these days)
[16:29] <gregsfortytwo> jcsp1: it's all about the cost of probing when things go wrong; bumping up that limit means a file recovery can take foreeeeeeever
[16:29] * Manshoon (~Manshoon@208.184.50.131) Quit (Ping timeout: 480 seconds)
[16:31] <m0zes> honestly, 1TiB files are pretty extreme in most situations.
[16:31] <jcsp1> yeah, we need a limit, the default value of it is kind of a "finger in the air" thing though
[16:31] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[16:32] <jcsp1> same as ext4 would be a bit less arbitrary than 1TB, but then hopefully most people with super-big files are technical enough to tweak the limit so maybe it doesn't matter
[16:32] * Manshoon (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[16:32] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[16:32] <m0zes> if there is a simple and painless way to increase it on the fly, which there is, then it shouldn't matter what the default is. perhaps making sure it is an obvious limit that can be changed on the fly might be nice.
[16:32] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[16:33] <jcsp1> we could warn if a file is getting close to the limit, so that someone has a chance to increase it, but that's probably overkill
[16:33] <m0zes> obviously we need the default to be the xfs limit. 18 exabytes.
[16:33] <jcsp1> hah
[16:33] <jcsp1> fair point!
[16:34] * Larsen (~andreas@larsen.pl) has joined #ceph
[16:35] * karnan (~karnan@171.76.52.29) has joined #ceph
[16:35] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[16:35] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[16:38] * Manshoon_ (~Manshoon@208.184.50.130) Quit (Ping timeout: 480 seconds)
[16:40] * tw0fish (~twofish@UNIX5.ANDREW.CMU.EDU) has joined #ceph
[16:40] * dgurtner (~dgurtner@178.197.235.98) Quit (Read error: Connection reset by peer)
[16:40] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[16:41] <tw0fish> has anyone here had any success with the ceph repos and getting things running with ceph-deploy on RHEL 7?
[16:41] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:42] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:43] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[16:43] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[16:44] * treenerd (~treenerd@85.193.140.98) Quit (Ping timeout: 480 seconds)
[16:44] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[16:45] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:46] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[16:50] * Sun7zu (~Wijk@bakunin.gtor.org) has joined #ceph
[16:52] <tw0fish> I am finding ceph-deploy trying to install RPMs that aren't even in the repos (ceph-mon and ceph-osd).
[16:58] * dneary (~dneary@66.201.52.99) Quit (Ping timeout: 480 seconds)
[16:59] * cdelatte (~cdelatte@2606:a000:6e63:4c00:fcaa:83b2:9be6:8591) has joined #ceph
[16:59] * ircolle (~Adium@2601:1:a580:1735:80f7:499a:4cdb:8288) has joined #ceph
[17:01] * cdelatte (~cdelatte@2606:a000:6e63:4c00:fcaa:83b2:9be6:8591) Quit ()
[17:01] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:02] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:02] <joelm> tw0fish: you need EPEL non?
[17:03] * thomnico (~thomnico@smb-alys-01.wifihubtelecom.net) has joined #ceph
[17:03] * Manshoon (~Manshoon@199.16.199.4) Quit (Read error: Connection reset by peer)
[17:04] <tw0fish> I have EPEL enabled, i.e. - 'rhel-7-server-optional-rpms'
[17:04] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[17:04] <joelm> tw0fish: and you followed - http://docs.ceph.com/docs/master/install/get-packages/
[17:04] <joelm> see the extra source?
[17:04] <tw0fish> joelm: yes i followed all of the instruction from there
[17:04] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[17:05] * dgurtner (~dgurtner@178.197.235.98) has joined #ceph
[17:05] <tw0fish> joelm: if i do go to http://ceph.com/packages/ceph-extras/rpm/ and look around, i do not see a directory for RHEL 7 there.
[17:05] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[17:06] <joelm> sorry, I'm not a RHEL user
[17:06] <tw0fish> so, it would seem that you either don't need the ceph-extras for RHEL 7 or they simply haven't come out yet.
[17:06] <tw0fish> Looking at the ceph documentation, however, they state they are supporting RHEL 7.
[17:06] <joelm> so can't say - knowing that Ceph/Inktank is owned by Red Hat, I'd say they will work
[17:06] <joelm> otherwise there would be real problems ;)
[17:06] <tw0fish> But it's all kinds of RPM depsolving mess, with ceph-deploy trying to install things that aren't even there.
[17:07] <joelm> dunno, I'm a debian user
[17:07] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:07] <tw0fish> joelm: @Ceph being owned by Redhat -- haha yeah, you would think things should just work. ;)
[17:08] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[17:10] * cholcombe (~chris@pool-108-42-124-94.snfcca.fios.verizon.net) has joined #ceph
[17:10] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[17:15] <m0zes> we use rhel7
[17:15] <m0zes> s/rhel/centos/
[17:18] * JV (~chatzilla@204.14.239.106) has joined #ceph
[17:20] * Sun7zu (~Wijk@53IAAAEVR.tor-irc.dnsbl.oftc.net) Quit ()
[17:20] * Bored (~straterra@176.10.99.202) has joined #ceph
[17:24] * Manshoon (~Manshoon@199.16.199.4) Quit (Remote host closed the connection)
[17:24] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[17:24] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[17:24] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[17:26] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:26] * Manshoon (~Manshoon@199.16.199.4) Quit (Remote host closed the connection)
[17:27] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[17:27] * Manshoon (~Manshoon@199.16.199.4) Quit (Remote host closed the connection)
[17:28] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[17:29] * bene2 (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[17:29] * Manshoon_ (~Manshoon@208.184.50.130) has joined #ceph
[17:32] <tw0fish> m0zes: did you use ceph-deploy to get things running on centos 7?
[17:34] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[17:34] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[17:36] <m0zes> tw0fish: https://dpaste.de/kqAn
[17:36] * Manshoon (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[17:37] <m0zes> basically what we did. then did ceph-deploy iirc
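In case the dpaste link above has expired: the el7 setup from the get-packages page of that era boiled down to a ceph.repo roughly like the sketch below, plus EPEL and the yum priorities plugin. The exact baseurl and gpgkey paths are recalled from memory and may have moved, so treat them as assumptions and check the linked docs:

    # /etc/yum.repos.d/ceph.repo (rough sketch for hammer on el7)
    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://ceph.com/rpm-hammer/el7/$basearch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://ceph.com/rpm-hammer/el7/noarch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc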
[17:37] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[17:38] <tw0fish> okay, yeah i just don't see ceph-mon and ceph-osd RPMs in those repos and those are 2 RPMs ceph-deploy is trying to install
[17:39] * shakamunyi (~shakamuny@wbucrp-gdm0a-as.bsc.disney.com) has joined #ceph
[17:41] <alfredodeza> tw0fish: m0zes: ceph-deploy is correctly trying to install *rhel* packages. It is not sufficient to have a RH box, you need entitlements/subscription
[17:41] <alfredodeza> if you do not want rhel packages then you need to specify a release
[17:41] * thomnico (~thomnico@smb-alys-01.wifihubtelecom.net) Quit (Quit: Ex-Chat)
[17:41] <m0zes> https://dpaste.de/zBTE
[17:42] <m0zes> there's the yum provides for us.
[17:42] <alfredodeza> I am sure that you would be able to go around this problem by doing `ceph-deploy install --release=hammer {nodes}`
[17:43] <joelm> alfredodeza: I ran an install today, this morning it was giant, seemingly 10 min later it was hammer - with the same invocation (not setting the release specifically)
[17:43] <alfredodeza> I can assure you there was no change in ceph-deploy today joelm
[17:43] <alfredodeza> you probably used a different version
[17:44] <joelm> no, it's all the same admin node :)
[17:44] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[17:44] <joelm> seriously, 0.87 went on, then I ran it against another host and it installed 0.94
[17:44] <alfredodeza> tw0fish: m0zes custom repos can be used by default as well from the config. This is explained in detail here: http://ceph.com/ceph-deploy/docs/conf.html#ceph-deploy-configuration
[17:44] <joelm> wonder if there's a release file that it's reading
[17:44] <alfredodeza> joelm: nothing changes that programmatically
[17:44] <joelm> well, not dreaming it :)
[17:44] <alfredodeza> it is hard coded unless it is specified
[17:45] <alfredodeza> I can assure you it is hard coded
[17:45] <alfredodeza> unless it is specified in cephdeploy.conf
[17:45] <joelm> ok, I understand, what I'm saying is that within the same pass, same admin host, it changed :(
[17:45] <alfredodeza> you can alter ceph-deploy behavior in cephdeploy.conf too
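To make that concrete: the release can be pinned per invocation, and the conf docs linked above describe custom repo sections that ceph-deploy will prefer over its built-in default, which avoids the "stable quietly moved to hammer" surprise discussed here. A rough sketch; the repo field names are recalled from that docs page and mirror.example.com is a placeholder:

    # pin the release explicitly rather than relying on ceph-deploy's idea of "stable"
    ceph-deploy install --release giant node1 node2

    # cephdeploy.conf: a custom repo marked as the default for installs
    [myrepo]
    name = internal ceph mirror
    baseurl = http://mirror.example.com/ceph/rpm-giant/el7/$basearch
    gpgkey = http://mirror.example.com/ceph/release.asc
    default = true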
[17:45] <joelm> I never set a release specifically
[17:46] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[17:48] <joelm> Where does it find this field? " Installing stable version hammer"
[17:48] <tw0fish> m0zes: thank you for that paste, it is helpful. It looks like 'ceph' provides 'ceph-osd' now
[17:48] <tw0fish> It would seem at one time it did not
[17:48] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[17:49] <alfredodeza> joelm: it is hard coded
[17:49] <m0zes> it could be that in the distro repos they split it into separate packages, tw0fish
[17:49] <m0zes> they can be very picky about how things are packaged.
[17:49] <alfredodeza> m0zes: it got split for RH
[17:49] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:49] <joelm> alfredodeza: well that's strange then as the admin host was the last to update to hammer - it was *certainly* running giant
[17:50] <joelm> but evidently pushed hammer on the new install
[17:50] <joelm> was reticent to update to hammer due to issues we faced with it the past few months
[17:50] * Bored (~straterra@53IAAAEYL.tor-irc.dnsbl.oftc.net) Quit ()
[17:50] * qable (~LorenXo@789AAAJT3.tor-irc.dnsbl.oftc.net) has joined #ceph
[17:51] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:51] <joelm> if you are certain it's hardcoded then I'm at a bit of a loss as to how that can happen
[17:52] <tw0fish> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y install ceph ceph-mon ceph-osd
[17:52] <tw0fish> ceph-deploy is now failing because ceph-mon and ceph-osd are trying to be installed, however.
[17:52] <tw0fish> this is helpful, at least i know all i need to get installed is 'ceph' itself.
[17:53] <tw0fish> it would seem the ceph-deploy script needs to be updated so it isn't trying to install RPMs that are no longer available
[17:53] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[17:54] <joelm> heh https://github.com/ceph/ceph-deploy/blob/master/ceph_deploy/install.py#L18-L19
[17:57] * cdelatte (~cdelatte@174.96.107.132) has joined #ceph
[17:57] <joelm> right, found why
[17:57] <joelm> Trusty mainline has taken precedence over giant ceph-deploy
[17:57] <joelm> meaning that bumps the stable release to hammer
[17:58] * cdelatte (~cdelatte@174.96.107.132) Quit ()
[17:58] <joelm> might want to check that, going to potentially cause a bonfire
[17:58] * cdelatte (~cdelatte@2606:a000:6e63:4c00:fcaa:83b2:9be6:8591) has joined #ceph
[17:58] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:58] <joelm> 2015-05-05 11:03:02 upgrade ceph-common:amd64 0.87.1-1trusty 0.87.2-1trusty
[17:58] <joelm> grrrrrr
[17:59] * dyasny (~dyasny@mtl-pppoe-adsl2518.securenet.net) Quit (Read error: Connection reset by peer)
[18:02] <joelm> 2015-05-05 11:03:10 upgrade ceph-deploy:all 1.5.22trusty 1.5.23trusty
[18:02] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[18:02] <joelm> ^^^^ this is a problem
[18:02] * mykola (~Mikolaj@91.225.201.211) has joined #ceph
[18:02] <joelm> for users running giant
[18:03] * bandrus (~brian@36.sub-70-211-68.myvzw.com) has joined #ceph
[18:03] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[18:05] * jclm (~jclm@192.16.26.2) has joined #ceph
[18:06] * puffy (~puffy@50.185.218.255) has joined #ceph
[18:07] * rotbeard (~redbeard@x5f75137d.dyn.telefonica.de) Quit (Quit: Leaving)
[18:07] * dgurtner (~dgurtner@178.197.235.98) Quit (Read error: Connection reset by peer)
[18:07] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) has joined #ceph
[18:11] * lovejoy (~lovejoy@213.83.69.6) has joined #ceph
[18:11] * lovejoy (~lovejoy@213.83.69.6) Quit ()
[18:12] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[18:13] * kefu (~kefu@114.86.209.84) Quit (Max SendQ exceeded)
[18:14] * kefu (~kefu@114.86.209.84) has joined #ceph
[18:15] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[18:16] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[18:18] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has left #ceph
[18:19] * pdar (~patrickda@access.ducie-dc1.codethink.co.uk) has joined #ceph
[18:20] * qable (~LorenXo@789AAAJT3.tor-irc.dnsbl.oftc.net) Quit ()
[18:21] * kawa2014 (~kawa@212.77.3.87) Quit (Quit: Leaving)
[18:21] * daniel2_ (~daniel@209.163.140.194) has joined #ceph
[18:22] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[18:23] * MACscr1 (~Adium@2601:d:c800:de3:55d:2917:7a77:e6b) Quit (Quit: Leaving.)
[18:24] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[18:24] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[18:25] * MACscr (~Adium@c-98-214-160-70.hsd1.il.comcast.net) has joined #ceph
[18:26] * scuttlemonkey is now known as scuttle|afk
[18:26] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:29] * kefu (~kefu@114.86.209.84) Quit (Max SendQ exceeded)
[18:29] * kefu (~kefu@114.86.209.84) has joined #ceph
[18:30] * vbellur (~vijay@122.167.250.154) has joined #ceph
[18:31] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[18:33] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:34] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:34] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[18:40] * MACscr (~Adium@c-98-214-160-70.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[18:41] * fsckstix (~stevpem@202.161.23.74) Quit (Ping timeout: 480 seconds)
[18:42] * kawa2014 (~kawa@212.77.30.29) has joined #ceph
[18:42] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[18:42] * alram (~alram@206.169.83.146) has joined #ceph
[18:46] * alram (~alram@206.169.83.146) Quit ()
[18:46] * alram (~alram@206.169.83.146) has joined #ceph
[18:47] * jordanP (~jordan@213.215.2.194) Quit (Quit: Leaving)
[18:50] * karnan (~karnan@171.76.52.29) Quit (Ping timeout: 480 seconds)
[18:50] <tw0fish> m0zes: thanks again for the paste. i had forgotten the 'check_obsoletes=1'.. now all is working.
[18:50] * JWilbur (~kalleeen@79.98.107.90) has joined #ceph
[18:51] <tw0fish> the script still tries to install ceph-osd and ceph-mon , but it is not a show stopper.
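For anyone else hitting the same yum depsolving mess on el7: the check_obsoletes=1 that fixed it here is a yum priorities-plugin setting, which lets the ceph.com 'ceph' package (which now provides ceph-mon/ceph-osd) win over the older split package names. A minimal sketch of where it normally lives:

    # requires yum-plugin-priorities to be installed
    # /etc/yum/pluginconf.d/priorities.conf
    [main]
    enabled = 1
    check_obsoletes = 1

    # then re-run the install
    ceph-deploy install --release hammer <node>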
[18:52] * kawa2014 (~kawa@212.77.30.29) Quit (Quit: Leaving)
[18:52] * pvh_sa_ (~pvh@105-237-253-44.access.mtnbusiness.co.za) has joined #ceph
[18:53] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[18:58] * brutusca_ (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[18:58] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[18:58] <steveeJ> I'm wondering how difficult it is to create a ceph cluster with dynamic IPs. has anyone tried that?
[19:02] <steveeJ> this looks interesting: http://ceph.com/community/blog/tag/aws/
[19:02] * karnan (~karnan@106.51.243.12) has joined #ceph
[19:05] <cetex> so.. updated data.. we're pushing 200MB/s over 40k files per minute. (kinda 666.66666666 files per second)
[19:06] <cetex> is it possible to handle this with ~150-200 4TB 7.2k rpm drives?
[19:06] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[19:07] <cetex> so.. 12GB/minute over 40k files.
[19:07] <cetex> or the other way around..
[19:07] <cetex> 3.3MB per file on average.
[19:08] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[19:09] <cyberhide> that would be quite a bit of IO.. 3 files/per second/per drive, basically
[19:09] <cyberhide> or 3 objects i should say :)
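For anyone following the sizing numbers, the rough arithmetic behind these estimates (ignoring replication and journal double-writes, which make the per-drive load worse):

    200 MB/s * 60 s            = ~12,000 MB written per minute
    12,000 MB / 40,000 files   = ~0.3 MB per file on average
    40,000 files/min / 60      = ~667 file creates per second
    667 files/s / 200 drives   = ~3.3 new objects per second per drive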
[19:09] <loicd> alphe: hi
[19:10] <cetex> yeah.. it's pushing it
[19:10] <cyberhide> is that read, write, mixed?
[19:10] <cetex> it's mixed.
[19:10] <alphe> hi
[19:10] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[19:10] <cetex> we're writing 40k files per minute 24/7
[19:10] <alphe> loicd so how can I avoid the too many PGs per OSD (1033 > max 300)
[19:11] <loicd> you can tune that with an option
[19:11] <cyberhide> yeah.. are these sata requests?
[19:11] <cetex> and we also need to read them, but we'd most likely read from disk-cache in 90% of the cases.
[19:11] <cyberhide> err. sata drives
[19:11] <loicd> alphe: here it is https://github.com/ceph/ceph/blob/master/src/common/config_opts.h#L211 mon_pg_warn_max_per_osd
[19:14] <alphe> mon_pg_warn_max_per_osd is accessible through ceph commandline ?
[19:14] <alphe> or do I have to recompile the whole ceph to change that ?
[19:14] * shylesh (~shylesh@1.23.174.55) has joined #ceph
[19:15] <cetex> so maybe ceph isn't what we need..
[19:15] <cetex> i've been thinking of doing avro over hdfs as well
[19:15] * brutusca_ (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[19:16] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[19:16] <cetex> but i don't like hdfs..
[19:17] * ircolle is now known as ircolle-brb
[19:19] <alphe> basically pg and pgp policy have changed
[19:20] <alphe> in the docs you have a basic equation to calculate the pg and pgp num you need for your cluster, but then that same documentation says that more pgs are better and will lower cpu use and memory use
[19:20] * JWilbur (~kalleeen@7R2AAAE3A.tor-irc.dnsbl.oftc.net) Quit ()
[19:20] <alphe> but then you have a max pg per osd commonly hard coded to 300 ...
[19:21] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[19:21] <alphe> and to that problem sage says ... well put more osd in your ceph cluster that will solve the issue
[19:21] <cyberhide> well.. the reason I think its a bit much IO is more because of the drives. Unless you had more drives or more cache (write cache maybe?), i think you would face this with a lot of clustered storage options
[19:21] <alphe> ....
[19:21] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[19:22] <alphe> sage solution to every problem with ceph is ... PUT MORE OSD IN YOUR CLUSTER
[19:22] <cetex> cyberhide: yeah.. that's an issue. i don't really mind if ceph wouldn't store the data properly. we're planning to run multiple datacenters and let it be "opportunistic"
[19:23] <alphe> like we all are google ...
[19:23] <cetex> if it's there it's there, if it's not, it's not..
[19:23] <cetex> :)
[19:23] * kefu is now known as kefu|afk
[19:23] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) has joined #ceph
[19:23] * Hemanth (~Hemanth@117.192.230.10) has joined #ceph
[19:24] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[19:25] * ircolle-brb is now known as ircolle
[19:25] <cyberhide> yeah.. that could work... would it be 150-200 drives per datacenter, or in total across all datacenters?
[19:25] <cetex> per datacenter
[19:26] <cetex> we're planning to store around 1weeks worth of data currently, so around 150-200TB
[19:26] <cetex> 150-200TB * replicas
[19:26] <cetex> but maybe we could drop the number of replicas
[19:26] <cetex> since we're writing to more than one
[19:27] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) Quit (Quit: Connection closed for inactivity)
[19:28] <alphe> loicd I put mon_pg_warn_max_per_osd = 3000 in my ceph.conf then propagated that ceph.conf to all my nodes then restarted the cluster
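Spelling that out, since the "CLI or recompile?" question comes up a lot: it is just a mon option, so it can live in ceph.conf or be injected at runtime without a restart. A sketch assuming the Firefly/Hammer-era injectargs syntax; note this only silences the warning, it does not change the actual PG-per-OSD ratio:

    # ceph.conf, then restart the mons
    [mon]
    mon_pg_warn_max_per_osd = 3000

    # or change it on the fly
    ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 3000'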
[19:29] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[19:29] * JV (~chatzilla@204.14.239.106) Quit (Ping timeout: 480 seconds)
[19:32] * kefu|afk (~kefu@114.86.209.84) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[19:32] * bkopilov (~bkopilov@bzq-79-176-131-50.red.bezeqint.net) has joined #ceph
[19:32] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[19:33] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:33] * sleinen1 (~Adium@2001:620:0:82::107) Quit (Ping timeout: 480 seconds)
[19:35] * linjan (~linjan@80.179.241.26) has joined #ceph
[19:35] <cetex> so. 2 copies per file would most likely be enough.
[19:35] <cyberhide> yeah.. in different datacenters.. a bit thin on protection, but still doable
[19:35] <cetex> how would the object gateway handle that?
[19:35] <cetex> yeah.
[19:35] <cetex> we are using a cloud-provider today
[19:36] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[19:36] <cetex> but sometimes we have issues reaching them (even though we have our own 10gb fibres in full redundancy to them)
[19:36] <alphe> loicd problem solved thank you very much !
[19:36] <cetex> since it's over ~2500km of fibre the shortest path, and ~5000km of fibre the other.
[19:36] <cetex> so there's issues..
[19:36] <alphe> health HEALTH_OK
[19:37] <cetex> so we're thinking about building a storage-cluster in one of our own datacenters to handle the writes.
[19:37] <cetex> and therefore we could most likely go with one copy
[19:37] <cyberhide> multiple gateways?
[19:37] <cetex> but we'll go for two at this time so we have some headroom in case things change.
[19:37] * Manshoon (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[19:37] <cetex> yeah. how does each gateway scale? :)
[19:38] <loicd> alphe: you're welcome !
[19:38] <cyberhide> good question... its an area I havent gotten deep into besides playing with the s3 api
[19:38] <cetex> ok. :)
[19:39] <cyberhide> federated gateway might be an option, though
[19:39] * Manshoon_ (~Manshoon@208.184.50.130) Quit (Ping timeout: 480 seconds)
[19:39] <cetex> hm, yeah..
[19:39] <cetex> we'll most likely need to split them up a bit..
[19:39] <cetex> i guess 400M files in one pool is a bit too much for the storage gateways?
[19:39] <cyberhide> not sure if thats the kind of scaling you are looking for
[19:40] <cyberhide> that im not sure.. lol
[19:40] <cetex> :>
[19:40] <cetex> hm hm..
[19:40] <cyberhide> in theory.. with enough horsepower, it should be able to
[19:40] <cetex> right.
[19:40] <cyberhide> its just caching account/container and object metadata
[19:41] <cetex> ok.
[19:41] <cetex> hm.. does the object gateway work with the caching layer as well?
[19:41] <cyberhide> and then passing the object through (and adding the right headers for the given api)
[19:41] <cyberhide> i would assume so.. it accesses the data the same way anything else would.. standard rados lib calls i believe
[19:42] <cetex> we could dedicate ~30machines per site with ssd's to do caching for writes and reads with ssd's as well, but we still need the "main storage" (all the harddrives) to perform relatively decently with reads/writes since we may get a (or a few) requests for old data at any time and the latency shouldn't be too high..
[19:42] <cetex> 30machines at most though. would be great with 5-10...
[19:43] <cyberhide> 30 caching machines.. thats quiet a bit
[19:43] <cyberhide> quite I should say
[19:43] <cetex> well.. that's what we have available during planning now at least..
[19:44] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[19:44] <cyberhide> better to have too much than not enough :)
[19:45] <cetex> we need 2 hosts per rack with high availability, so we're placing them in 2 different blade chassis; the other 3 machines per chassis could theoretically do caching.
[19:45] * Manshoon (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[19:46] * LeaChim (~LeaChim@host86-171-90-60.range86-171.btcentralplus.com) has joined #ceph
[19:46] <cyberhide> hmm.. sounds like you've got rack/chassis covered
[19:46] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[19:46] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[19:47] <cetex> i think we do. ideally i wouldn't need them and save them for something else though. but does librados in the storage gateways / caching layer improve total throughput over time?
[19:47] <cetex> or is it only delaying the writes?
[19:48] * bkopilov (~bkopilov@bzq-79-176-131-50.red.bezeqint.net) Quit (Read error: Connection reset by peer)
[19:48] <cetex> for example, can we do writes to the cache which replicates between ssd's and then do writes directly to the filesystem bypassing the journal or something else performance-improving?
[19:49] <cetex> because the spindles have relatively hard limits on throughput
[19:49] * bkopilov (~bkopilov@109.67.120.102) has joined #ceph
[19:49] <cetex> especially if they need to write to a journal and then write to the filesystem.
[19:49] <cyberhide> well.. the OSDs themselves use direct aio
[19:50] <cyberhide> and the lib rados calls from the gateway talk directly to the OSD daemons
[19:50] <cyberhide> and if you are journaled to ssds, even better on writes
[19:50] * JV (~chatzilla@204.14.239.106) has joined #ceph
[19:50] <cetex> yeah. but the ssd's won't apply to the osd's..
[19:50] <cetex> only to caching layer
[19:51] <cetex> (5x2.5" drives vs 2x3.5" drives)
[19:51] <cyberhide> true.. but the ssds will give a notify back after caching and then quietly go flush to disk when it can (writeback)
[19:52] <cyberhide> is there a writethrough mode? wonder if thats what you are looking for
[19:52] <cetex> i don't know.. :)
[19:52] <cetex> documentation is lacking
[19:52] <cetex> one of the issues seems to be that ceph is written to do "guaranteed writes at all times"
[19:52] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:52] <cyberhide> hmm.. it appears it does for rbd, looking if librados internally supports it
[19:53] <cyberhide> probably does, just not sure if its a setting or something you can expose to the gateway
[19:53] <cetex> i'd like to bypass that. if something fails, let it fail. if something succeeds, good, i'm happy, but i'd rather that stuff fails than it chokes the pipeline..
[19:53] <cetex> ok. :)
[19:54] <cyberhide> yeah.. documentation doesnt seem to exist on librados supporting write through for cache.. just rbd
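For reference, the cache tiering being discussed is configured entirely from the CLI and sits underneath librados clients (radosgw included), so they just keep talking to the base pool. A minimal sketch of a writeback tier with made-up pool names; whether a true write-through behaviour is exposed for this path is exactly the open question above:

    # put an SSD pool in front of a spinning-disk pool
    ceph osd tier add cold-pool hot-pool
    ceph osd tier cache-mode hot-pool writeback
    ceph osd tier set-overlay cold-pool hot-pool
    # the cache pool needs a hit set to track object temperature
    ceph osd pool set hot-pool hit_set_type bloom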
[19:54] * brutuscat (~brutuscat@124.Red-176-84-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:56] <cetex> :)
[19:56] <cetex> i have to leave for a short while (30-60min?) but i'll be back.
[19:59] * MACscr (~Adium@2601:d:c800:de3:c80a:7213:bac5:fd3b) has joined #ceph
[19:59] * shaunm (~shaunm@74.215.76.114) Quit (Ping timeout: 480 seconds)
[19:59] * Hemanth (~Hemanth@117.192.230.10) Quit (Read error: Connection timed out)
[20:08] <dostrow> Hello all, I have some scaling and config best practices questions for whoever would be so kind. We are switching our storage solution over to ceph and have tested rados block storage rather extensively, but I am now evaluating radosgw.
[20:08] * scuttle|afk is now known as scuttlemonkey
[20:09] * jwilkins (~jwilkins@c-50-131-97-162.hsd1.ca.comcast.net) has joined #ceph
[20:09] <dostrow> We have 1.4 billion objects under management, generating roughly 500,000 new objects per day; these are easily organized into roughly 7 million "buckets" of data, with the number of objects per bucket ranging from 1 to 10,000.
[20:10] <dostrow> First question, does it make sense to create that many discrete buckets
[20:10] <dostrow> second, is there any downside in having them all live in the same pool
[20:10] <dostrow> we are planning a 16 node cluster, each node with 45 OSDs
[20:11] <dostrow> fronted by an SSD caching layer as well
[20:12] * Manshoon_ (~Manshoon@208.184.50.131) has joined #ceph
[20:12] <dostrow> third question, depending on the answers to 1 and 2, what is best practice for placement group allocation for the supporting pools for radosgw with this sort of data set
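On the placement-group question, the usual starting point is the docs' rule of thumb of roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two, with most of that budget going to .rgw.buckets and only small pg_num values for the many tiny radosgw metadata pools. A back-of-envelope sketch for the cluster described above (illustrative, not a recommendation):

    16 nodes * 45 OSDs              = 720 OSDs
    720 * 100 / 3 replicas          = 24,000
    rounded up to a power of two    = 32,768 PGs total budget
    (most of it to .rgw.buckets, a few hundred to .rgw.buckets.index,
     and 8-64 each for the remaining small rgw pools)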
[20:17] * Manshoon (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[20:17] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[20:18] * bkopilov (~bkopilov@109.67.120.102) Quit (Ping timeout: 480 seconds)
[20:20] * shylesh (~shylesh@1.23.174.55) Quit (Remote host closed the connection)
[20:20] * hellertime (~Adium@72.246.0.14) Quit (Quit: Leaving.)
[20:20] * Tumm (~hifi@95.128.43.164) has joined #ceph
[20:23] <cyberhide> can anyone help with the following message my OSDs are frequently dumping out, then restarting:
[20:23] <cyberhide> 2015-05-05 13:20:26.251618 7fb21602e700 1 heartbeat_map is_healthy 'OSD::op_tp thread 0x7fb1fedfd700' had timed out after 15 2015-05-05 13:20:26.251634 7fb21602e700 1 heartbeat_map is_healthy 'OSD::op_tp thread 0x7fb1fedfd700' had suicide timed out after 150
[20:23] * Manshoon_ (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[20:23] <cyberhide> im running 0.80-9
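Those two lines are the osd op thread heartbeat and suicide timeouts (15s and 150s defaults in that era). They can be raised, but hitting them usually means the OSD's disk or the node is badly overloaded, so raising them mostly hides the symptom. A sketch of the knobs, assuming firefly option names:

    [osd]
    osd_op_thread_timeout = 30
    osd_op_thread_suicide_timeout = 300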
[20:23] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[20:26] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:26] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:28] * logan (~logan@63.143.49.103) Quit (Ping timeout: 480 seconds)
[20:31] * logan (~a@63.143.49.103) has joined #ceph
[20:32] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[20:33] * giorgis (~oftc-webi@ppp-2-87-14-187.home.otenet.gr) has joined #ceph
[20:33] <debian112> anyone using 4TB drives in their cluster?
[20:33] * kevinkevin (~oftc-webi@deepthroat.unetresgrossebite.com) Quit (Remote host closed the connection)
[20:33] * JV (~chatzilla@204.14.239.106) Quit (Ping timeout: 480 seconds)
[20:33] <giorgis> hello people!!
[20:34] <giorgis> can someone help me with a pool problem?
[20:34] <giorgis> I have a pool with no name
[20:34] * Manshoon (~Manshoon@199.16.199.4) Quit (Remote host closed the connection)
[20:34] <giorgis> how can I rename that pool?
[20:34] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[20:34] <debian112> osd pool rename <poolname> <poolname>
[20:34] <giorgis> @debian112: the pool name is empty :(
[20:34] <cephalobot> giorgis: Error: "debian112:" is not a valid command.
[20:35] <giorgis> I don't know how it was created
[20:35] <giorgis> 3 data,4 metadata,5 rbd,6 .rgw,7 .rgw.control,8 .rgw.gc,9 .log,10 .intent-log,11 .usage,12 .users,13 .users.email,14 .users.swift,15 .users.uid,16 .rgw.root,17 .rgw.buckets.index,18 .rgw.buckets,19 .rgw.buckets.extra,20 volumes,21 ,
[20:35] <giorgis> this is the output from the last pool
[20:35] <giorgis> *lspools
[20:35] <giorgis> the last pool (number 21) is the problematic
[20:36] <giorgis> is there a way to rename a pool using the pool ID?
[20:36] <debian112> is there data in that pool
[20:36] <debian112> that you want
[20:36] <giorgis> no
[20:36] <giorgis> rados df
[20:36] <giorgis> shows no objects in there
[20:38] <debian112> are you running hammer?
[20:38] * clayb (~clayb@67-40-154-97.hlrn.qwest.net) has left #ceph
[20:39] <giorgis> no it is firefly
[20:39] <debian112> try this: ceph osd pool delete "" "" --yes-i-really-really-mean-it
[20:40] <giorgis> ceph version 0.80.9 (b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)
[20:40] <giorgis> isn't there a safer way
[20:40] <giorgis> to delete the pool using the ID?
[20:40] <giorgis> which is 21
[20:41] <cetex> back.
[20:42] <debian112> let me check
[20:43] <cetex> debian112: i'm using 4tb drives.
[20:43] <cetex> debian112: for some testing
[20:45] <debian112> giorgis: you could try giving it a name, then delete that name
[20:45] <giorgis> how can I do that?
[20:46] <debian112> ceph osd pool rename "" newpool
[20:48] <debian112> cetex, ok cool
[20:48] <debian112> I am build a new ceph node for our cloud
[20:48] <debian112> building
[20:49] <giorgis> Invalid command: missing required parameter srcpool(<poolname>)
[20:49] <giorgis> using "" ""
[20:49] <giorgis> produced the same result :(
[20:49] <debian112> switch to: rados rmpool "" ""
[20:50] * Tumm (~hifi@789AAAJ1J.tor-irc.dnsbl.oftc.net) Quit ()
[20:51] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[20:53] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[20:53] <giorgis> debian112: I am afraid to pass an empty name in case it accidentally deletes a proper pool
[20:54] <debian112> I hear ya
[20:54] <giorgis> if I could at least verify somehow that "" "" would be for that pool...
[20:54] * vbellur (~vijay@122.167.250.154) Quit (Ping timeout: 480 seconds)
[20:55] <giorgis> that's why I am wondering if there is a way to remove it using the id which is unique
[20:55] <debian112> leave it if it's not hurting anything. I didn't find any reference with using the number for the pool
[20:55] * dyasny (~dyasny@198.251.61.137) has joined #ceph
[20:55] <giorgis> I have a problem
[20:55] <giorgis> I have 8pgs stuck + unclean
[20:55] <giorgis> that are coming from that pool
[20:56] <giorgis> because I 've changed the CRUSH MAP
[20:56] <debian112> I see
[20:56] <giorgis> to host replication rather than on osd
[20:56] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[20:56] <giorgis> and now I want at least to change the replication level
[20:56] <giorgis> if I could do that
[20:56] <giorgis> for that pool
[20:56] <debian112> what is that 2 or 3?
[20:56] <giorgis> everything would be normal
[20:56] <giorgis> the replication is 3
[20:56] <giorgis> but I have only 2 hosts
[20:56] <giorgis> therefore the error
[20:57] <debian112> you need another host
[20:57] <debian112> is the easiest way
[20:58] <debian112> what is your pg set to:
[20:58] <giorgis> no way at the moment :(
[20:58] <giorgis> the pg_num for that pool is 8
[20:59] <giorgis> pool 21 '' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 1562 owner 18446744073709551615 flags hashpspool stripe_width 0
[20:59] <debian112> figured
[20:59] <giorgis> HEALTH_WARN 8 pgs stuck unclean
[20:59] <giorgis> and I have exactly 8 stuck unclean
[20:59] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:00] <giorgis> I don't mind at all get rid of that pool but I want to make sure nothing else happens to the rest :(
[21:01] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) has joined #ceph
[21:01] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[21:02] <debian112> need some feedback on this new server I am putting together for ceph:
[21:02] <debian112> http://paste.debian.net/171616/
[21:03] * Manshoon (~Manshoon@199.16.199.4) Quit (Remote host closed the connection)
[21:03] <debian112> Basically I am looking to provide slow storage with journaling on SSD (36TB) and fast SSD storage (1TB) per server
[21:04] <debian112> anyone see a problem with the hardware config?
[21:04] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[21:04] <debian112> 10 GB on front end, and 20GB on backend
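One thing worth double-checking on a build like that is SSD journal sizing; the docs' rule of thumb is journal size = 2 * (expected throughput * filestore max sync interval). A rough worked example assuming the defaults of the time and spinner-limited OSDs (numbers are illustrative):

    per-OSD throughput (7.2k spinner)      ~= 150 MB/s
    filestore max sync interval (default)   = 5 s
    journal size = 2 * 150 MB/s * 5 s      ~= 1.5 GB per OSD
    (5-10 GB per OSD is a common safety margin on a shared SSD)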
[21:04] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[21:07] * BManojlovic (~steki@cable-89-216-173-30.dynamic.sbb.rs) has joined #ceph
[21:11] * BManojlovic (~steki@cable-89-216-173-30.dynamic.sbb.rs) Quit ()
[21:11] <giorgis> debian112: thanks for your help
[21:11] <giorgis> rados rmpool "" "" --yes-i-really-really-mean-it
[21:11] <giorgis> did the trick...
[21:11] <giorgis> no I have again a healthy cluster :D
[21:11] * BManojlovic (~steki@cable-89-216-173-30.dynamic.sbb.rs) has joined #ceph
[21:12] <debian112> no problem
[21:12] <cetex> :)
[21:13] <debian112> good something for the ceph guys to add
[21:13] <debian112> remove pool based on number
[21:13] * Manshoon_ (~Manshoon@208.184.50.131) has joined #ceph
[21:13] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[21:13] <giorgis> debian112: yes....based on number would be much more efficient
[21:14] <giorgis> and more error-resistant
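For anyone who hits the same nameless-pool situation, the sequence that worked here, with a verification step first; the empty name has to be quoted, and there is no delete-by-ID in this release:

    # confirm which ID the empty name maps to and that it holds no objects
    ceph osd lspools
    rados df
    # then remove it by its (empty) name
    rados rmpool "" "" --yes-i-really-really-mean-it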
[21:16] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[21:17] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[21:18] * Manshoon_ (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[21:18] * Manshoon_ (~Manshoon@199.16.199.4) has joined #ceph
[21:19] * Manshoon (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[21:20] * MKoR (~nih@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[21:24] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:27] * subscope (~subscope@92-249-244-15.pool.digikabel.hu) has joined #ceph
[21:28] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[21:29] * raif (~holoirc@202.161.23.74) Quit (Read error: No route to host)
[21:29] * ngoswami (~ngoswami@1.39.12.68) has joined #ceph
[21:30] * puffy (~puffy@50.185.218.255) has joined #ceph
[21:30] * puffy (~puffy@50.185.218.255) Quit ()
[21:34] * Knorrie (knorrie@yoshi.kantoor.mendix.nl) Quit (Ping timeout: 480 seconds)
[21:37] * Knorrie (knorrie@yoshi.kantoor.mendix.nl) has joined #ceph
[21:39] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[21:43] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[21:43] * i_m (~ivan.miro@pool-109-191-92-175.is74.ru) Quit (Ping timeout: 480 seconds)
[21:50] * MKoR (~nih@7R2AAAFD8.tor-irc.dnsbl.oftc.net) Quit ()
[21:50] * xanax` (~matx@spftor1e1.privacyfoundation.ch) has joined #ceph
[21:52] * loganb (~loganb@vpngac.ccur.com) has left #ceph
[21:59] * ngoswami (~ngoswami@1.39.12.68) Quit (Quit: Leaving)
[22:09] * karnan (~karnan@106.51.243.12) Quit (Ping timeout: 480 seconds)
[22:15] <seapasulli> anyone know why the bucket.list() xml size element for an object would be different than the key.size for an object in ceph/s3?
[22:18] * bkopilov (~bkopilov@bzq-79-176-38-204.red.bezeqint.net) has joined #ceph
[22:18] * karnan (~karnan@106.51.243.25) has joined #ceph
[22:19] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[22:20] * xanax` (~matx@7R2AAAFGB.tor-irc.dnsbl.oftc.net) Quit ()
[22:20] * Nijikokun (~colde@tor00.telenet.unc.edu) has joined #ceph
[22:21] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:21] * Manshoon (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[22:24] <seapasulli> from what I understand these should always be the same
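A size disagreement between the bucket listing and the object itself often points at a stale bucket index. One thing that may be worth trying (hedged, since the root cause here isn't known) is having radosgw check, and optionally rebuild, the index for that bucket:

    radosgw-admin bucket check --bucket=<bucket-name>
    # add --fix (and optionally --check-objects) to repair the index if it disagrees
    radosgw-admin bucket check --bucket=<bucket-name> --check-objects --fix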
[22:24] * nsoffer (~nsoffer@bzq-109-64-255-30.red.bezeqint.net) has joined #ceph
[22:26] * Manshoon_ (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[22:30] * bkopilov (~bkopilov@bzq-79-176-38-204.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[22:30] * shohn1 (~shohn@dslb-178-008-196-072.178.008.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[22:30] * ircolle1 (~ircolle@2601:1:a580:1735:ea2a:eaff:fe91:b49b) has joined #ceph
[22:32] * karnan (~karnan@106.51.243.25) Quit (Quit: Leaving)
[22:35] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) has joined #ceph
[22:35] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Remote host closed the connection)
[22:39] * mykola (~Mikolaj@91.225.201.211) Quit (Quit: away)
[22:39] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[22:41] * Manshoon (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[22:42] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[22:43] * georgem (~Adium@184.151.190.196) has joined #ceph
[22:44] * Nacer (~Nacer@2001:41d0:fe82:7200:740d:3453:9550:ad09) has joined #ceph
[22:49] * Nijikokun (~colde@7R2AAAFIM.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[22:49] <todin> I read a lot about the newstore on the ml, how do I activate the newstore to test it?
[22:53] * MACscr (~Adium@2601:d:c800:de3:c80a:7213:bac5:fd3b) Quit (Quit: Leaving.)
[22:53] * tw0fish (~twofish@UNIX5.ANDREW.CMU.EDU) Quit (Quit: leaving)
[22:54] * puffy (~puffy@172.56.40.50) has joined #ceph
[22:56] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has left #ceph
[22:59] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Ping timeout: 480 seconds)
[23:02] * rendar (~I@host143-177-dynamic.8-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[23:02] * linjan (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[23:04] * gregsfortytwo (~gregsfort@209.132.181.86) Quit (Ping timeout: 480 seconds)
[23:05] * off_rhoden (~off_rhode@209.132.181.86) Quit (Ping timeout: 480 seconds)
[23:05] * rendar (~I@host143-177-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[23:06] * puffy (~puffy@172.56.40.50) Quit (Quit: Leaving.)
[23:06] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[23:13] * linjan (~linjan@213.8.240.146) has joined #ceph
[23:13] * alram_ (~alram@206.169.83.146) has joined #ceph
[23:15] * georgem (~Adium@184.151.190.196) Quit (Quit: Leaving.)
[23:16] <joshd> todin: still in a branch, wip-newstore - and it's likely unstable/incomplete still
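For the record, trying newstore at that point meant building the wip branch and explicitly opting in via ceph.conf. A rough sketch pieced together from the ML discussion of the time; the option names are from memory, so verify them before use:

    git clone https://github.com/ceph/ceph.git
    cd ceph && git checkout wip-newstore
    ./install-deps.sh && ./autogen.sh && ./configure && make

    # ceph.conf on the test OSDs (experimental, not for real data)
    [osd]
    enable experimental unrecoverable data corrupting features = newstore
    osd objectstore = newstore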
[23:17] * MACscr (~Adium@2601:d:c800:de3:b943:79cd:baeb:1ce3) has joined #ceph
[23:18] * Manshoon (~Manshoon@199.16.199.4) Quit (Remote host closed the connection)
[23:19] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[23:20] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[23:20] * alram (~alram@206.169.83.146) Quit (Ping timeout: 480 seconds)
[23:21] * georgem1 (~Adium@184.151.190.196) has joined #ceph
[23:21] * georgem1 (~Adium@184.151.190.196) Quit ()
[23:21] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Read error: Connection reset by peer)
[23:21] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[23:22] * BManojlovic (~steki@cable-89-216-173-30.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:26] * Nacer (~Nacer@2001:41d0:fe82:7200:740d:3453:9550:ad09) Quit (Remote host closed the connection)
[23:29] * nhm (~nhm@172.56.3.98) Quit (Ping timeout: 480 seconds)
[23:31] * daniel2_ (~daniel@209.163.140.194) Quit (Ping timeout: 480 seconds)
[23:32] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[23:33] * gregsfortytwo (~gregsfort@209.132.181.86) has joined #ceph
[23:34] * off_rhoden (~off_rhode@209.132.181.86) has joined #ceph
[23:34] <seapasulli> is there any set of docs that tells me what each field in the ceph logs shows?
[23:36] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[23:36] * Manshoon_ (~Manshoon@208.184.50.131) has joined #ceph
[23:38] * giorgis (~oftc-webi@ppp-2-87-14-187.home.otenet.gr) Quit (Remote host closed the connection)
[23:41] * fsimonce (~simon@host11-35-dynamic.32-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[23:43] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[23:43] * Manshoon (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[23:50] * mwilcox_ (~mwilcox@116.251.192.71) has joined #ceph
[23:50] * fsckstix (~stevpem@1.136.96.133) has joined #ceph
[23:50] * Swompie` (~roaet@5-12-170-175.residential.rdsnet.ro) has joined #ceph
[23:53] * Manshoon_ (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[23:56] * Tume|Sai (sai@ns1.kaatajat.net) Quit (Remote host closed the connection)
[23:58] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.