#ceph IRC Log


IRC Log for 2015-04-15

Timestamps are in GMT/BST.

[0:00] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) Quit (Remote host closed the connection)
[0:01] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) has joined #ceph
[0:02] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[0:05] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[0:05] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[0:07] * utugi______ (~clusterfu@425AAALKU.tor-irc.dnsbl.oftc.net) Quit ()
[0:07] * Rehevkor (~hoopy@chomsky.torservers.net) has joined #ceph
[0:07] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:07] * xcezzz (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) Quit (Read error: Connection reset by peer)
[0:07] * xcezzz (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) has joined #ceph
[0:07] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) has joined #ceph
[0:07] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:13] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[0:13] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[0:17] * fsimonce (~simon@host178-188-dynamic.26-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:20] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:20] * fireD_ (~fireD@93-139-207-158.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[0:21] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) has joined #ceph
[0:23] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) Quit (Remote host closed the connection)
[0:24] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) has joined #ceph
[0:24] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[0:25] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[0:28] * BManojlovic (~steki@cable-89-216-225-32.dynamic.sbb.rs) Quit ()
[0:34] * rendar (~I@host122-176-dynamic.3-87-r.retail.telecomitalia.it) Quit ()
[0:36] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[0:37] * Rehevkor (~hoopy@2WVAABK4P.tor-irc.dnsbl.oftc.net) Quit ()
[0:37] * MACscr (~Adium@2601:d:c800:de3:4195:a4b1:5af2:f82c) has joined #ceph
[0:37] * LorenXo (~dug@176.10.99.207) has joined #ceph
[0:37] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[0:39] * shivark (~oftc-webi@32.97.110.57) Quit (Remote host closed the connection)
[0:43] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[0:49] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:50] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[0:53] * reed (~reed@198.23.103.98-static.reverse.softlayer.com) Quit (Read error: Connection reset by peer)
[0:53] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Read error: Connection reset by peer)
[0:54] * davidzlap (~Adium@206.169.83.146) Quit (Quit: Leaving.)
[0:54] * davidzlap (~Adium@206.169.83.146) has joined #ceph
[0:54] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[0:54] * davidzlap (~Adium@206.169.83.146) Quit ()
[0:55] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[0:58] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Ping timeout: 480 seconds)
[1:00] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) Quit (Ping timeout: 480 seconds)
[1:05] * fghaas (~florian@185.15.236.4) Quit (Quit: Leaving.)
[1:05] * wschulze (~wschulze@38.96.12.2) Quit (Read error: Connection reset by peer)
[1:06] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) has joined #ceph
[1:06] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) Quit ()
[1:06] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) has joined #ceph
[1:07] * LorenXo (~dug@5NZAABNCU.tor-irc.dnsbl.oftc.net) Quit ()
[1:07] * SquallSeeD31 (~brianjjo@india012.server4you.net) has joined #ceph
[1:10] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) has joined #ceph
[1:12] * Concubidated (~Adium@206.169.83.146) Quit (Ping timeout: 480 seconds)
[1:14] * wschulze (~wschulze@38.96.12.2) has joined #ceph
[1:24] * asalor (~asalor@0001ef37.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:25] * asalor (~asalor@218.83.broadband9.iol.cz) has joined #ceph
[1:28] * oms101 (~oms101@p20030057EA00CA00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:36] * oms101 (~oms101@p20030057EA002A00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:37] * SquallSeeD31 (~brianjjo@2FBAABFOA.tor-irc.dnsbl.oftc.net) Quit ()
[1:37] * Bonzaii (~Tarazed@192.42.116.16) has joined #ceph
[1:37] * p66kumar (~p66kumar@74.119.205.248) Quit (Quit: p66kumar)
[1:40] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[1:40] <_robbat2|irssi> is yehuda around?
[1:40] <_robbat2|irssi> i wanted to follow up an email of his: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17203.html
[1:40] <_robbat2|irssi> about multiple domain names in 'rgw dns name'
[1:46] <_robbat2|irssi> he says to put it in the region endpoints, but it's not clear if that should be hostnames or full URLs
[1:46] <_robbat2|irssi> and after that is set, should 'rgw dns name' be removed?
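[Editor's note] The region-endpoint approach discussed above can be sketched roughly as follows. This is a hedged sketch only: the `example.com` hostnames are placeholders, and the field names reflect the Hammer-era `radosgw-admin region` commands, so verify them against your release.

```shell
# Dump the current region, edit it, and load it back.
radosgw-admin region get > region.json
# In region.json, "endpoints" takes full URLs, e.g.
#   "endpoints": ["http://rgw1.example.com:80/"],
# while the Hammer-era "hostnames" field takes bare hostnames:
#   "hostnames": ["s3.example.com", "s3.example.org"],
radosgw-admin region set < region.json
radosgw-admin regionmap update
# Restart radosgw afterwards; with hostnames set in the region,
# 'rgw dns name' in ceph.conf should become redundant.
```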
[1:50] * MACscr (~Adium@2601:d:c800:de3:4195:a4b1:5af2:f82c) Quit (Quit: Leaving.)
[1:58] * ircolle (~ircolle@66-194-8-225.static.twtelecom.net) has left #ceph
[1:58] * daniel2_ (~daniel@12.164.168.117) Quit (Remote host closed the connection)
[2:03] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Quit: Leaving.)
[2:04] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:04] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[2:05] * alram (~alram@38.96.12.2) has joined #ceph
[2:07] * Bonzaii (~Tarazed@5NZAABNGA.tor-irc.dnsbl.oftc.net) Quit ()
[2:07] * aleksag (~Revo84@176.10.99.208) has joined #ceph
[2:08] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[2:08] * Concubidated1 (~Adium@71.21.5.251) has joined #ceph
[2:09] * Concubidated (~Adium@71.21.5.251) Quit (Read error: Connection reset by peer)
[2:19] * davidzlap (~Adium@cpe-23-242-189-171.socal.res.rr.com) has joined #ceph
[2:25] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) Quit (Quit: Leaving.)
[2:25] * fouxm (~foucault@ks01.commit.ninja) Quit (Ping timeout: 480 seconds)
[2:25] <skullone> is ceph-deploy a culmination of fabric scripts?
[2:26] * fouxm (~foucault@ks01.commit.ninja) has joined #ceph
[2:26] <off_rhoden> skullone: ceph-deploy does not use fabric at all
[2:26] * asalor (~asalor@218.83.broadband9.iol.cz) Quit (Ping timeout: 480 seconds)
[2:33] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[2:34] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:34] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[2:36] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[2:36] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[2:37] * aleksag (~Revo84@3OZAAA5K5.tor-irc.dnsbl.oftc.net) Quit ()
[2:37] * Knuckx (~Spessu@176.10.99.206) has joined #ceph
[2:40] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[2:40] * yanzheng (~zhyan@182.139.204.64) has joined #ceph
[2:47] <skullone> how are people deploying ceph with ceph-deploy currently? with the website as slow as it is, i keep getting stuck either downloading packages, or adding the gpg key to yum ;(
[2:54] * LeaChim (~LeaChim@host86-143-18-67.range86-143.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:55] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) has joined #ceph
[2:57] * wschulze (~wschulze@38.96.12.2) Quit (Quit: Leaving.)
[2:57] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) has joined #ceph
[2:57] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) Quit ()
[3:02] * yanzheng (~zhyan@182.139.204.64) Quit (Quit: This computer has gone to sleep)
[3:03] * yanzheng (~zhyan@182.139.204.64) has joined #ceph
[3:04] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[3:07] * Knuckx (~Spessu@3OZAAA5LR.tor-irc.dnsbl.oftc.net) Quit ()
[3:08] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:10] * TMM (~hp@46.243.30.149) Quit (Quit: Ex-Chat)
[3:12] <dmick> which website is slow?
[3:12] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) Quit (Quit: Leaving.)
[3:16] * fam is now known as fam_away
[3:17] * via (~via@smtp2.matthewvia.info) Quit (Ping timeout: 480 seconds)
[3:18] * wschulze (~wschulze@38.96.12.2) has joined #ceph
[3:18] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[3:24] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[3:25] * wschulze (~wschulze@38.96.12.2) Quit (Quit: Leaving.)
[3:33] * root3 (~root@p5DDE6F1C.dip0.t-ipconnect.de) has joined #ceph
[3:35] * MVenesio (~MVenesio@186.136.59.165) Quit (Ping timeout: 480 seconds)
[3:37] * Quackie (~Xylios@edwardsnowden1.torservers.net) has joined #ceph
[3:40] * root2 (~root@pD9E9DA3E.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:44] * DV__ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[3:46] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[3:46] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[3:49] * kefu (~kefu@114.92.111.70) has joined #ceph
[3:53] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[3:53] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[3:55] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:55] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[3:59] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:03] * elder (~elder@50.250.13.174) Quit (Ping timeout: 480 seconds)
[4:05] * kefu (~kefu@114.92.111.70) has joined #ceph
[4:07] * Quackie (~Xylios@5NZAABNMH.tor-irc.dnsbl.oftc.net) Quit ()
[4:07] * xanax` (~Guest1390@enjolras.gtor.org) has joined #ceph
[4:10] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (Ping timeout: 480 seconds)
[4:12] * fam_away is now known as fam
[4:15] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:16] * elder (~elder@50.153.130.7) has joined #ceph
[4:20] * asalor (~asalor@218.83.broadband9.iol.cz) has joined #ceph
[4:30] * zhaochao (~zhaochao@111.161.77.236) has joined #ceph
[4:37] * xanax` (~Guest1390@2WVAABLCW.tor-irc.dnsbl.oftc.net) Quit ()
[4:37] * Snowcat4 (~datagutt@exit1.telostor.ca) has joined #ceph
[4:40] <skullone> ceph.com :i
[4:40] <skullone> heh
[4:40] <skullone> i mirrored it, so less of an issue now
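[Editor's note] For the slow-ceph.com problem skullone describes, ceph-deploy can be pointed at a local mirror instead of the upstream site. A minimal sketch, assuming a hypothetical mirror host `mirror.example.local` and node names of your own:

```shell
# Install from a local package mirror rather than ceph.com,
# supplying both the repo and the GPG key from the mirror.
ceph-deploy install \
    --repo-url http://mirror.example.local/rpm-hammer/el7 \
    --gpg-url  http://mirror.example.local/release.asc \
    node1 node2 node3
```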
[4:43] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[5:01] * elder (~elder@50.153.130.7) Quit (Ping timeout: 480 seconds)
[5:07] * Snowcat4 (~datagutt@98EAAA9I1.tor-irc.dnsbl.oftc.net) Quit ()
[5:08] * daniel2_ (~daniel@12.0.207.18) has joined #ceph
[5:11] * Jamana (~xul@tor.nullbyte.me) has joined #ceph
[5:19] * elder (~elder@50.250.13.174) has joined #ceph
[5:21] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[5:24] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) has joined #ceph
[5:24] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[5:28] * Vacuum_ (~vovo@i59F793DB.versanet.de) has joined #ceph
[5:35] * Vacuum (~vovo@88.130.221.51) Quit (Ping timeout: 480 seconds)
[5:38] * shyu (~Shanzhi@119.254.196.66) Quit (Ping timeout: 480 seconds)
[5:41] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[5:41] * Jamana (~xul@1GLAABBQ8.tor-irc.dnsbl.oftc.net) Quit ()
[5:41] * Lattyware (~mog_@dreamatorium.badexample.net) has joined #ceph
[5:44] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[5:44] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[5:44] * wushudoin (~wushudoin@209.132.181.86) Quit (Ping timeout: 480 seconds)
[5:48] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[5:53] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:57] * oro (~oro@209.249.118.71) has joined #ceph
[6:01] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[6:05] <_robbat2|irssi> yehuda: if you're around, https://github.com/ceph/ceph/pull/4366
[6:08] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[6:11] * Lattyware (~mog_@5NZAABNS3.tor-irc.dnsbl.oftc.net) Quit ()
[6:11] * Thayli (~Neon@37.187.129.166) has joined #ceph
[6:14] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:14] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[6:15] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:17] * oro (~oro@209.249.118.71) Quit (Ping timeout: 480 seconds)
[6:25] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:26] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[6:27] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[6:28] * lalatenduM (~lalatendu@121.244.87.117) Quit ()
[6:28] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[6:29] * lalatenduM (~lalatendu@121.244.87.117) Quit ()
[6:33] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[6:34] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) Quit (Quit: Leaving.)
[6:38] * wschulze (~wschulze@38.96.12.2) has joined #ceph
[6:38] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) has joined #ceph
[6:40] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[6:41] * Thayli (~Neon@1GLAABBSP.tor-irc.dnsbl.oftc.net) Quit ()
[6:41] * Miho (~PuyoDead@37.48.65.122) has joined #ceph
[6:44] * lalatenduM (~lalatendu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[6:54] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[6:55] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[6:57] * alram (~alram@38.96.12.2) Quit (Quit: Lost terminal)
[6:57] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:03] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[7:03] * amote (~amote@121.244.87.116) has joined #ceph
[7:11] * Miho (~PuyoDead@2WVAABLII.tor-irc.dnsbl.oftc.net) Quit ()
[7:11] * Skyrider (~PcJamesy@aurora.enn.lu) has joined #ceph
[7:13] * vbellur (~vijay@122.167.65.220) Quit (Ping timeout: 480 seconds)
[7:38] * daniel2_ (~daniel@12.0.207.18) Quit (Remote host closed the connection)
[7:41] * Skyrider (~PcJamesy@1GLAABBUU.tor-irc.dnsbl.oftc.net) Quit ()
[7:41] * KungFuHamster (~rapedex@herngaard.torservers.net) has joined #ceph
[7:42] * ChrisNBlum (~ChrisNBlu@178.255.153.117) has joined #ceph
[7:51] * coreping (~Michael_G@n1.coreping.org) Quit (Quit: WeeChat 0.4.2)
[7:53] * Mika_c (~quassel@125.227.22.217) has joined #ceph
[7:54] * coreping (~Michael_G@n1.coreping.org) has joined #ceph
[7:55] * coreping (~Michael_G@n1.coreping.org) Quit ()
[7:57] * bkopilov (~bkopilov@bzq-109-66-134-152.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[7:59] * bkopilov (~bkopilov@bzq-109-66-110-105.red.bezeqint.net) has joined #ceph
[8:00] * overclk (~overclk@121.244.87.117) has joined #ceph
[8:00] * vbellur (~vijay@121.244.87.117) has joined #ceph
[8:01] * coreping (~Michael_G@n1.coreping.org) has joined #ceph
[8:06] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:07] * kefu (~kefu@114.92.111.70) has joined #ceph
[8:11] * KungFuHamster (~rapedex@1GLAABBVL.tor-irc.dnsbl.oftc.net) Quit ()
[8:12] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[8:15] * rdas (~rdas@121.244.87.116) has joined #ceph
[8:20] * cooldharma06 (~chatzilla@14.139.180.52) has joined #ceph
[8:23] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit (Quit: REALITY.SYS Corrupted: Re-boot universe? (Y/N/Q))
[8:24] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Quit: Goodbye)
[8:26] * bandrus (~brian@128.sub-70-211-79.myvzw.com) Quit (Quit: Leaving.)
[8:27] * bkopilov (~bkopilov@bzq-109-66-110-105.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[8:28] * cooldharma06 (~chatzilla@14.139.180.52) has left #ceph
[8:30] * rotbeard (~redbeard@217.110.226.114) has joined #ceph
[8:31] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[8:34] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[8:34] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[8:35] * WWW (~oftc-webi@61.135.169.73) has joined #ceph
[8:35] * kefu (~kefu@114.92.111.70) has joined #ceph
[8:35] <WWW> ceph.com site down?
[8:36] <singler_> doesn't load for me either
[8:40] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[8:41] * bitserker (~toni@88.87.194.130) has joined #ceph
[8:42] * ifur (~osm@0001f63e.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:44] * bandrus (~brian@128.sub-70-211-79.myvzw.com) has joined #ceph
[8:46] * andrew_m (~Uniju@orion.enn.lu) has joined #ceph
[8:47] * WWW (~oftc-webi@61.135.169.73) Quit (Remote host closed the connection)
[8:48] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:50] * bkopilov (~bkopilov@bzq-109-66-27-145.red.bezeqint.net) has joined #ceph
[8:53] * fghaas (~florian@212095007024.public.telering.at) has joined #ceph
[8:57] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:59] * bkopilov (~bkopilov@bzq-109-66-27-145.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[9:00] * phoenix42 (~phoenix42@122.252.249.67) Quit (Ping timeout: 480 seconds)
[9:01] * bandrus (~brian@128.sub-70-211-79.myvzw.com) Quit (Ping timeout: 480 seconds)
[9:02] <Be-El> hi
[9:04] * derjohn_mob (~aj@tmo-109-26.customers.d1-online.com) has joined #ceph
[9:04] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[9:07] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) Quit (Quit: Leaving.)
[9:08] * derjohn_mobi (~aj@fw.gkh-setu.de) has joined #ceph
[9:08] * shyu (~Shanzhi@119.254.196.66) Quit (Ping timeout: 480 seconds)
[9:11] * fghaas (~florian@212095007024.public.telering.at) Quit (Ping timeout: 480 seconds)
[9:12] * bkopilov (~bkopilov@bzq-79-183-150-61.red.bezeqint.net) has joined #ceph
[9:12] * derjohn_mob (~aj@tmo-109-26.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[9:13] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[9:13] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Read error: Connection reset by peer)
[9:13] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) has joined #ceph
[9:13] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[9:14] * getup (~getup@gw.office.cyso.net) has joined #ceph
[9:16] * andrew_m (~Uniju@2WVAABLOC.tor-irc.dnsbl.oftc.net) Quit ()
[9:16] * DV__ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[9:16] * hifi1 (~utugi____@5NZAABN5K.tor-irc.dnsbl.oftc.net) has joined #ceph
[9:18] <qstion> skullone: does RH meddle in Ceph's business after the acquisition?
[9:18] * Concubidated1 (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[9:18] <qstion> i'm interested too :)
[9:20] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:21] * derjohn_mobi (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[9:22] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[9:22] * wschulze (~wschulze@38.96.12.2) Quit (Quit: Leaving.)
[9:22] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[9:25] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[9:25] * Concubidated (~Adium@71.21.5.251) Quit ()
[9:26] * dgurtner (~dgurtner@178.197.231.228) has joined #ceph
[9:28] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[9:32] * bitserker (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[9:32] * analbeard (~shw@support.memset.com) has joined #ceph
[9:32] * bitserker (~toni@88.87.194.130) has joined #ceph
[9:33] * fsimonce (~simon@host178-188-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[9:33] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[9:36] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[9:40] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[9:42] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) has joined #ceph
[9:42] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[9:42] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[9:42] * rendar (~I@host38-179-dynamic.23-79-r.retail.telecomitalia.it) has joined #ceph
[9:46] * hifi1 (~utugi____@5NZAABN5K.tor-irc.dnsbl.oftc.net) Quit ()
[9:46] * RaidSoft (~Thononain@171.ip-5-135-148.eu) has joined #ceph
[9:47] * atze (~oftc-webi@mail.nnm.nl) has joined #ceph
[9:48] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[9:49] * bkopilov (~bkopilov@bzq-79-183-150-61.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[9:49] <atze> hi. A question on SSD journals and mixing IO. In general mixing IO (high-IOPS small block vs big block) is a bad idea, because the big blocks add latency, which is bad for your small-block IO
[9:49] * bkopilov (~bkopilov@bzq-79-179-34-87.red.bezeqint.net) has joined #ceph
[9:50] <atze> Do NVMe SSD journals offer a solution here, since they can handle multiple threads (64K)?
[9:51] * scuttlemonkey (~scuttle@nat-pool-rdu-t.redhat.com) Quit (Quit: Coyote finally caught me)
[9:51] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) Quit (Quit: Coyote finally caught me)
[9:57] * frednass (~fred@dn-infra-12.lionnois.univ-lorraine.fr) has joined #ceph
[9:58] * bkopilov (~bkopilov@bzq-79-179-34-87.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[9:58] * bkopilov (~bkopilov@bzq-79-177-155-2.red.bezeqint.net) has joined #ceph
[9:59] * aszeszo (~aszeszo@geq177.internetdsl.tpnet.pl) has joined #ceph
[10:02] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:03] <T1w> cetex: any news on xfs defrag and performance?
[10:06] * MACscr (~Adium@2601:d:c800:de3:bd2b:87b2:8669:267f) has joined #ceph
[10:11] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:12] * fireD (~fireD@31.216.194.79) has joined #ceph
[10:13] * MACscr (~Adium@2601:d:c800:de3:bd2b:87b2:8669:267f) Quit (Quit: Leaving.)
[10:14] * bkopilov (~bkopilov@bzq-79-177-155-2.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[10:16] * RaidSoft (~Thononain@2WVAABLRS.tor-irc.dnsbl.oftc.net) Quit ()
[10:16] * bkopilov (~bkopilov@bzq-79-182-117-117.red.bezeqint.net) has joined #ceph
[10:16] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[10:18] <cetex> no. not yet. working on it
[10:18] <T1w> ok
[10:25] * jaank_ (~quassel@98.215.50.223) Quit (Read error: Connection reset by peer)
[10:26] * The1w (~jens@node3.survey-it.dk) has joined #ceph
[10:26] * atze (~oftc-webi@mail.nnm.nl) Quit (Remote host closed the connection)
[10:30] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) Quit (Quit: Ex-Chat)
[10:33] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[10:36] * getup (~getup@gw.office.cyso.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[10:37] * getup (~getup@gw.office.cyso.net) has joined #ceph
[10:42] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[10:43] * bkopilov (~bkopilov@bzq-79-182-117-117.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[10:44] * bkopilov (~bkopilov@bzq-109-66-98-43.red.bezeqint.net) has joined #ceph
[10:46] * galaxyAbstractor (~kalmisto@politkovskaja.torservers.net) has joined #ceph
[10:49] * fghaas (~florian@212095007024.public.telering.at) has joined #ceph
[10:50] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[10:52] * ingard (~cake@tu.rd.vc) has joined #ceph
[10:54] * MACscr (~Adium@2601:d:c800:de3:bd2b:87b2:8669:267f) has joined #ceph
[10:55] * bkopilov (~bkopilov@bzq-109-66-98-43.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[10:57] * fghaas (~florian@212095007024.public.telering.at) Quit (Ping timeout: 480 seconds)
[10:57] * aszeszo1 (~aszeszo@83-238-161-74.static.ip.netia.com.pl) has joined #ceph
[10:57] * aszeszo (~aszeszo@geq177.internetdsl.tpnet.pl) Quit (Read error: No route to host)
[10:59] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[11:03] * yanzheng1 (~zhyan@171.216.94.165) has joined #ceph
[11:04] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[11:04] * yanzheng (~zhyan@182.139.204.64) Quit (Ping timeout: 480 seconds)
[11:07] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) has joined #ceph
[11:12] * aszeszo (~aszeszo@geq177.internetdsl.tpnet.pl) has joined #ceph
[11:14] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) has joined #ceph
[11:14] * getup (~getup@gw.office.cyso.net) Quit (Read error: Connection reset by peer)
[11:16] * galaxyAbstractor (~kalmisto@3OZAAA5YT.tor-irc.dnsbl.oftc.net) Quit ()
[11:16] * ggg (~delcake@37.187.129.166) has joined #ceph
[11:19] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[11:19] * aszeszo1 (~aszeszo@83-238-161-74.static.ip.netia.com.pl) Quit (Ping timeout: 480 seconds)
[11:21] * shang (~ShangWu@175.41.48.77) has joined #ceph
[11:29] <ingard> 09:19 < ingard> too many PGs per OSD (477 > max 300)
[11:29] <ingard> 09:19 < ingard> i'm getting this since updating the mons to hammer (from firefly)
[11:29] <ingard> 09:19 < ingard> is it a problem?
[11:29] <ingard> loicd: ^
[11:29] <loicd> ingard: too many PGs per OSD (477 > max 300) is not a problem, it's just a warning. More than 300 PG per OSD on average will use more CPU.
[11:30] <ingard> but is it because of the upgrade?
[11:30] <loicd> As you add more OSD to your cluster this will go away.
[11:30] <ingard> we didnt change anything but the warning started appearing after monitor upgrade
[11:30] <loicd> ingard: it's a new warning indeed
[11:31] <ingard> aight
[11:32] <ingard> so about the new erasure coding algo. is it related to that?
[11:33] <ingard> anyway. is it possible to migrate old erasure pools to use the new algo?
[11:34] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) Quit (Quit: Ex-Chat)
[11:34] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) has joined #ceph
[11:36] * rotbeard (~redbeard@217.110.226.114) Quit (Quit: Leaving)
[11:42] <ingard> loicd: is there an easy way of telling which (if any) osds are still running an older version?
[11:44] <loicd> ingard: the warning is just a warning: the number of PG per OSD did not change as part of the upgrade. Only now you see that you should have less PG per OSD in order to not use as much CPU. If you don't see that your machines are using too much CPU you are fine.
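[Editor's note] The 477-PGs-per-OSD figure behind the warning loicd explains above comes from a simple ratio: each PG is stored on `size` OSDs, so the average per-OSD count is the sum of pg_num × size over all pools, divided by the OSD count. A minimal sketch (the pool numbers below are illustrative, not ingard's):

```python
def pgs_per_osd(pools, num_osds):
    """Average PG count per OSD.

    pools: iterable of (pg_num, size) pairs, one per pool;
    each PG is replicated onto `size` OSDs.
    """
    return sum(pg_num * size for pg_num, size in pools) / num_osds

# e.g. one pool with 4096 PGs and 3 replicas across 24 OSDs:
print(round(pgs_per_osd([(4096, 3)], 24)))  # 512 -> would trip the >300 warning
```

Adding OSDs lowers the ratio, which is why loicd says the warning goes away as the cluster grows.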
[11:45] <loicd> ingard: ceph tell osd.* version
[11:45] <loicd> will display the version of each osd
[11:46] * ggg (~delcake@2WVAABLWM.tor-irc.dnsbl.oftc.net) Quit ()
[11:46] * hoo (~hoo@firewall.netconomy.net) has joined #ceph
[11:47] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:48] <hoo> any news, why ceph.com is down
[11:48] <hoo> ?
[11:49] * aszeszo (~aszeszo@geq177.internetdsl.tpnet.pl) Quit (Quit: Leaving.)
[11:49] <hoo> or any other possibility to access the (brilliant) doc of ceph?
[11:50] * Scrin (~Redshift@195.169.125.226) has joined #ceph
[11:55] * aszeszo (~aszeszo@geq177.internetdsl.tpnet.pl) has joined #ceph
[11:55] * dgurtner (~dgurtner@178.197.231.228) Quit (Ping timeout: 480 seconds)
[11:55] * aszeszo (~aszeszo@geq177.internetdsl.tpnet.pl) Quit ()
[11:56] * RayTracer (~RayTracer@153.19.7.39) has joined #ceph
[12:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[12:05] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[12:05] * vbellur (~vijay@121.244.87.124) has joined #ceph
[12:08] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[12:09] * bene (~ben@c-75-68-96-186.hsd1.nh.comcast.net) has joined #ceph
[12:09] * MACscr (~Adium@2601:d:c800:de3:bd2b:87b2:8669:267f) Quit (Quit: Leaving.)
[12:11] * fireD_ (~fireD@188.125.15.171) has joined #ceph
[12:12] * Mika_c (~quassel@125.227.22.217) Quit (Remote host closed the connection)
[12:13] * fireD (~fireD@31.216.194.79) Quit (Ping timeout: 480 seconds)
[12:20] * Scrin (~Redshift@5NZAABOES.tor-irc.dnsbl.oftc.net) Quit ()
[12:20] * Scrin (~Revo84@ns365892.ip-94-23-6.eu) has joined #ceph
[12:26] * getup (~getup@gw.office.cyso.net) has joined #ceph
[12:28] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[12:33] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[12:34] <joelm> this ceph.com stuff is really frustrating, chaps; it might be worthwhile to farm the packages off to another host and leave the site on its own somewhere
[12:35] <joelm> breaking lots of updates
[12:35] * dgurtner (~dgurtner@178.197.231.228) has joined #ceph
[12:38] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[12:39] * fghaas (~florian@212095007024.public.telering.at) has joined #ceph
[12:45] * getup (~getup@gw.office.cyso.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[12:47] * getup (~getup@gw.office.cyso.net) has joined #ceph
[12:48] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) has joined #ceph
[12:50] * fghaas (~florian@212095007024.public.telering.at) Quit (Ping timeout: 480 seconds)
[12:50] * Scrin (~Revo84@2WVAABLZF.tor-irc.dnsbl.oftc.net) Quit ()
[12:50] * Quatroking (~Frostshif@tor-exit.server9.tvdw.eu) has joined #ceph
[12:53] * kefu is now known as kefu|afk
[12:58] * fghaas (~florian@212095007024.public.telering.at) has joined #ceph
[13:04] * bkopilov (~bkopilov@bzq-79-178-34-207.red.bezeqint.net) has joined #ceph
[13:09] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[13:09] * fghaas (~florian@212095007024.public.telering.at) Quit (Ping timeout: 480 seconds)
[13:14] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[13:15] * kefu|afk (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:16] <flaf> Hi,
[13:16] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[13:16] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[13:16] <flaf> I didn't know the "ceph tell osd.* version". It can be useful.
[13:17] <flaf> If I try "ceph tell mon.* version" (with hammer), all is OK except for one monitor where I have "mon.3: Error ENOENT: problem getting command descriptions from mon.3"
[13:17] <flaf> Is it normal?
[13:18] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[13:18] <flaf> (mon.3 is completely OK, the cluster is OK)
[13:20] * madkiss (~madkiss@chello080108036100.31.11.vie.surfer.at) has joined #ceph
[13:20] * Quatroking (~Frostshif@425AAALTK.tor-irc.dnsbl.oftc.net) Quit ()
[13:21] * _robbat2|irssi (nobody@www2.orbis-terrarum.net) Quit (Ping timeout: 480 seconds)
[13:25] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:35] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[13:36] * shang (~ShangWu@175.41.48.77) has joined #ceph
[13:37] <bd> joelm: that is why I apt-mirror the repository to a local machine ;)
[13:37] * shang (~ShangWu@175.41.48.77) Quit ()
[13:37] <joelm> bd: heh, I just use eu.ceph.com now :)
[13:38] <joelm> ganesha NFS with the Ceph FSAL is actually really quick (at least in some basic iso tests)
[13:38] <joelm> that with krb5 support etc
[13:38] * getup (~getup@gw.office.cyso.net) Quit (Ping timeout: 480 seconds)
[13:41] <Be-El> joelm: does ganesha act as nfs server on a standard ceph fs (which can also be mounted on other clients as well), or does it require exclusive access?
[13:41] * danieagle (~Daniel@177.138.223.106) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[13:44] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[13:44] * macjack (~macjack@122.146.93.152) has left #ceph
[13:46] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[13:46] * phoenix42 (~phoenix42@14.139.219.162) has joined #ceph
[13:48] * karnan (~karnan@121.244.87.117) has joined #ceph
[13:48] * bene (~ben@c-75-68-96-186.hsd1.nh.comcast.net) Quit (Ping timeout: 480 seconds)
[13:50] * OODavo (~HoboPickl@bolobolo1.torservers.net) has joined #ceph
[13:50] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[13:52] * bene (~ben@c-75-68-96-186.hsd1.nh.comcast.net) has joined #ceph
[13:53] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:53] <joelm> Be-El: I simply made the system that was serving nfs via ganesha have admin access, could probably be more specific with caps
[13:54] <joelm> that re-exports cephfs (you can choose the root path)
[13:54] <joelm> but via the FSAL on libcephfs, so it's actually a lot quicker than I expected (once I'd removed the debug flag!)
[13:54] <joelm> goes pretty much flat out on my 1G interface, testing on 10G next to see
[13:55] <Be-El> joelm: i'm currently looking for the best solution for our setup. we have a number of internal systems that can access ceph(fs) directly, and some systems that require secure access. the latter currently use nfsv4 with kerberos
[13:55] * bene (~ben@c-75-68-96-186.hsd1.nh.comcast.net) Quit ()
[13:55] <Be-El> joelm: so i may either re-export a cephfs mount point, or use ganesha with ceph fs directly?
[13:55] * nico_ch (~nc@flinux01.tu-graz.ac.at) has joined #ceph
[13:56] * nico_ch (~nc@flinux01.tu-graz.ac.at) Quit ()
[13:57] <flaf> Could you explain the point of exporting a cephfs via NFS? Because a client can mount a cephfs directly. Sorry, maybe I have not understood.
[13:58] <joelm> Be-El: you can point NFS clients at the system that's serving ganesha nfs.. that backs onto ceph
[13:58] * tupper_ (~tcole@rtp-isp-nat-pool1-1.cisco.com) Quit (Read error: Connection reset by peer)
[13:58] <Be-El> flaf: in my case the clients to be secured do not have direct access to ceph, so they need some kind of gateway
[13:58] <joelm> flaf: as you don't need to use cephfs.. anything that supports nfs
[13:59] * vbellur (~vijay@122.166.94.20) has joined #ceph
[13:59] <joelm> Be-El: you can implement nfs sec too, via sec=sys or krb5 etc. ACLs too
[13:59] <joelm> I found the gluster config packed with info - https://github.com/nfs-ganesha/nfs-ganesha/tree/master/src/FSAL/FSAL_GLUSTER
[13:59] <Be-El> joelm: i have no experience with ganesha yet. how stable is it (given the not-so-widespread use of pnfs)?
[14:00] <joelm> but took initial ideas from Widodh - http://blog.widodh.nl/2014/12/nfs-ganesha-with-libcephfs-on-ubuntu-14-04/
[14:00] <joelm> Be-El: no idea dude, literally rolled it this morning, seems to keep up so far, YMMV! :)
[14:00] * phoenix42_ (~phoenix42@14.139.219.162) has joined #ceph
[14:01] <joelm> I'm sure I can break it, one way or another :D
[14:01] * joelm has a knack
[14:01] * zhaochao (~zhaochao@111.161.77.236) Quit (Remote host closed the connection)
[14:01] <Be-El> just wait until cephfs beneath it breaks ;-)
[14:02] <joelm> heh, well, sure. I've actually not experienced that thus far. It's been pretty solid in my usage patterns
[14:02] <joelm> only issue ever with ceph was a PEBCAK
[14:02] <joelm> but yea, can understand :)
[14:02] <Be-El> today I managed to get a load of 300 due to ceph-fuse (firefly). it somehow lost its connection and started a new thread for each requesting application
[14:03] <Be-El> had to use the good old kill -9 to get rid of the fuse process
[14:03] <joelm> hehe, yea, good old -9
[14:03] <joelm> dieeeeeee
[14:04] <Be-El> cluster is currently rebalancing 15 tb due to a somewhat nasty error in my crush map
[14:06] * phoenix42 (~phoenix42@14.139.219.162) Quit (Ping timeout: 480 seconds)
[14:13] * MVenesio (~MVenesio@186.136.59.165) has joined #ceph
[14:14] * MACscr (~Adium@2601:d:c800:de3:b801:9545:3d65:8a14) has joined #ceph
[14:17] <flaf> Be-El: joelm: ok thx I understand the security reason.
[14:20] * OODavo (~HoboPickl@98EAAA9XO.tor-irc.dnsbl.oftc.net) Quit ()
[14:20] * phoenix42_ (~phoenix42@14.139.219.162) Quit (Ping timeout: 480 seconds)
[14:20] <alfredodeza> skullone: there are a lot of ways to get ceph-deploy
[14:20] <alfredodeza> ceph.com and its packages being just one of them
[14:21] <alfredodeza> ceph-deploy exists as a Python package on the Python Package Index (PyPI) and can be installed with Python installers
[14:21] <alfredodeza> skullone: ^ ^
[14:23] * fghaas (~florian@212095007024.public.telering.at) has joined #ceph
[14:24] * ganders (~root@190.2.42.21) has joined #ceph
[14:29] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[14:31] * via (~via@smtp2.matthewvia.info) has joined #ceph
[14:31] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[14:33] * The1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[14:38] * fitzdsl (~Romain@dedibox.fitzdsl.net) has joined #ceph
[14:38] * kefu (~kefu@114.92.111.70) has joined #ceph
[14:38] * fitzdsl (~Romain@dedibox.fitzdsl.net) has left #ceph
[14:39] * fireD (~fireD@178.160.95.199) has joined #ceph
[14:41] * fireD_ (~fireD@188.125.15.171) Quit (Ping timeout: 480 seconds)
[14:41] * MACscr (~Adium@2601:d:c800:de3:b801:9545:3d65:8a14) Quit (Quit: Leaving.)
[14:42] * hellertime (~Adium@a23-79-238-10.deploy.static.akamaitechnologies.com) has joined #ceph
[14:42] * danieagle (~Daniel@177.138.223.106) has joined #ceph
[14:46] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:47] <harmw> are there any mirrors for ceph-deploy to use for when ceph.com/git/.. is down?
[14:49] <alfredodeza> harmw: you need the git repository?
[14:49] <alfredodeza> that is hosted also in github
[14:49] <alfredodeza> github.com/ceph/ceph-deploy
[14:49] <harmw> well, ceph-deploy is trying to consume https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[14:49] <harmw> which fails
[14:49] <alfredodeza> ah
[14:49] <harmw> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[14:50] <harmw> :)
[14:50] <alfredodeza> right right
[14:50] * hellertime (~Adium@a23-79-238-10.deploy.static.akamaitechnologies.com) Quit (Ping timeout: 480 seconds)
[14:50] <alfredodeza> harmw: https://raw.githubusercontent.com/ceph/ceph/master/keys/release.asc
[14:51] * Bobby1 (~Kurimus@5.79.68.161) has joined #ceph
[14:51] <harmw> ok, lets see
[14:51] * _robbat2|irssi (nobody@www2.orbis-terrarum.net) has joined #ceph
[14:51] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[14:52] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:52] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:53] <alfredodeza> harmw: iirc there is also a way to tell ceph-deploy not to install keys
[14:53] <alfredodeza> so you might be able to workaround like that
[14:53] * alfredodeza verifies this looking at the install --help menu
[14:54] <harmw> ok, and thanks
[14:54] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[14:56] <alfredodeza> harmw: I *think* you want this:
[14:56] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[14:56] <alfredodeza> ceph-deploy install --gpg-url https://raw.githubusercontent.com/ceph/ceph/master/keys/release.asc {nodes}
[14:56] <alfredodeza> however, current repos are not working either
[14:56] <joelm> why not just fix ceph.com
[14:56] <harmw> thats exactly what I'm trying :)
[14:57] <alfredodeza> joelm: that would be too easy
[14:57] <joelm> :)
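A sketch of the workaround alfredodeza describes, assuming hammer-era ceph-deploy; `node1` and `node2` are placeholder hostnames, and the GitHub raw URL mirrors the key normally served from ceph.com. It needs network access and real nodes, so it is shown as a transcript rather than a runnable script:

```shell
# point ceph-deploy at the GitHub-hosted copy of the release key
ceph-deploy install \
    --gpg-url https://raw.githubusercontent.com/ceph/ceph/master/keys/release.asc \
    node1 node2

# or import the key manually on each node before installing
sudo rpm --import https://raw.githubusercontent.com/ceph/ceph/master/keys/release.asc
```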
[14:58] <harmw> what *is* the problem with ceph.com anyway, is that known?
[14:58] <alfredodeza> we've had a bunch of different issues
[14:59] <alfredodeza> it seems like every day is something new :(
[14:59] <alfredodeza> it is not straightforward to fix because there are so many different things there
[15:00] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[15:01] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:06] * vata (~vata@208.88.110.46) has joined #ceph
[15:08] * lalatenduM (~lalatendu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:09] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[15:10] <joelm> just make a varnish cache with huuuuuge TTL :)
[15:11] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[15:12] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[15:12] * tupper (~tcole@rtp-isp-nat-pool1-1.cisco.com) has joined #ceph
[15:12] <qstion> Hi, my ceph monitor have just crashed: http://paste2.org/68c7MMHI
[15:12] <qstion> a bug or what?
[15:19] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[15:20] * Bobby1 (~Kurimus@2FBAABF79.tor-irc.dnsbl.oftc.net) Quit ()
[15:20] * KristopherBel (~Teddybare@tor-exit.server9.tvdw.eu) has joined #ceph
[15:21] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:23] * MVenesio (~MVenesio@186.136.59.165) Quit (Quit: Leaving...)
[15:26] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:33] <harmw> alfredodeza: is there a mirror for packages?
[15:33] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[15:34] <jluis> qstion, hard to say without 'debug ms = 1', but most likely yes
[15:37] <jluis> qstion, if you file a ticket for that, please add any logs you may have; if you are able to reproduce the assert reliably, please do so with 'debug ms = 1' and 'debug mon = 10'
[15:37] <jluis> well, 'debug ms = 10' would likely be better, given this is coming from the messenger itself
[15:38] * tupper (~tcole@rtp-isp-nat-pool1-1.cisco.com) Quit (Ping timeout: 480 seconds)
[15:38] <qstion> that was a random(ish) crash. other monitors in the cluster were rebooting etc. i doubt i will reproduce this on purpose
[15:39] * karnan (~karnan@106.51.133.93) has joined #ceph
[15:41] <qstion> if i mount same rbd image to two hosts (image is version 2, without shared flag)
[15:41] <qstion> could that corrupt data on image?
[15:41] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[15:48] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[15:49] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) has joined #ceph
[15:50] * KristopherBel (~Teddybare@98EAAA902.tor-irc.dnsbl.oftc.net) Quit ()
[15:50] * RaidSoft (~Aramande_@tor-exit-2.zenger.nl) has joined #ceph
[15:53] <janos> sure
[15:53] <janos> if you're not using a clustered filesystem on it
[15:54] <janos> if you format it with xfs, ext4 etc and mount it and use it in two places you can corrupt it
[15:55] <qstion> just as i thought!
[15:57] <joelm> yea, maybe use cLVM on it
[15:58] <joelm> (if you need $FILESYSTEM)
[15:58] * wschulze (~wschulze@38.96.12.2) has joined #ceph
[15:59] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Read error: Connection reset by peer)
[16:00] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[16:01] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[16:01] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has left #ceph
[16:01] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[16:03] * madkiss (~madkiss@chello080108036100.31.11.vie.surfer.at) Quit (Quit: Leaving.)
[16:03] * madkiss (~madkiss@vpn142.sys11.net) has joined #ceph
[16:06] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:06] * yanzheng1 (~zhyan@171.216.94.165) Quit (Quit: This computer has gone to sleep)
[16:07] * frednass (~fred@dn-infra-12.lionnois.univ-lorraine.fr) Quit (Read error: Connection reset by peer)
[16:07] * frednass (~fred@dn-infra-12.lionnois.univ-lorraine.fr) has joined #ceph
[16:08] * wushudoin (~wushudoin@209.132.181.86) has joined #ceph
[16:08] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[16:10] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[16:12] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) has joined #ceph
[16:14] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[16:15] <ganders> is the ceph website down?
[16:19] * madkiss1 (~madkiss@chello080108036100.31.11.vie.surfer.at) has joined #ceph
[16:20] * RaidSoft (~Aramande_@98EAAA912.tor-irc.dnsbl.oftc.net) Quit ()
[16:20] * AG_Scott (~clarjon1@orion.enn.lu) has joined #ceph
[16:21] * nils_ (~nils@doomstreet.collins.kg) Quit (Read error: Connection reset by peer)
[16:21] * madkiss (~madkiss@vpn142.sys11.net) Quit (Ping timeout: 480 seconds)
[16:21] <rotbeard> ganders, I think so. can't download the repo packages here either ;)
[16:23] * alram (~alram@38.96.12.2) has joined #ceph
[16:24] <ganders> rotbeard: :), know if it's going to be available soon?
[16:24] <frickler> for packages you can use eu.ceph.com
[16:24] <frickler> jluis: maybe change the topic of the channel? seems to be a FAQ currently ;)
[16:24] <joelm> maybe should set the channel topic
[16:25] * madkiss1 (~madkiss@chello080108036100.31.11.vie.surfer.at) Quit (Quit: Leaving.)
[16:25] <rotbeard> ganders, dunno.
[16:28] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[16:28] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:29] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[16:34] * fghaas (~florian@212095007024.public.telering.at) Quit (Ping timeout: 480 seconds)
[16:36] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[16:38] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[16:38] <flaf> I was reading this thread http://www.mail-archive.com/ceph-users@lists.ceph.com/msg18931.html, and it talks about "OSD without journal". Is it possible to have OSDs without a journal? I've never seen that before.
[16:39] <joelm> yes and no
[16:39] <flaf> Ah ;)
[16:39] <joelm> generally it'll mean on the disk itself
[16:39] * hellertime (~Adium@72.246.185.14) has joined #ceph
[16:39] <joelm> as a file - I use that way with ours
[16:39] <joelm> I don't use a journal device
[16:40] <frickler> loicd: regarding "too many PGs per OSD", is there a way to disable that new warning? keeps my alarming busy that the cluster is continously in HEALTH_WARN
[16:40] <joelm> flaf: oh, that mail is something different I guess
[16:41] <flaf> joelm: ok, with an OSD, you can have 1) a journal as regular file in the working dir of the OSD or 2) a journal in a raw partition. But there is always a journal for the OSD. No?
[16:41] <joelm> ZFS has lots of facilities that Ceph already implements.. I'd be interested to see it running on ZFS and properly leveraging ZFS goodies (and stripping out the Ceph implementations)
[16:41] <joelm> yea, I thought you meant no journal device specifically
[16:42] <joelm> you have stuff like ZiL in ZFS
[16:42] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[16:42] <flaf> joelm: so an OSD has always a journal, even if the fs is zfs or btrfs etc. etc. Is it correct?
[16:42] <joelm> afaik
[16:43] <joelm> but it doesn't need to be a separate SSD
[16:43] <joelm> can be on the same OSD
[16:45] <flaf> joelm: for instance, if I want to test btrfs as the OSD backend, should I put the journal as a regular file in the OSD working dir, or put the journal in a raw partition of the OSD disk?
[16:45] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[16:45] <loicd> frickler: you can set mon_pg_warn_max_per_osd to something other than the default
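As a ceph.conf fragment (a sketch; in hammer the default for this option is 300 PGs per OSD, and my understanding is that a value of 0 or less disables the warning entirely — the 500 below is only an illustrative choice):

```
[mon]
mon pg warn max per osd = 500
```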
[16:46] <xcezzz> flaf: you should be able to just specify the host:sd? and btrfs as the fs and it will take care of the journal creation
[16:48] <flaf> xcezzz: I'm not sure I understand. I don't use ceph-deploy, ceph-disk etc. to create an OSD working dir. Could you show me a complete command as an example?
[16:48] <xcezzz> oh im thinking ceph-deploy
[16:49] <flaf> Do you mean that, with btrfs, there is no journal file in the OSD working dir? (ie /var/lib/ceph/osd/ceph-$id/journal doesn't exist ?)
[16:49] <frickler> loicd: great, thx
[16:50] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) has joined #ceph
[16:50] * AG_Scott (~clarjon1@5NZAABOWP.tor-irc.dnsbl.oftc.net) Quit ()
[16:50] * Lunk2 (~Quackie@95.130.15.97) has joined #ceph
[16:52] * hellertime (~Adium@72.246.185.14) Quit (Quit: Leaving.)
[16:52] <joelm> flaf: when you use ceph-deploy you don't pass it a journal, just a disk - it does the rest
[16:53] <joelm> can make any fs
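For example, with hammer-era ceph-deploy the filesystem can be chosen at OSD creation time (a transcript sketch; `node1` and `sdb` are placeholders, and the journal is carved out of the same disk automatically):

```shell
ceph-deploy osd create --fs-type btrfs node1:sdb
```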
[16:53] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[16:54] * evilrob0_ (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[16:54] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[16:55] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[16:56] <flaf> ok, I see. And if I understand correctly, there is always a journal for the osd. No exception with btrfs, zfs etc. as fs backend.
[16:57] * alram (~alram@38.96.12.2) Quit (Ping timeout: 480 seconds)
[16:59] <Be-El> flaf: theoretically you can configure an osd to use no journal at all. but that's neither recommended nor configurable with the default tools like ceph-deploy or ceph-disk
[17:00] <flaf> But in this case, I wonder what is the author talking about when he says "and I wonder if the osd performance without a journal would be 2x better"
[17:01] <Be-El> flaf: each write in a non-btrfs setup has to be written into the journal first and then to the osd storage itself
[17:02] <Be-El> flaf: so without a journal you might have a better write performance
[17:02] <Be-El> but there's a reason to have a journal...
[17:02] <flaf> Be-El: ah, do you have some links about how to configure that?
[17:02] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) Quit (Quit: bye!)
[17:03] <Be-El> flaf: i'm searching...there's been a post to either ceph-devel or ceph-user some time ago describing the journals
[17:03] * evilrob0_ (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:03] * karnan (~karnan@106.51.133.93) Quit (Ping timeout: 480 seconds)
[17:03] <flaf> Be-El: and with btrfs, is the journal recommended too ?
[17:03] <Be-El> flaf: that's a good question
[17:04] <Be-El> flaf: for btrfs, you write to journal and fs in parallel
[17:04] <Be-El> flaf: http://irq0.org/articles/ceph/journal
[17:04] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:04] <Be-El> i'm wondering whether the consistency argument still holds for journaling filesystems like btrfs
[17:05] <flaf> me too.
[17:05] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:06] <flaf> Because, in fact, my goal is just to test ceph with btrfs and I just want to know the specific configuration I should put in this case.
[17:06] <Be-El> flaf: for production you should definitely use a journal.
[17:06] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[17:07] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) Quit (Quit: Ex-Chat)
[17:08] <flaf> Ok. I see. And with btrfs, in this case (osd journal), the benefit is just simultaneous writing to journal and storage. Is that correct?
[17:08] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[17:10] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[17:10] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) has joined #ceph
[17:11] <hoo> i want to switch my installation to ceph-deploy
[17:11] <hoo> on centos 7.1
[17:11] <hoo> using eu.ceph.com/rpm-hammer
[17:11] <hoo> problem
[17:11] <hoo> ceph-deploy stops on every remote command per ssh
[17:12] * karnan (~karnan@106.51.132.72) has joined #ceph
[17:12] <hoo> i can do ssh root@mon1 sudo ls
[17:12] <flaf> Be-El: oh in fact you already answered my question with "for btrfs, you write to journal and fs in parallel". Sorry.
[17:12] <hoo> but all ceph-deploy commands hang
[17:13] <hoo> this task on the remote machine is never finishing: python -c import sys;exec(eval(sys.stdin.readline()))
[17:13] <flaf> Be-El: joelm: xcezzz: thx for your help, it's more clear for me now.
[17:13] <Be-El> flaf: afaik the btrfs "driver" for osds uses snapshots to ensure the atomicity of transactions. this is also the reason why btrfs is somewhat unstable with ceph. the developers haven't tested that high a frequency of snapshots
[17:13] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: Lost terminal)
[17:14] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[17:14] * PerlStalker (~PerlStalk@162.220.127.20) has joined #ceph
[17:14] <hoo> a typical output of a stuck ceph-deploy:
[17:14] <hoo> augeas{ "bond_interface" :
[17:14] <hoo> context => "/files/etc/network/interfaces",
[17:14] <hoo> changes => [
[17:14] <hoo> "set auto[child::1 = 'bond0']/1 bond0",
[17:14] <hoo> "set iface[. = 'bond0'] bond0",
[17:14] <hoo> "set iface[. = 'bond0']/family inet",
[17:14] <hoo> "set iface[. = 'bond0']/method static",
[17:14] <hoo> "set iface[. = 'bond0']/address 192.168.110.42",
[17:14] <hoo> "set iface[. = 'bond0']/netmask 255.255.255.0",
[17:14] <hoo> "set iface[. = 'bond0']/network 192.168.110.0",
[17:14] <hoo> "set iface[. = 'bond0']/gateway 192.168.110.240",
[17:14] <hoo> "set iface[. = 'bond0']/slaves 'eth0 eth1'",
[17:14] <hoo> "set iface[. = 'bond0']/bound_mode active-backup",
[17:15] <hoo> "set iface[. = 'bond0']/bond_miimon 100",
[17:15] <hoo> "set iface[. = 'bond0']/bond_downdelay 200",
[17:15] <hoo> "set iface[. = 'bond0']/bond_updelay 200",
[17:15] <hoo> ],
[17:15] <hoo> }
[17:15] <hoo> and Puppet will take care of creating the resource and updating it. Be aware that the interfaces and options not managed by puppet are left untouched.
[17:15] <hoo> sorry wrong c&p
[17:15] <hoo> ceph-deploy -v install hwdev04
[17:15] <hoo> [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[17:15] <hoo> [ceph_deploy.cli][INFO ] Invoked (1.5.23): /usr/bin/ceph-deploy -v install hwdev04
[17:15] <hoo> [ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts hwdev04
[17:15] <hoo> [ceph_deploy.install][DEBUG ] Detecting platform for host hwdev04 ...
[17:15] <hoo> and on this position it stops... forever
[17:15] <flaf> Be-El: And if I use btrfs, do I need any specific settings in ceph.conf, or does Ceph automatically detect the fs backend?
[17:15] <Be-El> hoo: that command executes whatever is passed to stdin
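What that one-liner does can be reproduced locally (a sketch; `python3` stands in for the remote `python`, and the payload is just an illustrative print, not real ceph-deploy traffic):

```shell
# ceph-deploy's remote bootstrap reads one line from stdin, eval()s it
# into a string, then exec()s that string as Python code.
out=$(printf '%s\n' '"print(6 * 7)"' | python3 -c 'import sys;exec(eval(sys.stdin.readline()))')
echo "$out"   # 42
```

So a hang at that stage usually means the remote python never received (or never finished reading) its stdin line, e.g. a broken ssh channel, rather than a problem on the remote host itself.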
[17:15] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[17:15] * DV__ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[17:16] <Be-El> flaf: it detects the filesystem on the storage device. you have to specify the filesystem either at creation time or as default in ceph.conf
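A ceph.conf sketch of such defaults (the option names are the hammer-era ones; the values are illustrative assumptions, not recommendations):

```
[osd]
osd mkfs type = btrfs
osd mount options btrfs = rw,noatime
osd journal = /var/lib/ceph/osd/$cluster-$id/journal
osd journal size = 5120        ; in MB
```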
[17:16] <xcezzz> hoo: isnt ceph.com down? installs from their repo arent going to work
[17:16] <hoo> Be-El: how can I test, if it works
[17:16] <hoo> Bel-El: f.e. an ssh command on the host with ceph-deploy
[17:17] <Be-El> hoo: no clue. i've neither used ceph-deploy nor some redhat-based system yet
[17:17] <hoo> xcezzz: I use eu.ceph.com
[17:18] * sbfox (~Adium@72.2.49.50) has joined #ceph
[17:18] <xcezzz> ahh
[17:18] <hoo> also I use --no-adjust-repos
[17:19] <hoo> and install ceph-release manually from eu.ceph.com
[17:19] <hoo> and afterwards
[17:19] <hoo> sed -i -r 's/\/ceph.com\//\/eu.ceph.com\//g' /etc/yum.repos.d/ceph.repo
[17:19] <hoo> then ceph-deploy install (on localhost) works
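hoo's sed rewrite can be checked against a throwaway copy of the repo file (a sketch; the file contents below are illustrative, not the real /etc/yum.repos.d/ceph.repo, and GNU sed is assumed for `-i -r`):

```shell
# Rewrite ceph.com URLs to the eu.ceph.com mirror, as hoo does,
# but against a temp file instead of /etc/yum.repos.d/ceph.repo.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[ceph]
name=Ceph packages for x86_64
baseurl=http://ceph.com/rpm-hammer/el7/x86_64
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
EOF
sed -i -r 's/\/ceph.com\//\/eu.ceph.com\//g' "$repo"
baseurl=$(grep '^baseurl=' "$repo")
echo "$baseurl"   # baseurl=http://eu.ceph.com/rpm-hammer/el7/x86_64
rm -f "$repo"
```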
[17:19] <flaf> Ok, I see: the configuration in ceph.conf is just used by the ceph tools when I create an OSD working dir (but in my case I do that manually without the ceph tools). Ok, thx Be-El. No more questions. ;)
[17:20] * Lunk2 (~Quackie@5NZAABOYL.tor-irc.dnsbl.oftc.net) Quit ()
[17:20] * Plesioth (~Szernex@195.169.125.226) has joined #ceph
[17:21] <hoo> actually, still no remote command works with ceph-deploy at the moment
[17:21] <hoo> anybody an idea
[17:21] * litwol (~litwol@167.114.20.116) has joined #ceph
[17:23] * sbfox1 (~Adium@72.2.49.50) has joined #ceph
[17:23] * ajazdzewski (~ajazdzews@p4FC8E49E.dip0.t-ipconnect.de) has joined #ceph
[17:24] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[17:24] <joelm> hoo: and the ssh keys are ok?
[17:25] <hoo> joel: yes
[17:25] <hoo> joel: found out that it works from another centos 7.1 host
[17:25] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[17:26] * scuttle|afk is now known as scuttlemonkey
[17:26] * sbfox (~Adium@72.2.49.50) Quit (Ping timeout: 480 seconds)
[17:26] <hoo> joel: seems something with the ceph-deploy machine is wrong
[17:26] <hoo> so thanks so far
[17:26] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[17:26] <joelm> good stuff
[17:27] * oro (~oro@209.249.118.67) has joined #ceph
[17:29] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:30] <litwol> Hello
[17:30] * rdas (~rdas@121.244.87.116) has joined #ceph
[17:30] <litwol> I am curious to find out what's happening with ceph.com
[17:30] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[17:30] <joelm> I think we all are litwol :)
[17:31] <joelm> perhaps a topic should be set on the channel
[17:31] <Be-El> litwol: the running gag is: they upgraded to hammer
[17:31] * litwol slams hand on table loling
[17:33] <Be-El> well, they should definitely find out what's wrong with the hosting. an inaccessible web page is not a good sign for a software project....
[17:33] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[17:34] <xcezzz> lol
[17:34] <litwol> would suck more if web files are hosted off... /me not gonna say it
[17:35] * ajazdzewski (~ajazdzews@p4FC8E49E.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[17:35] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[17:39] <flaf> Ah sorry Be-El I have another question: with a classical fs (ie XFS), it's recommended to put the journal in the first partition of the OSD disk (I suppose I have no SSD etc). So the journal is in a separate partition. But is that still valid with btrfs? Maybe it's better to put the journal in the osd working dir as a regular file, isn't it?
[17:40] <Be-El> flaf: the reason to put the journal at the start of the disk is the fact that the first tracks on the disk are considered faster (angular speed of sectors etc.)
[17:40] <flaf> yes indeed.
[17:40] <Be-El> flaf: if you use a file for the journal, it might be distributed across the whole disk, resulting in a lot more seek operations
[17:40] <Be-El> this is independent of btrfs/xfs/ext4
[17:41] <joelm> Be-El: that would be classic fragmentation though, no?
[17:41] <flaf> But with btrfs, if I want to have parallel writing in journal and OSD storage, should I put journal and storage in the same btrfs fs?
[17:42] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[17:42] * fghaas (~florian@194.112.182.214) has joined #ceph
[17:42] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[17:42] <flaf> If the journal is in a separate raw partition (with fs), is there still parallel writing in journal and osd storage?
[17:43] <flaf> s/with/without/
[17:43] <flaf> (I hope my question is clear)
[17:43] <Be-El> joelm: fragmentation is another problem, yes. but a filesystem does not necessarily have to allocate the space for a file in a single, contiguous extent
[17:44] <Be-El> flaf: writeahead/parallel journal writing depends only on the filesystem of the storage afaik
[17:44] <Be-El> flaf: whether the journal is a file or a raw partition doesn't matter
[17:45] <joelm> Be-El: single files, I assume, are? It'd be fairly idiotic to fragment at creation?
[17:45] <joelm> when you have multiple, sure
[17:45] <Be-El> flaf: if you put the journal on a btrfs partition as a file, you will definitely suffer in several ways. filesystem overhead on one hand, and the COW-nature of btrfs on the other
[17:45] <flaf> Ok. I thought that to benefit from the parallelism, the journal and the storage must be in the same btrfs fs.
[17:45] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[17:45] <Be-El> joelm: if enough space is available the fs should allocate them in a single extent, yes. but i wouldn't bet on it
[17:46] <joelm> in this context though, new OSD, fresh journal...
[17:47] <Be-El> joelm: it might even be more efficient to stripe the journal across the filesystem (assuming platter beneath it)
[17:48] <Be-El> joelm: the expected seek distance between journal and data should shrink
[17:48] <joelm> yea, good point
[17:48] <flaf> Thx Be-El. I think I have enough information to try tests... when I have time. ;)
[17:49] * joelm wonders how much of the journal file can live in the disk's cache (onboard one)
[17:49] <joelm> probably not much
[17:50] <Be-El> joelm: i would prefer small pci-e devices with standard ram and a battery
[17:50] <joelm> yea, just a thought experiment really :)
[17:50] * Plesioth (~Szernex@98EAAA95W.tor-irc.dnsbl.oftc.net) Quit ()
[17:50] * dgurtner (~dgurtner@178.197.231.228) Quit (Ping timeout: 480 seconds)
[17:53] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[17:54] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[17:55] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: Lost terminal)
[17:56] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[17:56] * oro (~oro@209.249.118.67) Quit (Ping timeout: 480 seconds)
[17:57] * reed (~reed@2602:244:b653:6830:2c27:126:1cc:4840) has joined #ceph
[17:58] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:00] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[18:00] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[18:00] * karnan (~karnan@106.51.132.72) Quit (Ping timeout: 480 seconds)
[18:00] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[18:01] * puffy (~puffy@50.185.218.255) has joined #ceph
[18:01] * ifur (~osm@0001f63e.user.oftc.net) Quit ()
[18:02] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[18:02] * bandrus (~brian@128.sub-70-211-79.myvzw.com) has joined #ceph
[18:04] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[18:05] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[18:13] * wushudoin (~wushudoin@209.132.181.86) Quit (Quit: Leaving)
[18:13] * wushudoin (~wushudoin@209.132.181.86) has joined #ceph
[18:13] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:13] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Quit: leaving)
[18:14] * kawa2014 (~kawa@2-229-47-79.ip195.fastwebnet.it) has joined #ceph
[18:14] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[18:16] * joef (~Adium@2620:79:0:207:4078:b381:6577:dad2) has joined #ceph
[18:18] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[18:19] * joef1 (~Adium@2620:79:0:207:843d:27e2:ef7b:16b7) has joined #ceph
[18:20] * Salamander_ (~hyst@tor-proxy-readme.cloudexit.eu) has joined #ceph
[18:21] <championofcyrodi> "HGST will ship a 10TB helium drive in the second half of 2015" -http://www.zdnet.com/article/hgst-goes-all-in-on-helium-drives/
[18:22] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) has joined #ceph
[18:23] * RayTracer (~RayTracer@153.19.7.39) Quit (Remote host closed the connection)
[18:24] * Hemanth (~Hemanth@117.192.237.203) has joined #ceph
[18:24] * joef (~Adium@2620:79:0:207:4078:b381:6577:dad2) Quit (Ping timeout: 480 seconds)
[18:24] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: Lost terminal)
[18:26] * joef1 (~Adium@2620:79:0:207:843d:27e2:ef7b:16b7) Quit (Quit: Leaving.)
[18:26] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) has joined #ceph
[18:26] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[18:27] * p66kumar (~p66kumar@74.119.205.248) has joined #ceph
[18:27] * alram (~alram@38.96.12.2) has joined #ceph
[18:27] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[18:27] * joef (~Adium@2620:79:0:207:d83a:1568:5c79:36c3) has joined #ceph
[18:28] * rldleblanc (~rdleblanc@69-195-66-44.unifiedlayer.com) Quit (Quit: Konversation terminated!)
[18:28] * elder (~elder@50.250.13.174) Quit (Quit: Leaving)
[18:29] <Vivek> loicd: Are you there ?
[18:29] * joef (~Adium@2620:79:0:207:d83a:1568:5c79:36c3) has left #ceph
[18:29] <Vivek> loicd: I have managed to integrate a vSphere host with Ceph.
[18:30] <Vivek> loicd: Now checking whether a vCenter integration is possible.
[18:31] * joef1 (~Adium@138-72-131-66.pixar.com) has joined #ceph
[18:32] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[18:33] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[18:33] <loicd> Vivek: I'm happy for you :-)
[18:33] * bandrus1 (~brian@2602:306:cccf:f389:e0c1:2f18:c453:a698) has joined #ceph
[18:33] * bandrus (~brian@128.sub-70-211-79.myvzw.com) Quit (Ping timeout: 480 seconds)
[18:35] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: Lost terminal)
[18:36] * Hemanth (~Hemanth@117.192.237.203) Quit (Ping timeout: 480 seconds)
[18:37] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[18:37] * davidz1 (~davidz@cpe-23-242-189-171.socal.res.rr.com) has joined #ceph
[18:37] * davidzlap (~Adium@cpe-23-242-189-171.socal.res.rr.com) Quit (Quit: Leaving.)
[18:38] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[18:40] * Hemanth (~Hemanth@117.192.237.203) has joined #ceph
[18:42] * wschulze (~wschulze@38.96.12.2) Quit (Quit: Leaving.)
[18:44] * hellertime (~Adium@72.246.185.14) has joined #ceph
[18:44] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has left #ceph
[18:44] * ircolle (~Adium@166.170.40.213) has joined #ceph
[18:46] * qstion (~qstion@37.157.144.44) Quit (Remote host closed the connection)
[18:47] * ircolle-afk (~Adium@2601:1:a580:1735:507e:4aaf:e5ad:c905) Quit (Ping timeout: 480 seconds)
[18:50] * Salamander_ (~hyst@2WVAABMG6.tor-irc.dnsbl.oftc.net) Quit ()
[18:50] * Aal (~Kwen@chomsky.torservers.net) has joined #ceph
[18:51] <litwol> yey site is back
[18:51] * alram (~alram@38.96.12.2) Quit (Quit: leaving)
[18:51] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:54] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[18:58] * BManojlovic (~steki@cable-89-216-238-192.dynamic.sbb.rs) has joined #ceph
[19:01] * ircolle (~Adium@166.170.40.213) Quit (Ping timeout: 480 seconds)
[19:01] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[19:02] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[19:02] * ifur_ (~osm@hornbill.csc.warwick.ac.uk) has joined #ceph
[19:02] * ifur (~osm@0001f63e.user.oftc.net) Quit (Remote host closed the connection)
[19:02] * mgolub (~Mikolaj@91.225.202.153) has joined #ceph
[19:03] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit ()
[19:04] * joef1 (~Adium@138-72-131-66.pixar.com) Quit (Quit: Leaving.)
[19:05] * wkennington (~william@76.77.180.204) Quit (Remote host closed the connection)
[19:05] * ifur_ is now known as ifur
[19:06] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) has joined #ceph
[19:08] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[19:08] * bandrus1 (~brian@2602:306:cccf:f389:e0c1:2f18:c453:a698) Quit (Quit: Leaving.)
[19:08] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) has joined #ceph
[19:09] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: Lost terminal)
[19:09] * ircolle (~Adium@2601:1:a580:1735:d5cc:ddc2:9fe9:bef1) has joined #ceph
[19:09] * wkennington (~william@76.77.180.204) has joined #ceph
[19:10] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) has joined #ceph
[19:11] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[19:11] * ifur (~osm@0001f63e.user.oftc.net) Quit (Remote host closed the connection)
[19:11] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[19:16] * bandrus (~brian@128.sub-70-211-79.myvzw.com) has joined #ceph
[19:16] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[19:19] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:19] * hellertime (~Adium@72.246.185.14) Quit (Quit: Leaving.)
[19:20] * Aal (~Kwen@2FBAABGGQ.tor-irc.dnsbl.oftc.net) Quit ()
[19:20] * vegas3 (~Kayla@edwardsnowden0.torservers.net) has joined #ceph
[19:23] * Hemanth (~Hemanth@117.192.237.203) Quit (Ping timeout: 480 seconds)
[19:26] * fghaas (~florian@194.112.182.214) Quit (Quit: Leaving.)
[19:27] * kawa2014 (~kawa@2-229-47-79.ip195.fastwebnet.it) Quit (Quit: Leaving)
[19:27] * lalatenduM (~lalatendu@122.172.106.120) has joined #ceph
[19:29] * kawa2014 (~kawa@2-229-47-79.ip195.fastwebnet.it) has joined #ceph
[19:29] * bkopilov (~bkopilov@bzq-79-178-34-207.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[19:30] * bkopilov (~bkopilov@bzq-79-177-203-226.red.bezeqint.net) has joined #ceph
[19:33] * ircolle (~Adium@2601:1:a580:1735:d5cc:ddc2:9fe9:bef1) Quit (Ping timeout: 480 seconds)
[19:33] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[19:36] * kawa2014 (~kawa@2-229-47-79.ip195.fastwebnet.it) Quit (Quit: Leaving)
[19:36] * sbfox1 (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[19:37] * mattronix (~quassel@fw1.sdc.mattronix.nl) Quit (Remote host closed the connection)
[19:38] * mattronix (~quassel@2a01:7c8:aab8:616:5054:ff:fe89:506b) has joined #ceph
[19:38] * pdrakewe_ (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[19:40] * mattronix (~quassel@2a01:7c8:aab8:616:5054:ff:fe89:506b) Quit (Remote host closed the connection)
[19:42] * mattronix (~quassel@2a01:7c8:aab8:616:5054:ff:fe89:506b) has joined #ceph
[19:42] <harmw> why is my brand-new cluster switching to HEALTH_WARN after creating a new pool?
[19:42] * bkopilov (~bkopilov@bzq-79-177-203-226.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[19:43] <harmw> # ceph osd pool create volumes 64
[19:43] <harmw> health HEALTH_WARN
[19:43] <harmw> 64 pgs stuck inactive
[19:43] <harmw> 64 pgs stuck unclean
[19:43] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (Ping timeout: 480 seconds)
[19:44] * fghaas (~florian@213162068002.public.t-mobile.at) has joined #ceph
[19:46] * sbfox (~Adium@72.2.49.50) has joined #ceph
[19:47] * bkopilov (~bkopilov@bzq-109-65-10-75.red.bezeqint.net) has joined #ceph
[19:47] * ircolle (~Adium@2601:1:a580:1735:2dba:e8d5:cdbd:97c0) has joined #ceph
[19:49] <PerlStalker> harmw: Not enough osds to meet the replication requirements, perhaps.
[19:50] * vegas3 (~Kayla@2WVAABMKU.tor-irc.dnsbl.oftc.net) Quit ()
[19:50] * kalleeen (~AG_Scott@tor-exit.squirrel.theremailer.net) has joined #ceph
[19:50] <skullone> is there anything i can do to help ceph.com hosting? i have AWS and Digital Ocean boxes which can do caching, or host mirrors :)
[19:51] <Tetard> lasceph.com
[19:51] * Tetard facepalms
[19:52] <skullone> also, my employer has large datacenters, with spare capacity (its harder to provision here than AWS though)
[19:52] <skullone> so if capacity or bandwidth are an issue, i can definitely help
[19:52] <harmw> PerlStalker: good one, though I've changed the size and min_size to 2 (since this cluster currently runs on 2 OSDs)
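PerlStalker's suggestion can be checked and applied from the CLI. A minimal sketch, assuming the pool is named `volumes` as created above (the default replication size is 3, so a 2-OSD cluster cannot place all replicas until the pool size is lowered):

```shell
# Inspect the current replication requirements for the pool.
ceph osd pool get volumes size
ceph osd pool get volumes min_size

# Lower them to match a 2-OSD cluster.
ceph osd pool set volumes size 2
ceph osd pool set volumes min_size 1   # min_size == size would block I/O whenever one OSD is down

# List any PGs that are still stuck.
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean
```

Note that if both OSDs sit on the same host, the default CRUSH rule (one replica per host) will still leave PGs undersized even with size 2.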
[19:53] <skullone> also know people at Dyn that could hook you up with a DNS host, so you arent stuck with dreamhost DNS
[19:55] * fghaas (~florian@213162068002.public.t-mobile.at) Quit (Quit: Leaving.)
[19:57] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[20:03] * sbfox (~Adium@72.2.49.50) has joined #ceph
[20:03] * puffy (~puffy@64.191.206.83) has joined #ceph
[20:03] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[20:05] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[20:06] <Vivek> loicd: Just CHAP authentication and mutual CHAP authentication are pending.
[20:07] * mgolub (~Mikolaj@91.225.202.153) Quit (Remote host closed the connection)
[20:08] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) has joined #ceph
[20:10] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) Quit ()
[20:10] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) has joined #ceph
[20:12] * thomnico (~thomnico@2a01:e35:8b41:120:a549:a73c:cbb5:eb2d) Quit (Quit: Ex-Chat)
[20:13] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Quit: leaving)
[20:14] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[20:16] * foxxx0 (~fox@2a01:4f8:200:216b::2) has joined #ceph
[20:17] * mgolub (~Mikolaj@91.225.202.153) has joined #ceph
[20:17] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) Quit (Quit: Connection closed for inactivity)
[20:18] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[20:20] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Quit: leaving)
[20:20] * kalleeen (~AG_Scott@3OZAAA6FX.tor-irc.dnsbl.oftc.net) Quit ()
[20:20] * fauxhawk (~AluAlu@enjolras.gtor.org) has joined #ceph
[20:21] * bkopilov (~bkopilov@bzq-109-65-10-75.red.bezeqint.net) Quit (Read error: No route to host)
[20:21] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[20:21] * karnan (~karnan@106.51.132.72) has joined #ceph
[20:22] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit ()
[20:23] * bitserker (~toni@88.87.194.130) Quit (Quit: Leaving.)
[20:23] * bitserker (~toni@88.87.194.130) has joined #ceph
[20:23] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[20:24] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit ()
[20:24] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[20:24] * daniel2_ (~daniel@12.164.168.117) Quit (Remote host closed the connection)
[20:27] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[20:29] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[20:31] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[20:32] * bitserker (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[20:34] <harmw> hm, is 'too many PGs per OSD (512 > max 300)' something bad?
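The warning harmw hits is roughly arithmetic: the monitors compare the average number of PG replicas per OSD against a warn threshold (300 by default at the time). A minimal sketch of that calculation; the helper name is made up for illustration:

```python
def pgs_per_osd(pools, num_osds):
    """Average number of PG replicas each OSD carries.

    pools: list of (pg_num, replication_size) tuples, one per pool.
    """
    total_pg_copies = sum(pg_num * size for pg_num, size in pools)
    return total_pg_copies / num_osds

# A 2-OSD cluster with size-2 pools totalling 512 PGs:
ratio = pgs_per_osd([(256, 2), (128, 2), (64, 2), (64, 2)], num_osds=2)
print(ratio)  # 512.0 -- above the default warn threshold of 300
```

It is a warning rather than an error: too many PGs per OSD mostly costs memory and peering overhead, and the usual fix is more OSDs or smaller pg_num on new pools (pg_num cannot be decreased on an existing pool).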
[20:35] * hellertime (~Adium@72.246.185.14) has joined #ceph
[20:36] * RayTracer (~RayTracer@host-81-190-2-156.gdynia.mm.pl) has joined #ceph
[20:36] * RayTracer (~RayTracer@host-81-190-2-156.gdynia.mm.pl) Quit (Remote host closed the connection)
[20:36] * sherlocked (~watson@14.139.82.6) has joined #ceph
[20:37] * sherlocked (~watson@14.139.82.6) Quit ()
[20:37] * hellertime (~Adium@72.246.185.14) Quit ()
[20:38] * bkopilov (~bkopilov@bzq-109-65-10-75.red.bezeqint.net) has joined #ceph
[20:43] * LeaChim (~LeaChim@host86-143-18-67.range86-143.btcentralplus.com) has joined #ceph
[20:48] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[20:49] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[20:50] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[20:50] * fauxhawk (~AluAlu@2WVAABMNW.tor-irc.dnsbl.oftc.net) Quit ()
[20:50] * dusti (~rcfighter@orion.enn.lu) has joined #ceph
[20:56] <cholcombe> ceph: i did some searching through calamari and ceph's wiki. Is there a way to get historical ceph df information currently? I believe the answer is no but I just wanted to verify
[20:58] <jcsp1> cholcombe: if you're using calamari, then the data is there, albeit not exposed in the UI. There's a URL that will give you a raw graphite dashboard that you can use to plot your own charts, it's /graphite/dashboard IIRC
[20:58] <cholcombe> interesting
[20:58] <cholcombe> ok that's good news
[20:58] <cholcombe> so it's logging every x seconds to graphite?
[20:59] <jcsp1> yes, if everything is installed properly then diamond is collecting stats on the ceph servers and sending them to graphite on the calamari server.
[21:00] <cholcombe> does calamari collect stats to a central place? it wasn't clear from the docs
[21:02] <jcsp1> "to graphite on the calamari server"...
[21:02] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[21:02] <cholcombe> ok
[21:02] <cholcombe> so single point of failure possibly
[21:05] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[21:06] <litwol> It would be an interesting use case to have a specialized virtual-machine/appliance auto-start on every enterprise-office workstation whenever an employee boots their computer; said computer would then join the ceph OSD fleet automatically.
[21:07] <litwol> "free" office-wide NAS with no additional infrastructure.
[21:15] * karnan (~karnan@106.51.132.72) Quit (Remote host closed the connection)
[21:20] * dusti (~rcfighter@5NZAABPEI.tor-irc.dnsbl.oftc.net) Quit ()
[21:20] * Rehevkor (~mrapple@hessel2.torservers.net) has joined #ceph
[21:21] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[21:22] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[21:33] <litwol> Is there a documentation page I can read that describes Ceph's process for re-balancing data when a pool grows by adding new OSDs?
[21:33] <litwol> i'm not having much luck finding it
[21:34] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[21:39] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:39] * rendar (~I@host38-179-dynamic.23-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:39] <florz> litwol: the CRUSH paper describes the concepts rather well
[21:40] <litwol> found reweight-by-utilization
[21:40] * BManojlovic (~steki@cable-89-216-238-192.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[21:40] <florz> litwol: http://ceph.com/papers/weil-crush-sc06.pdf
[21:40] <litwol> ty
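The core idea in the CRUSH paper is that placement is computed, not looked up: every client hashes an object against the weighted OSD list and independently arrives at the same answer, so adding OSDs only moves the data that now hashes onto them. An illustrative straw2-style selection in Python (not Ceph's actual implementation; the hash construction and function names are invented for the sketch):

```python
import hashlib
import math

def _unit_interval(key, item):
    """Hash (key, item) to a deterministic value in (0, 1]."""
    digest = hashlib.md5(f"{key}:{item}".encode()).hexdigest()
    return (int(digest[:8], 16) + 1) / 2**32

def straw2_select(key, osds):
    """Pick one OSD for `key` from a {name: weight} map, with no lookup table.

    Each OSD gets an independent pseudo-random "straw" ln(u)/weight; the
    longest straw (the draw closest to zero) wins.  Because every straw
    is computed independently, adding an OSD can only move keys onto the
    new OSD -- it never reshuffles keys between existing OSDs.
    """
    best_name, best_draw = None, None
    for name, weight in osds.items():
        draw = math.log(_unit_interval(key, name)) / weight
        if best_draw is None or draw > best_draw:
            best_name, best_draw = name, draw
    return best_name
```

Higher weight divides the (negative) log by a larger number, pulling the draw toward zero, so heavier OSDs win proportionally more keys; that weighting is what `reweight-by-utilization` adjusts.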
[21:42] * rendar (~I@host38-179-dynamic.23-79-r.retail.telecomitalia.it) has joined #ceph
[21:42] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[21:43] * alram (~alram@38.96.12.2) has joined #ceph
[21:43] * Aid2 (0c4515fd@107.161.19.109) has joined #ceph
[21:50] * Rehevkor (~mrapple@98EAABADK.tor-irc.dnsbl.oftc.net) Quit ()
[21:50] * kalmisto (~BillyBobJ@tor-exit0-readme.dfri.se) has joined #ceph
[21:54] * Aid2 (0c4515fd@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[22:06] * alram (~alram@38.96.12.2) Quit (Ping timeout: 480 seconds)
[22:07] * wschulze (~wschulze@38.96.12.2) has joined #ceph
[22:08] * sbfox (~Adium@72.2.49.50) has joined #ceph
[22:10] * dynamicudpate (~overonthe@199.68.193.54) has joined #ceph
[22:12] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has left #ceph
[22:15] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[22:15] * OnTheRock (~overonthe@199.68.193.54) Quit (Ping timeout: 480 seconds)
[22:18] * lalatenduM (~lalatendu@122.172.106.120) Quit (Quit: Leaving)
[22:20] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Remote host closed the connection)
[22:20] * kalmisto (~BillyBobJ@98EAABAD8.tor-irc.dnsbl.oftc.net) Quit ()
[22:20] * Shnaw (~Dinnerbon@tor-node.rutgers.edu) has joined #ceph
[22:29] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[22:37] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[22:47] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[22:50] * Shnaw (~Dinnerbon@5NZAABPJS.tor-irc.dnsbl.oftc.net) Quit ()
[22:50] * Sun7zu (~utugi____@edwardsnowden1.torservers.net) has joined #ceph
[22:53] * wschulze (~wschulze@38.96.12.2) Quit (Quit: Leaving.)
[22:56] * BManojlovic (~steki@cable-89-216-238-192.dynamic.sbb.rs) has joined #ceph
[22:57] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[23:01] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[23:03] * tobiash (~quassel@mail.bmw-carit.de) Quit (Quit: No Ping reply in 180 seconds.)
[23:03] * tobiash (~quassel@mail.bmw-carit.de) has joined #ceph
[23:07] * BManojlovic (~steki@cable-89-216-238-192.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:08] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:10] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[23:18] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[23:19] * oro (~oro@66-194-8-225.static.twtelecom.net) has joined #ceph
[23:19] <debian112> anyone noticed that "ceph fs new" command is not there?
[23:19] <debian112> or maybe it changed?
[23:20] <jcsp1> debian112: it was only added after firefly
[23:20] * Sun7zu (~utugi____@98EAABAF8.tor-irc.dnsbl.oftc.net) Quit ()
[23:20] * notarima (~spidu_@425AAAL2F.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:20] <jcsp1> it was added in giant iirc
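For reference, on giant and later the command jcsp1 mentions takes a filesystem name plus existing metadata and data pools. A sketch, with illustrative pool names and PG counts:

```shell
# Create the backing pools first (PG counts here are illustrative).
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64

# Tie them together as a filesystem (giant and later only).
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls
```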
[23:22] <debian112> so what is the recommended production version?
[23:22] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Read error: Connection reset by peer)
[23:22] <debian112> firefly still?
[23:23] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[23:23] * p66kumar (~p66kumar@74.119.205.248) Quit (Ping timeout: 480 seconds)
[23:23] <debian112> or giant?
[23:23] * alram (~alram@38.96.12.2) has joined #ceph
[23:24] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[23:24] * mgolub (~Mikolaj@91.225.202.153) Quit (Quit: away)
[23:28] * p66kumar (~p66kumar@74.119.205.248) has joined #ceph
[23:30] * fireD (~fireD@178.160.95.199) Quit (Ping timeout: 480 seconds)
[23:34] <m0zes> firefly is "LTS", giant is "stable", I'm using hammer (the newest LTS).
[23:35] <skullone> hrm getting a weird error on Hammer when deploying radosgw:
[23:35] <skullone> dev-cgw-01][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.dev-cgw-01 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.dev-cgw-01/keyring
[23:35] <skullone> [dev-cgw-01][ERROR ] 2015-04-15 17:34:52.464312 7ff395de8700 0 librados: client.bootstrap-rgw authentication error (1) Operation not permitted
[23:35] <skullone> [dev-cgw-01][ERROR ] Error connecting to cluster: PermissionError
[23:35] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:36] <skullone> can't quite figure out what's going on
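One common cause of that PermissionError on hammer is that the `client.bootstrap-rgw` key simply does not exist in the monitors' auth database (it is only created automatically for clusters deployed with a recent ceph-deploy). A hedged sketch of one possible fix; the output path and the `{mon-host}` placeholder are illustrative:

```shell
# On a node with the admin keyring, create the missing bootstrap key:
ceph auth get-or-create client.bootstrap-rgw mon 'allow profile bootstrap-rgw' \
    -o /var/lib/ceph/bootstrap-rgw/ceph.keyring

# Then re-collect the bootstrap keyrings on the ceph-deploy admin node:
ceph-deploy gatherkeys {mon-host}
```

Verify with `ceph auth list | grep bootstrap-rgw` before retrying the rgw deploy.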
[23:41] <debian112> m0zes thanks
[23:41] <debian112> What OS are most people using in production?
[23:42] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[23:42] <m0zes> I'm using centos 7.1 for the servers, a mix of os's for the clients. mostly gentoo.
[23:42] <debian112> I've been running a small cluster and am starting to order hardware for the real one.
[23:43] <skullone> m0zes: do you use ceph-deploy?
[23:43] <m0zes> skullone: yes, but not for radosgw.
[23:43] <debian112> I would like to use Debian
[23:44] <debian112> but I see most of the testing is with RHEL and Ubuntu; my test cluster is CentOS 7.1
[23:47] * blynch (~blynch@vm-nat.msi.umn.edu) Quit (Ping timeout: 480 seconds)
[23:49] <m0zes> being on a well-worn path for things as integral as storage can be useful. sometimes more useful than admin comfort.
[23:50] * notarima (~spidu_@425AAAL2F.tor-irc.dnsbl.oftc.net) Quit ()
[23:50] * Drezil (~Kyso_@politkovskaja.torservers.net) has joined #ceph
[23:51] * fsimonce (~simon@host178-188-dynamic.26-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[23:51] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[23:52] * PerlStalker (~PerlStalk@162.220.127.20) Quit (Quit: ...)
[23:54] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[23:59] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:59] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[23:59] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[23:59] * brad_mssw (~brad@66.129.88.50) Quit ()

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.