#ceph IRC Log

IRC Log for 2014-02-27

Timestamps are in GMT/BST.

[0:04] <zoltan> thanks for the help, bandrus!
[0:04] <zoltan> we'll see how it went in the morning.
[0:04] <zoltan> nighy night
[0:04] <zoltan> *nighty
[0:04] * zoltan (~nagyz@80-218-67-114.dclient.hispeed.ch) Quit ()
[0:21] <bandrus> night!
[0:21] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:24] * dereky (~derek@129-2-129-152.wireless.umd.edu) Quit (Quit: dereky)
[0:25] * The_Bishop (~bishop@2a02:2450:102f:e:1d43:71d7:cba7:96b5) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[0:45] * mo- (~mo@2a01:4f8:141:3264::3) Quit (Quit: leaving)
[0:46] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[0:47] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[0:50] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[0:51] * JoeGruher (~JoeGruher@134.134.139.70) Quit (Remote host closed the connection)
[0:52] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[0:57] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:59] * nwat (~textual@eduroam-228-225.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[1:07] * sprachgenerator (~sprachgen@130.202.135.187) Quit (Quit: sprachgenerator)
[1:08] * sputnik13 (~sputnik13@207.8.121.241) Quit (Ping timeout: 480 seconds)
[1:11] * ircolle (~Adium@2601:1:8380:2d9:f83c:b225:c6cf:cab3) Quit (Quit: Leaving.)
[1:12] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[1:14] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has left #ceph
[1:16] * Pedras (~Adium@216.207.42.132) Quit (Ping timeout: 480 seconds)
[1:16] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:19] * sroy (~sroy@96.127.230.203) has joined #ceph
[1:25] * Cube (~Cube@12.248.40.138) has joined #ceph
[1:30] * fedgoat (~fedgoat@cpe-68-203-10-64.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:30] * The_Bishop (~bishop@f055026026.adsl.alicedsl.de) has joined #ceph
[1:30] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[1:35] * ivotron (~ivotron@dhcp-59-237.cse.ucsc.edu) Quit (Remote host closed the connection)
[1:48] * keeperandy (~textual@c-71-200-84-53.hsd1.md.comcast.net) has joined #ceph
[1:49] * erice (~erice@50.240.86.181) has joined #ceph
[1:53] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[1:53] * sprachgenerator (~sprachgen@130.202.135.187) has joined #ceph
[1:58] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[2:00] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) Quit (Quit: Leaving.)
[2:04] * sroy (~sroy@96.127.230.203) Quit (Quit: Quitte)
[2:06] * sjustwork (~sam@2607:f298:a:607:753a:c942:f442:9cb2) Quit (Quit: Leaving.)
[2:09] * rmoe_ (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:09] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) has joined #ceph
[2:11] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[2:14] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[2:14] * BillK (~BillK-OFT@106-68-205-248.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[2:16] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:16] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:18] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:21] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[2:31] * thb (~me@2a02:2028:c2:71a0:6267:20ff:fec9:4e40) has joined #ceph
[2:31] * thb is now known as Guest1531
[2:36] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Ping timeout: 480 seconds)
[2:37] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[2:37] * shang (~ShangWu@175.41.48.77) has joined #ceph
[2:41] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[2:44] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:48] * glzhao (~glzhao@220.181.11.232) has joined #ceph
[2:52] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) Quit (Ping timeout: 480 seconds)
[2:57] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[3:00] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[3:00] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit ()
[3:01] * markbby (~Adium@168.94.245.3) has joined #ceph
[3:02] * sprachgenerator (~sprachgen@130.202.135.187) Quit (Quit: sprachgenerator)
[3:03] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Ping timeout: 480 seconds)
[3:06] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[3:15] * orion195 (~oftc-webi@213.244.168.133) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * bauruine (~bauruine@2a01:4f8:150:6381::545) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * verdurin (~adam@2001:8b0:281:78ec:e2cb:4eff:fe01:f767) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * asmaps (~quassel@2a03:4000:2:3c5::80) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * _are__ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * mschiff (~mschiff@mx10.schiffbauer.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * al (quassel@niel.cx) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * godog (~filo@0001309c.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * yeled (~yeled@spodder.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * scalability-junk (uid6422@id-6422.ealing.irccloud.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * sleinen1 (~Adium@2001:620:0:46:a0e9:db6b:dda9:3cb4) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * bandrus (~Adium@adsl-75-5-250-121.dsl.scrm01.sbcglobal.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * JCL (~JCL@2601:9:5980:39b:5dd1:36e8:d869:d82a) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * warrenSusui (~Warren@2607:f298:a:607:38fc:445b:1848:70d4) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * paradon_ (~thomas@60.234.66.253) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Meistarin (~coolguy@0001c3c8.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * sage (~quassel@2607:f298:a:607:c83:1b7e:4755:edc4) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * nyerup (irc@jespernyerup.dk) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * musca (musca@tyrael.eu) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * swizgard_ (~swizgard@port-87-193-133-18.static.qsc.de) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Elbandi (~ea333@elbandi.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * finster (~finster@cmdline.guru) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * kraken (~kraken@gw.sepia.ceph.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * toutour (~toutour@causses.idest.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * flaxy (~afx@78.130.174.164) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * schmee (~quassel@phobos.isoho.st) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ido (~ido@00014f21.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * dgc (~redacted@bikeshed.us) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jeremydei (~jdeininge@ip-64-139-50-114.sjc.megapath.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * baffle (baffle@jump.stenstad.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * garphy`aw (~garphy@frank.zone84.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * acaos (~zac@209.99.103.42) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * via (~via@smtp2.matthewvia.info) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * wattsmarcus5 (~mdw@aa2.linuxbox.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * kuu (~kuu@virtual362.tentacle.fi) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * dis (~dis@109.110.66.27) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ctd (~root@00011932.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * shang (~ShangWu@175.41.48.77) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Guest1531 (~me@2a02:2028:c2:71a0:6267:20ff:fec9:4e40) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * erice (~erice@50.240.86.181) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * The_Bishop (~bishop@f055026026.adsl.alicedsl.de) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jo0nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * diegows (~diegows@190.190.5.238) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * erkules (~erkules@port-92-193-25-63.dynamic.qsc.de) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * guppy (~quassel@guppy.xxx) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * abique (~abique@time2market1.epfl.ch) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Svedrin (svedrin@ketos.funzt-halt.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * capri_on (~capri@212.218.127.222) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ingard (~cake@tu.rd.vc) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * loicd (~loicd@bouncer.dachary.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * dlan_ (~dennis@116.228.88.131) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * DLange (~DLange@dlange.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * plantain (~plantain@106.187.96.118) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * toabctl (~toabctl@toabctl.de) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * cjh973 (~cjh973@ps123903.dreamhost.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * tnt (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Kdecherf (~kdecherf@shaolan.kdecherf.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Guest140 (~coyo@thinks.outside.theb0x.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * joao (~joao@a79-168-11-205.cpe.netcabo.pt) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * sig_wall (~adjkru@185.14.185.91) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * semitech1ical (~adam@ip70-176-51-26.ph.ph.cox.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * TMM (~hp@c97185.upc-c.chello.nl) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * xmltok (~xmltok@cpe-76-90-130-65.socal.res.rr.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Underbyte (~jerrad@pat-global.macpractice.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * JeffK (~JeffK@38.99.52.10) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * athrift (~nz_monkey@203.86.205.13) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * xinxinsh__ (~xinxinsh@fmdmzpr04-ext.fm.intel.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Esmil (esmil@horus.0x90.dk) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Hau_MI (~HauM1@login.univie.ac.at) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * stewiem20001 (~stewiem20@195.10.250.233) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ofu (ofu@dedi3.fuckner.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * mjevans (~mje@209.141.34.79) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ccooke (~ccooke@spirit.gkhs.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * bkero (~bkero@216.151.13.66) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * psieklFH_ (psiekl@wombat.eu.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * grifferz_ (~andy@bitfolk.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * lurbs (user@uber.geek.nz) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * chris_lu (~ccc2@bolin.Lib.lehigh.EDU) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * josef_ (~josef@kungsbacka.oderland.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * vhasi (vhasi@vha.si) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * zere (~matt@asklater.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * tomaw (tom@tomaw.noc.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * brambles (lechuck@s0.barwen.ch) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * madkiss (~madkiss@217.194.73.202) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Vacum (~vovo@88.130.221.198) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jpierre03 (~jpierre03@voyage.prunetwork.fr) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * gregsfortytwo (~Adium@2607:f298:a:607:9050:ab35:adc6:8d17) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * raso (~raso@deb-multimedia.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * NafNaf (~NafNaf@5.148.165.184) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * brother (foobaz@vps1.hacking.dk) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * twx_ (~twx@rosamoln.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * todin (tuxadero@kudu.in-berlin.de) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * wogri (~wolf@nix.wogri.at) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * LCF (ball8@193.231.broadband16.iol.cz) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Ormod (~valtha@ohmu.fi) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * erwyn (~erwyn@markelous.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ferai (~quassel@corkblock.jefferai.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * joelio (~Joel@88.198.107.214) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * liiwi (liiwi@idle.fi) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Anticimex (anticimex@95.80.32.80) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * tom2 (~jens@s11.jayr.de) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * rektide (~rektide@eldergods.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jamespage (~jamespage@culvain.gromper.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * keeperandy (~textual@c-71-200-84-53.hsd1.md.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * mdjp (~mdjp@213.229.87.114) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * wusui (~Warren@2607:f298:a:607:38fc:445b:1848:70d4) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * geekmush1 (~Adium@cpe-66-68-198-33.rgv.res.rr.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * c74d (~c74d@2002:4404:712c:0:76de:2bff:fed4:2766) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Warod (warod@lakka.kapsi.fi) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * [caveman] (~quassel@boxacle.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jnq (~jon@95.85.22.50) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * hughsaunders (~hughsaund@wherenow.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * zackc (~zackc@0001ba60.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * kwmiebach (sid16855@id-16855.charlton.irccloud.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Guest177 (~jeremy@ip23.67-202-99.static.steadfastdns.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * adam_ (~adam@rincewind.universalconflicts.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * nwf_ (~nwf@67.62.51.95) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * NaioN (stefan@andor.naion.nl) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * fretb (~fretb@frederik.pw) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Azrael (~azrael@terra.negativeblue.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * masterpe (~masterpe@2a01:670:400::43) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * dec (~dec@ec2-54-252-14-44.ap-southeast-2.compute.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * fred` (fred@earthli.ng) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Clbh (~benoit@cyllene.anchor.net.au) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * sileht (~sileht@gizmo.sileht.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Cube (~Cube@12.248.40.138) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * angdraug (~angdraug@12.164.168.117) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Fetch (fetch@gimel.cepheid.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jackhill (jackhill@pilot.trilug.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * elmo (~james@faun.canonical.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * dalegaard (~dalegaard@vps.devrandom.dk) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * SpamapS (~clint@184.105.137.237) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * wonko_be (bernard@november.openminds.be) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * darkfader (~floh@88.79.251.60) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * philips (~philips@ec2-23-22-175-220.compute-1.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jtangwk (~Adium@gateway.tchpc.tcd.ie) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * cephalobot (~ceph@ds2390.dreamservers.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * bdonnahue2 (~James@24-148-64-18.c3-0.mart-ubr2.chi-mart.il.cable.rcn.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jlogan (~Thunderbi@72.5.59.176) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * zjohnson_ (~zjohnson@guava.jsy.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * cfreak200 (~cfreak200@p4FF3E5A5.dip0.t-ipconnect.de) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * nhm (~nhm@65-128-159-155.mpls.qwest.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * beardo__ (~sma310@beardo.cc.lehigh.edu) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * hflai (hflai@alumni.cs.nctu.edu.tw) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * houkouonchi-work (~linux@12.248.40.138) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * markl (~mark@knm.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * blahnana (~bman@us1.blahnana.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Sargun (~sargun@208-106-98-2.static.sonic.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * glzhao (~glzhao@220.181.11.232) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * themgt (~themgt@pc-188-95-160-190.cm.vtr.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * leochill (~leochill@nyc-333.nycbit.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Meths (~meths@2.25.189.44) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * mmmucky (~mucky@mucky.socket7.org) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * ZyTer (~ZyTer@ghostbusters.apinnet.fr) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Nats (~Nats@telstr575.lnk.telstra.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * KindOne (kindone@0001a7db.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jdmason (~jon@192.55.54.38) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * hjorth (~hjorth@sig9.kill.dk) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * pmatulis_ (~peter@64.34.151.178) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * eternaleye (~eternaley@50.245.141.73) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * singler (~singler@zeta.kirneh.eu) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * sekon (~harish@li291-152.members.linode.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * iggy_ (~iggy@theiggy.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * rsFF (~rsFF@otherreality.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * mmgaggle (~kyle@cerebrum.dreamservers.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * wrale (~wrale@wrk-28-217.cs.wright.edu) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * svg (~svg@hydargos.ginsys.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Zethrok (~martin@95.154.26.34) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * WintermeW_ (~WintermeW@212-83-158-61.rev.poneytelecom.eu) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * `10_ (~10@juke.fm) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * rBEL (robbe@november.openminds.be) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * clayg_ (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * jerker (jerker@82ee1319.test.dnsbl.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * eightyeight (~atoponce@atoponce.user.oftc.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (graviton.oftc.net resistance.oftc.net)
[3:15] * sbadia (~sbadia@yasaw.net) Quit (graviton.oftc.net resistance.oftc.net)
[3:18] * saaby (~as@mail.saaby.com) Quit (Read error: Connection reset by peer)
[3:18] * joshuay04 (~joshuay04@rrcs-74-218-204-10.central.biz.rr.com) Quit (Ping timeout: 480 seconds)
[3:58] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[4:12] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Ping timeout: 480 seconds)
[4:12] * haomaiwa_ (~haomaiwan@106.38.255.123) Quit (Ping timeout: 480 seconds)
[4:56] * dmick (~dmick@2607:f298:a:607:d999:cecb:1914:20ac) Quit (Quit: Leaving.)
[5:57] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[7:24] * WarrenUsui (~Warren@2607:f298:a:607:38fc:445b:1848:70d4) has joined #ceph
[7:24] * JoeGruher (~JoeGruher@134.134.139.70) has joined #ceph
[7:24] * madkiss (~madkiss@217.194.73.202) has joined #ceph
[7:24] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[7:24] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) has joined #ceph
[7:24] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[7:24] * Cube (~Cube@66-87-130-210.pools.spcsdns.net) has joined #ceph
[7:24] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[7:24] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) has joined #ceph
[7:24] * Vacum_ (~vovo@88.130.200.249) has joined #ceph
[7:24] * mjevans (~mje@209.141.34.79) has joined #ceph
[7:24] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[7:24] * dmick (~dmick@2607:f298:a:607:35:242d:72e8:b7c) has joined #ceph
[7:24] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[7:24] * joshuay04 (~joshuay04@rrcs-74-218-204-10.central.biz.rr.com) has joined #ceph
[7:24] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[7:24] * haomaiwang (~haomaiwan@118.186.133.131) has joined #ceph
[7:24] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[7:24] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[7:24] * erkules_ (~erkules@port-92-193-7-20.dynamic.qsc.de) has joined #ceph
[7:24] * saaby_ (~as@mail.saaby.com) has joined #ceph
[7:24] * glzhao (~glzhao@220.181.11.232) has joined #ceph
[7:24] * shang (~ShangWu@175.41.48.77) has joined #ceph
[7:24] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[7:24] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[7:24] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[7:24] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) has joined #ceph
[7:24] * erice (~erice@50.240.86.181) has joined #ceph
[7:24] * The_Bishop (~bishop@f055026026.adsl.alicedsl.de) has joined #ceph
[7:24] * scalability-junk (uid6422@id-6422.ealing.irccloud.com) has joined #ceph
[7:24] * rsFF (~rsFF@otherreality.net) has joined #ceph
[7:24] * semitech1ical (~adam@ip70-176-51-26.ph.ph.cox.net) has joined #ceph
[7:24] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[7:24] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) has joined #ceph
[7:24] * jo0nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) has joined #ceph
[7:24] * themgt (~themgt@pc-188-95-160-190.cm.vtr.net) has joined #ceph
[7:24] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[7:24] * TMM (~hp@c97185.upc-c.chello.nl) has joined #ceph
[7:24] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[7:24] * sleinen1 (~Adium@2001:620:0:46:a0e9:db6b:dda9:3cb4) has joined #ceph
[7:24] * leochill (~leochill@nyc-333.nycbit.com) has joined #ceph
[7:24] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[7:24] * bandrus (~Adium@adsl-75-5-250-121.dsl.scrm01.sbcglobal.net) has joined #ceph
[7:24] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[7:24] * JCL (~JCL@2601:9:5980:39b:5dd1:36e8:d869:d82a) has joined #ceph
[7:24] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[7:24] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[7:24] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[7:24] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[7:24] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[7:24] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) has joined #ceph
[7:24] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[7:24] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[7:24] * jpierre03 (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[7:24] * Underbyte (~jerrad@pat-global.macpractice.net) has joined #ceph
[7:24] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[7:24] * Meths (~meths@2.25.189.44) has joined #ceph
[7:24] * Fetch (fetch@gimel.cepheid.org) has joined #ceph
[7:24] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[7:24] * orion195 (~oftc-webi@213.244.168.133) has joined #ceph
[7:24] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[7:24] * jackhill (jackhill@pilot.trilug.org) has joined #ceph
[7:24] * cjh973 (~cjh973@ps123903.dreamhost.com) has joined #ceph
[7:24] * guppy (~quassel@guppy.xxx) has joined #ceph
[7:24] * mdjp (~mdjp@213.229.87.114) has joined #ceph
[7:24] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) has joined #ceph
[7:24] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[7:24] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[7:24] * wusui (~Warren@2607:f298:a:607:38fc:445b:1848:70d4) has joined #ceph
[7:24] * warrenSusui (~Warren@2607:f298:a:607:38fc:445b:1848:70d4) has joined #ceph
[7:24] * gregsfortytwo (~Adium@2607:f298:a:607:9050:ab35:adc6:8d17) has joined #ceph
[7:24] * elmo (~james@faun.canonical.com) has joined #ceph
[7:24] * JeffK (~JeffK@38.99.52.10) has joined #ceph
[7:24] * abique (~abique@time2market1.epfl.ch) has joined #ceph
[7:24] * simulx (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[7:24] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[7:24] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[7:24] * mmmucky (~mucky@mucky.socket7.org) has joined #ceph
[7:24] * ZyTer (~ZyTer@ghostbusters.apinnet.fr) has joined #ceph
[7:24] * ingard (~cake@tu.rd.vc) has joined #ceph
[7:24] * Nats (~Nats@telstr575.lnk.telstra.net) has joined #ceph
[7:24] * bdonnahue2 (~James@24-148-64-18.c3-0.mart-ubr2.chi-mart.il.cable.rcn.com) has joined #ceph
[7:24] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[7:24] * loicd (~loicd@bouncer.dachary.org) has joined #ceph
[7:24] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) has joined #ceph
[7:24] * paradon_ (~thomas@60.234.66.253) has joined #ceph
[7:24] * c74d (~c74d@2002:4404:712c:0:76de:2bff:fed4:2766) has joined #ceph
[7:24] * raso (~raso@deb-multimedia.org) has joined #ceph
[7:24] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[7:24] * Meistarin (~coolguy@0001c3c8.user.oftc.net) has joined #ceph
[7:24] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[7:24] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[7:24] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[7:24] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[7:24] * bauruine (~bauruine@2a01:4f8:150:6381::545) has joined #ceph
[7:24] * dalegaard (~dalegaard@vps.devrandom.dk) has joined #ceph
[7:24] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[7:24] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[7:24] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) has joined #ceph
[7:24] * plantain (~plantain@106.187.96.118) has joined #ceph
[7:24] * sage (~quassel@2607:f298:a:607:c83:1b7e:4755:edc4) has joined #ceph
[7:24] * jdmason (~jon@192.55.54.38) has joined #ceph
[7:24] * athrift (~nz_monkey@203.86.205.13) has joined #ceph
[7:24] * nyerup (irc@jespernyerup.dk) has joined #ceph
[7:24] * xinxinsh__ (~xinxinsh@fmdmzpr04-ext.fm.intel.com) has joined #ceph
[7:24] * SpamapS (~clint@184.105.137.237) has joined #ceph
[7:24] * toabctl (~toabctl@toabctl.de) has joined #ceph
[7:24] * NafNaf (~NafNaf@5.148.165.184) has joined #ceph
[7:24] * Warod (warod@lakka.kapsi.fi) has joined #ceph
[7:24] * verdurin (~adam@2001:8b0:281:78ec:e2cb:4eff:fe01:f767) has joined #ceph
[7:24] * musca (musca@tyrael.eu) has joined #ceph
[7:24] * hjorth (~hjorth@sig9.kill.dk) has joined #ceph
[7:24] * tnt (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) has joined #ceph
[7:24] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) has joined #ceph
[7:24] * sig_wall (~adjkru@185.14.185.91) has joined #ceph
[7:24] * Kdecherf (~kdecherf@shaolan.kdecherf.com) has joined #ceph
[7:24] * Guest140 (~coyo@thinks.outside.theb0x.org) has joined #ceph
[7:24] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[7:24] * joao (~joao@a79-168-11-205.cpe.netcabo.pt) has joined #ceph
[7:24] * wonko_be (bernard@november.openminds.be) has joined #ceph
[7:24] * Esmil (esmil@horus.0x90.dk) has joined #ceph
[7:24] * brother (foobaz@vps1.hacking.dk) has joined #ceph
[7:24] * pmatulis_ (~peter@64.34.151.178) has joined #ceph
[7:24] * twx_ (~twx@rosamoln.org) has joined #ceph
[7:24] * swizgard_ (~swizgard@port-87-193-133-18.static.qsc.de) has joined #ceph
[7:24] * Hau_MI (~HauM1@login.univie.ac.at) has joined #ceph
[7:24] * Elbandi (~ea333@elbandi.net) has joined #ceph
[7:24] * finster (~finster@cmdline.guru) has joined #ceph
[7:24] * [caveman] (~quassel@boxacle.net) has joined #ceph
[7:24] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[7:24] * asmaps (~quassel@2a03:4000:2:3c5::80) has joined #ceph
[7:24] * _are__ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[7:24] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) has joined #ceph
[7:24] * darkfader (~floh@88.79.251.60) has joined #ceph
[7:24] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[7:24] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[7:24] * philips (~philips@ec2-23-22-175-220.compute-1.amazonaws.com) has joined #ceph
[7:24] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[7:24] * jtangwk (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[7:24] * stewiem20001 (~stewiem20@195.10.250.233) has joined #ceph
[7:24] * jnq (~jon@95.85.22.50) has joined #ceph
[7:24] * cephalobot (~ceph@ds2390.dreamservers.com) has joined #ceph
[7:24] * singler (~singler@zeta.kirneh.eu) has joined #ceph
[7:24] * sekon (~harish@li291-152.members.linode.com) has joined #ceph
[7:24] * mschiff (~mschiff@mx10.schiffbauer.net) has joined #ceph
[7:24] * hughsaunders (~hughsaund@wherenow.org) has joined #ceph
[7:24] * jlogan (~Thunderbi@72.5.59.176) has joined #ceph
[7:24] * ofu (ofu@dedi3.fuckner.net) has joined #ceph
[7:24] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[7:24] * ido (~ido@00014f21.user.oftc.net) has joined #ceph
[7:24] * toutour (~toutour@causses.idest.org) has joined #ceph
[7:24] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[7:24] * flaxy (~afx@78.130.174.164) has joined #ceph
[7:24] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[7:24] * zackc (~zackc@0001ba60.user.oftc.net) has joined #ceph
[7:24] * ccooke (~ccooke@spirit.gkhs.net) has joined #ceph
[7:24] * kwmiebach (sid16855@id-16855.charlton.irccloud.com) has joined #ceph
[7:24] * Guest177 (~jeremy@ip23.67-202-99.static.steadfastdns.net) has joined #ceph
[7:24] * adam_ (~adam@rincewind.universalconflicts.com) has joined #ceph
[7:24] * zjohnson_ (~zjohnson@guava.jsy.net) has joined #ceph
[7:24] * cfreak200 (~cfreak200@p4FF3E5A5.dip0.t-ipconnect.de) has joined #ceph
[7:24] * al (quassel@niel.cx) has joined #ceph
[7:24] * nhm (~nhm@65-128-159-155.mpls.qwest.net) has joined #ceph
[7:24] * bkero (~bkero@216.151.13.66) has joined #ceph
[7:24] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) has joined #ceph
[7:24] * nwf_ (~nwf@67.62.51.95) has joined #ceph
[7:24] * dgc (~redacted@bikeshed.us) has joined #ceph
[7:24] * iggy_ (~iggy@theiggy.com) has joined #ceph
[7:24] * beardo__ (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[7:24] * Clbh (~benoit@cyllene.anchor.net.au) has joined #ceph
[7:24] * Anticimex (anticimex@95.80.32.80) has joined #ceph
[7:24] * wogri (~wolf@nix.wogri.at) has joined #ceph
[7:24] * LCF (ball8@193.231.broadband16.iol.cz) has joined #ceph
[7:24] * Ormod (~valtha@ohmu.fi) has joined #ceph
[7:24] * liiwi (liiwi@idle.fi) has joined #ceph
[7:24] * tom2 (~jens@s11.jayr.de) has joined #ceph
[7:24] * erwyn (~erwyn@markelous.net) has joined #ceph
[7:24] * yeled (~yeled@spodder.com) has joined #ceph
[7:24] * godog (~filo@0001309c.user.oftc.net) has joined #ceph
[7:24] * rektide (~rektide@eldergods.com) has joined #ceph
[7:24] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[7:24] * ferai (~quassel@corkblock.jefferai.org) has joined #ceph
[7:24] * joelio (~Joel@88.198.107.214) has joined #ceph
[7:24] * zere (~matt@asklater.com) has joined #ceph
[7:24] * psieklFH_ (psiekl@wombat.eu.org) has joined #ceph
[7:24] * grifferz_ (~andy@bitfolk.com) has joined #ceph
[7:24] * tomaw (tom@tomaw.noc.oftc.net) has joined #ceph
[7:24] * lurbs (user@uber.geek.nz) has joined #ceph
[7:24] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[7:24] * jeremydei (~jdeininge@ip-64-139-50-114.sjc.megapath.net) has joined #ceph
[7:24] * baffle (baffle@jump.stenstad.net) has joined #ceph
[7:24] * wattsmarcus5 (~mdw@aa2.linuxbox.com) has joined #ceph
[7:24] * kuu (~kuu@virtual362.tentacle.fi) has joined #ceph
[7:24] * garphy`aw (~garphy@frank.zone84.net) has joined #ceph
[7:24] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[7:24] * acaos (~zac@209.99.103.42) has joined #ceph
[7:24] * via (~via@smtp2.matthewvia.info) has joined #ceph
[7:24] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) has joined #ceph
[7:24] * ctd (~root@00011932.user.oftc.net) has joined #ceph
[7:24] * vhasi (vhasi@vha.si) has joined #ceph
[7:24] * chris_lu (~ccc2@bolin.Lib.lehigh.EDU) has joined #ceph
[7:24] * dis (~dis@109.110.66.27) has joined #ceph
[7:24] * josef_ (~josef@kungsbacka.oderland.com) has joined #ceph
[7:24] * brambles (lechuck@s0.barwen.ch) has joined #ceph
[7:24] * fred` (fred@earthli.ng) has joined #ceph
[7:24] * NaioN (stefan@andor.naion.nl) has joined #ceph
[7:24] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[7:24] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[7:24] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[7:24] * fretb (~fretb@frederik.pw) has joined #ceph
[7:24] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[7:24] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[7:24] * dec (~dec@ec2-54-252-14-44.ap-southeast-2.compute.amazonaws.com) has joined #ceph
[7:24] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) has joined #ceph
[7:24] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[7:24] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[7:24] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[7:24] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[7:24] * blahnana (~bman@us1.blahnana.com) has joined #ceph
[7:24] * jerker (jerker@82ee1319.test.dnsbl.oftc.net) has joined #ceph
[7:24] * clayg_ (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) has joined #ceph
[7:24] * markl (~mark@knm.org) has joined #ceph
[7:24] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[7:24] * Sargun (~sargun@208-106-98-2.static.sonic.net) has joined #ceph
[7:24] * hflai (hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[7:24] * sbadia (~sbadia@yasaw.net) has joined #ceph
[7:24] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[7:24] * rBEL (robbe@november.openminds.be) has joined #ceph
[7:24] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[7:24] * `10_ (~10@juke.fm) has joined #ceph
[7:24] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[7:24] * WintermeW_ (~WintermeW@212-83-158-61.rev.poneytelecom.eu) has joined #ceph
[7:24] * Zethrok (~martin@95.154.26.34) has joined #ceph
[7:24] * svg (~svg@hydargos.ginsys.net) has joined #ceph
[7:24] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[7:24] * eightyeight (~atoponce@atoponce.user.oftc.net) has joined #ceph
[7:24] * wrale (~wrale@wrk-28-217.cs.wright.edu) has joined #ceph
[7:24] * mmgaggle (~kyle@cerebrum.dreamservers.com) has joined #ceph
[7:24] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) has joined #ceph
[7:27] * wusui (~Warren@2607:f298:a:607:38fc:445b:1848:70d4) Quit (Ping timeout: 480 seconds)
[7:29] * JoeGruher (~JoeGruher@134.134.139.70) Quit (Remote host closed the connection)
[7:35] * \ask (~ask@oz.develooper.com) Quit (Ping timeout: 480 seconds)
[7:41] * haomaiwang (~haomaiwan@118.186.133.131) Quit (Remote host closed the connection)
[7:41] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[7:43] * garphy`aw is now known as garphy
[7:46] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[7:46] * sarob (~sarob@2601:9:7080:13a:687f:acd1:f66e:81c8) has joined #ceph
[7:52] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[7:54] * sarob (~sarob@2601:9:7080:13a:687f:acd1:f66e:81c8) Quit (Ping timeout: 480 seconds)
[7:59] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:00] * mattt (~textual@94.236.7.190) has joined #ceph
[8:02] * haomaiwa_ (~haomaiwan@117.79.232.213) has joined #ceph
[8:06] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:08] * garphy is now known as garphy`aw
[8:08] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[8:12] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[8:30] * mattt_ (~textual@94.236.7.190) has joined #ceph
[8:30] * abique (~abique@time2market1.epfl.ch) has left #ceph
[8:31] * madkiss (~madkiss@217.194.73.202) Quit (Quit: Leaving.)
[8:33] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[8:33] * mattt_ is now known as mattt
[8:33] * oro (~oro@2001:620:20:16:30e8:5766:3d72:4d7) has joined #ceph
[8:47] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:49] * srenatus (~stephan@g229132093.adsl.alicedsl.de) has joined #ceph
[8:50] * rendar (~s@87.19.182.167) has joined #ceph
[8:54] * mjevans (~mje@209.141.34.79) Quit (Ping timeout: 480 seconds)
[8:55] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:55] * mjevans (~mje@209.141.34.79) has joined #ceph
[8:57] * steki (~steki@91.195.39.5) has joined #ceph
[8:57] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: Beware of programmers who carry screwdrivers.)
[9:00] * ivotron_ (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) has joined #ceph
[9:00] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) Quit (Read error: Connection reset by peer)
[9:00] * ivotron_ (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) Quit (Read error: Connection reset by peer)
[9:00] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) has joined #ceph
[9:08] * michal (~michal@193.85.239.162) has joined #ceph
[9:10] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) Quit (Ping timeout: 480 seconds)
[9:12] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[9:16] <michal> Hi everyone, I would like to evaluate ceph for usage in our environment. Just looking for some recommendations, like if it's ok to run tests against Emperor or would it be better to wait a little bit for Firefly release?
[9:17] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:17] * sleinen (~Adium@2001:620:0:2d:a819:4483:4126:807f) has joined #ceph
[9:18] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[9:24] * sleinen1 (~Adium@2001:620:0:46:a0e9:db6b:dda9:3cb4) Quit (Ping timeout: 480 seconds)
[9:25] * sleinen (~Adium@2001:620:0:2d:a819:4483:4126:807f) Quit (Ping timeout: 480 seconds)
[9:41] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[9:41] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[9:45] * sleinen (~Adium@2001:620:0:26:3080:59df:7b7f:8f93) has joined #ceph
[9:46] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:47] * sarob (~sarob@2601:9:7080:13a:495c:76f9:b52f:5d35) has joined #ceph
[9:49] * Cataglottism (~Serendipi@dsl-087-195-030-170.solcon.nl) has joined #ceph
[9:52] * srenatus (~stephan@g229132093.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[9:52] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:55] * sarob (~sarob@2601:9:7080:13a:495c:76f9:b52f:5d35) Quit (Ping timeout: 480 seconds)
[10:05] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) has joined #ceph
[10:07] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Quit: Leaving)
[10:11] * Cube1 (~Cube@66-87-130-210.pools.spcsdns.net) has joined #ceph
[10:13] * TMM (~hp@c97185.upc-c.chello.nl) Quit (Ping timeout: 480 seconds)
[10:15] * fghaas (~florian@85-127-219-50.dynamic.xdsl-line.inode.at) has joined #ceph
[10:17] * Cube (~Cube@66-87-130-210.pools.spcsdns.net) Quit (Read error: Operation timed out)
[10:23] * thb (~me@2a02:2028:c2:71a0:6267:20ff:fec9:4e40) has joined #ceph
[10:23] * thb is now known as Guest1581
[10:24] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:28] * Guest1581 is now known as thb
[10:31] <joao> michal, emperor is fine
[10:32] <joao> if you are looking forward to erasure coding however, then maybe you should wait for firefly
[10:32] <joao> the again
[10:32] <joao> *then again
[10:32] <joao> you can use emperor now and upgrade later once firefly is out
[10:32] <joao> your call
[10:33] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:33] <michal> joao: thanks!
[10:34] <michal> I was trying to use ceph two years ago and experienced a few glitches in the testing env. And it seems that ceph is still not in 1.0 state, so I figured I'd rather ask.
[10:37] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[10:38] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[10:39] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[10:43] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[10:43] * allsystemsarego (~allsystem@188.25.129.255) has joined #ceph
[10:45] <joao> michal, the object and block parts of ceph are pretty rock-solid now
[10:45] <michal> joao: ok, that's what I've read:) Any info about the fs itself?:-)
[10:46] <joao> works for some people, you should be okay
[10:47] <joao> I'm not familiar with the progress on the cephfs front
[10:47] <joao> fwiw, I have a one node cephfs deployment at home and it seems to be holding up fairly well
[10:47] <michal> joao: if I understand correctly, then firefly should be pretty close to RC1?
[10:48] <joao> yeah, should come out real soon now
[10:48] * partner_ (joonas@ajaton.net) has joined #ceph
[10:48] <michal> cool.
[10:48] <joao> let me check what's the timeline on the tracker
[10:49] <michal> joao: if I read that one correctly, then it's as before... everything is slacking behind the planned schedule:-)
[10:49] <joao> looks like firefly rc1's development should cease in 3 days (end of the week)
[10:49] <michal> that would be uber cool.
[10:50] * sarob (~sarob@2601:9:7080:13a:bc3e:454c:270f:9ffc) has joined #ceph
[10:59] * sarob (~sarob@2601:9:7080:13a:bc3e:454c:270f:9ffc) Quit (Ping timeout: 480 seconds)
[10:59] <michal> joao: thanks for info! Going for testing.
[10:59] <joao> let us know if you bump into any issues :)
[11:16] * Cataglottism (~Serendipi@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[11:17] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[11:23] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) Quit (Remote host closed the connection)
[11:23] * sleinen1 (~Adium@130.59.94.214) has joined #ceph
[11:26] * lafouine41 (~lafouine4@LMontsouris-656-01-03-3.w80-12.abo.wanadoo.fr) has joined #ceph
[11:29] * sleinen2 (~Adium@2001:620:0:26:9cfb:20a8:9cf1:a12d) has joined #ceph
[11:31] * sleinen (~Adium@2001:620:0:26:3080:59df:7b7f:8f93) Quit (Ping timeout: 480 seconds)
[11:32] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[11:36] * sleinen1 (~Adium@130.59.94.214) Quit (Ping timeout: 480 seconds)
[11:50] * joelio (~Joel@88.198.107.214) Quit (Ping timeout: 480 seconds)
[11:53] * joelio (~Joel@88.198.107.214) has joined #ceph
[11:54] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:00] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[12:06] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[12:14] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: Textual IRC Client: www.textualapp.com)
[12:16] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[12:19] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[12:27] * beardo_ (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[12:27] * keith4_ (~keith4@greed.cc.lehigh.edu) has joined #ceph
[12:27] * chris_lu_ (~ccc2@bolin.Lib.lehigh.EDU) has joined #ceph
[12:27] * oro (~oro@2001:620:20:16:30e8:5766:3d72:4d7) Quit (Ping timeout: 480 seconds)
[12:28] * beardo__ (~sma310@beardo.cc.lehigh.edu) Quit (Read error: Operation timed out)
[12:29] * banks (~banks@host86-154-234-37.range86-154.btcentralplus.com) has joined #ceph
[12:31] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:33] * fghaas (~florian@85-127-219-50.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[12:33] * keith4 (~keith4@greed.cc.lehigh.edu) Quit (Ping timeout: 480 seconds)
[12:34] * chris_lu (~ccc2@bolin.Lib.lehigh.EDU) Quit (Ping timeout: 480 seconds)
[12:38] * oro (~oro@2001:620:20:222:810:82b4:3407:f85d) has joined #ceph
[12:43] * diegows (~diegows@190.190.5.238) has joined #ceph
[12:46] * glzhao (~glzhao@220.181.11.232) Quit (Quit: leaving)
[12:53] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[12:55] * fdmanana (~fdmanana@bl5-78-108.dsl.telepac.pt) has joined #ceph
[12:57] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:58] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) has joined #ceph
[12:58] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) Quit ()
[12:58] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[13:00] * fghaas (~florian@85-127-219-50.dynamic.xdsl-line.inode.at) has joined #ceph
[13:04] * fghaas (~florian@85-127-219-50.dynamic.xdsl-line.inode.at) Quit ()
[13:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:05] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Read error: Operation timed out)
[13:05] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[13:13] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:14] * cronix (~cronix@et-0-29.gw-nat.bs.kae.de.oneandone.net) has joined #ceph
[13:20] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:22] * r0r_tag (~nick@greenback.pod4.org) has joined #ceph
[13:23] * r0r_taga (~nick@greenback.pod4.org) has joined #ceph
[13:23] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) has joined #ceph
[13:26] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[13:28] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[13:31] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[13:38] * fghaas (~florian@85-127-219-50.dynamic.xdsl-line.inode.at) has joined #ceph
[13:39] * JCL (~JCL@2601:9:5980:39b:5dd1:36e8:d869:d82a) Quit (Quit: Leaving.)
[13:40] * joshuay04 (~joshuay04@rrcs-74-218-204-10.central.biz.rr.com) Quit (Read error: Connection reset by peer)
[13:40] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[13:45] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[13:48] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[13:50] * t0rn (~ssullivan@c-24-11-198-35.hsd1.mi.comcast.net) has joined #ceph
[13:53] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[13:57] <saaby_> gents, did some sort of cluster-wide max_backfills make its way to dumpling?
[13:58] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[13:58] <saaby_> we just added 48 osd's to our production environment running dumpling, and they should all have one pg backfilling, but only ~20-25 pg's are actually backfilling at a time
[13:58] * garphy`aw is now known as garphy
[13:59] <saaby_> which is probably a good idea to not put too much pressure on the mons, but I havent seen any references to a change like this..
[13:59] <saaby_> soo... anyone? :)
[14:00] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[14:04] * michal (~michal@193.85.239.162) Quit (Quit: Konversation terminated!)
[14:08] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:09] <fghaas> are you sure you're not simply hitting the 10 backfills per osd limit?
[14:11] <saaby_> fghaas: yes, actually we have a limit of one backfill per osd.
[14:11] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[14:12] <saaby_> so when adding 36 osd's we should (and did earlier) see 36'ish pg's backfilling simultaneously
[14:13] <saaby_> I am wondering if the fact that we changed from using "crushmap dump, edit, inject" to using the "ceph osd crush" commands has anything to do with it..
[14:13] <saaby_> we added three servers, one at a time, this time. And there is a tendency that all osd's on the first server are busy backfilling, the second fewer, and then the third even fewer osd's backfilling.
[14:14] <tnt> saaby_: well the limit might apply to the source OSD as well.
[14:14] <fghaas> well do you have any PGs in wait-backfill?
[14:14] <saaby_> earlier we would have just edited the map, so all three servers were added at once
[14:14] <saaby_> tnt: hah.. yeah.. maybe
[14:14] <saaby_> calc
[14:14] <saaby_> whoops
[14:15] <fghaas> saaby_:
[14:15] <fghaas> osd max backfills
[14:15] <fghaas> Description: The maximum number of backfills allowed to *or from* a single OSD.
[14:15] <fghaas> http://ceph.com/docs/master/rados/configuration/osd-config-ref/
[14:15] <saaby_> yeah, trying to do the math on that now
[14:16] <fghaas> also, just because you add 48 OSDs doesn't necessarily mean that that many PGs do in fact get remapped, hence my question of whether any of your PGs are actually waiting to backfill
[14:16] <saaby_> fghaas: I have ~9000 pg's in wait-backfill now
[14:17] <saaby_> yep, I have
[14:18] <saaby_> I should have ~260 PGs hit each OSD
[14:18] <fghaas> yeah well, but if you're only allowing one backfill per OSD, then I wouldn't be surprised at your wait-backfill piling up
[14:19] <fghaas> I don't quite follow the reasoning behind that, by the way, as backfills do get downgraded in I/O priority and shouldn't have that big of an impact on foreground r/w operation
[14:19] <saaby_> it's actually to prevent our backend network from being congested..
[14:20] <saaby_> it is just a convenient way of limiting recovery speeds
[14:20] * [caveman] (~quassel@boxacle.net) Quit (Remote host closed the connection)
[14:20] * [cave] (~quassel@boxacle.net) has joined #ceph
[14:22] <saaby_> just did the math, I should have 240 OSDs which can deliver data to those new 36 OSDs.. I would expect to hit ~36 concurrent backfills.. But I am at ~20. weird.
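(The shortfall saaby_ describes is consistent with tnt's point that `osd max backfills` counts the source OSD as well as the target. The toy Python sketch below is illustrative only — it is a simplification, not Ceph's actual reservation scheduler — using the OSD counts from this discussion: 240 existing source OSDs, 36 new target OSDs, limit 1.)

```python
# Toy model of backfill reservations -- illustrative only, NOT Ceph's actual
# scheduler. It shows why, when osd_max_backfills = 1 counts both ends of a
# backfill, 36 new OSDs can run fewer than 36 backfills concurrently.
import random

def concurrent_backfills(head_of_queue, limit=1):
    """head_of_queue maps each new (target) OSD to the source OSD of the
    next PG it wants to reserve; each source grants at most `limit` slots."""
    granted = {}
    running = 0
    for target, source in head_of_queue.items():
        if granted.get(source, 0) < limit:
            granted[source] = granted.get(source, 0) + 1
            running += 1
    return running

# 36 new OSDs each reserving from one of 240 existing sources; whenever two
# targets pick the same source OSD, one of them has to wait.
random.seed(1)
heads = {target: random.randrange(240) for target in range(36)}
print(concurrent_backfills(heads))
```

(With 36 draws from 240 sources a few collisions are expected, so the count lands a little below 36; the real scheduler's priority ordering and the per-server placement saaby_ noticed can push it lower still.)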
[14:32] * BillK (~BillK-OFT@106-68-205-248.dyn.iinet.net.au) has joined #ceph
[14:33] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[14:44] * BillK (~BillK-OFT@106-68-205-248.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[14:46] * BillK (~BillK-OFT@124-168-237-176.dyn.iinet.net.au) has joined #ceph
[14:50] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[14:53] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[14:53] * cronix (~cronix@et-0-29.gw-nat.bs.kae.de.oneandone.net) Quit (Read error: Connection reset by peer)
[14:55] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[14:56] * markbby (~Adium@168.94.245.3) has joined #ceph
[15:03] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[15:04] * sroy (~sroy@207.96.182.162) has joined #ceph
[15:07] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[15:08] * thomnico (~thomnico@95.215.123.46) has joined #ceph
[15:09] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[15:11] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:12] * thomnico (~thomnico@95.215.123.46) Quit ()
[15:12] * thomnico (~thomnico@95.215.123.46) has joined #ceph
[15:23] * thomnico (~thomnico@95.215.123.46) Quit (Quit: Ex-Chat)
[15:24] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) has joined #ceph
[15:30] * sjm (~sjm@static-70-19-27-28.nycmny.east.verizon.net) has joined #ceph
[15:32] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[15:38] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[15:39] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Quit: Leaving.)
[15:39] <banks> Hi, anyone around who uses Ceph to store data for analysis? I know you can run hadoop on top of CephFS although that is not supported in production yet. I wondered if there were any alternatives people are using for processing log data or similar in Ceph? Anyone use custom OSD classes to allow some computations to be done with efficient data locality?
[15:45] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[15:45] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[15:48] * sjm (~sjm@static-70-19-27-28.nycmny.east.verizon.net) has left #ceph
[15:50] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Quit: Leaving.)
[15:51] * oro (~oro@2001:620:20:222:810:82b4:3407:f85d) Quit (Ping timeout: 480 seconds)
[15:52] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:56] * linuxkidd (~linuxkidd@2001:420:2100:2258:39d3:de25:be2d:1e03) has joined #ceph
[15:57] * mjevans (~mje@209.141.34.79) Quit (Ping timeout: 480 seconds)
[16:01] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) Quit (Quit: dereky)
[16:01] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[16:02] * abique (~abique@time2market1.epfl.ch) has joined #ceph
[16:02] * oro (~oro@2001:620:20:222:810:82b4:3407:f85d) has joined #ceph
[16:06] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[16:10] * nwat_ (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:12] * mjevans (~mje@209.141.34.79) has joined #ceph
[16:14] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:15] * diegows (~diegows@190.216.51.2) has joined #ceph
[16:16] * nwat__ (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:16] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:22] * nwat_ (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:23] * allsystemsarego (~allsystem@188.25.129.255) Quit (Quit: Leaving)
[16:26] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[16:32] * mo- (~mo@2a01:4f8:141:3264::3) has joined #ceph
[16:33] * dereky (~derek@129-2-129-152.wireless.umd.edu) has joined #ceph
[16:34] * fdmanana (~fdmanana@bl5-78-108.dsl.telepac.pt) Quit (Quit: Leaving)
[16:37] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:42] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Remote host closed the connection)
[16:44] * nwat__ (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:49] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[16:50] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[16:54] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Read error: Connection reset by peer)
[16:55] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[16:56] * sleinen (~Adium@130.59.94.214) has joined #ceph
[16:58] <bdonnahue2> can anyone help me with a ceph deploy issue?
[16:58] <bdonnahue2> my monitors are not reaching quorum because the keyring is not being exchanged
[17:00] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[17:00] * fghaas1 (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) has joined #ceph
[17:00] * lafouine41 (~lafouine4@LMontsouris-656-01-03-3.w80-12.abo.wanadoo.fr) Quit (Read error: Connection reset by peer)
[17:01] * fghaas1 (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) Quit ()
[17:01] * lafouine41 (~lafouine4@LMontsouris-656-01-03-3.w80-12.abo.wanadoo.fr) has joined #ceph
[17:02] * fghaas (~florian@85-127-219-50.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[17:03] * Cube1 (~Cube@66-87-130-210.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[17:03] * sleinen2 (~Adium@2001:620:0:26:9cfb:20a8:9cf1:a12d) Quit (Ping timeout: 480 seconds)
[17:04] * sprachgenerator (~sprachgen@130.202.135.204) has joined #ceph
[17:04] * sleinen (~Adium@130.59.94.214) Quit (Ping timeout: 480 seconds)
[17:07] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[17:09] <darkfader> ceph day was great
[17:09] * sarob (~sarob@2601:9:7080:13a:25a3:9fc4:ae29:27ee) has joined #ceph
[17:10] * sleinen (~Adium@130.59.94.214) has joined #ceph
[17:11] * sleinen1 (~Adium@2001:620:0:26:2987:4550:238f:3a40) has joined #ceph
[17:16] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[17:17] * sarob (~sarob@2601:9:7080:13a:25a3:9fc4:ae29:27ee) Quit (Ping timeout: 480 seconds)
[17:18] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[17:18] * sleinen (~Adium@130.59.94.214) Quit (Ping timeout: 480 seconds)
[17:18] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:19] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[17:20] * xdeller (~xdeller@109.188.127.223) has joined #ceph
[17:21] * alaind (~dechorgna@161.105.182.35) has joined #ceph
[17:22] * steki (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:23] * oro (~oro@2001:620:20:222:810:82b4:3407:f85d) Quit (Ping timeout: 480 seconds)
[17:25] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[17:25] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) has joined #ceph
[17:26] * oro (~oro@2001:620:20:16:3df3:876d:8467:2eed) has joined #ceph
[17:29] * mo- (~mo@2a01:4f8:141:3264::3) Quit (Quit: leaving)
[17:30] * joshuay04 (~joshuay04@rrcs-74-218-204-10.central.biz.rr.com) has joined #ceph
[17:30] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) Quit (Read error: Operation timed out)
[17:30] <joshuay04> Anyone have recommendations on good long lasting SSD for journal?
[17:30] * sleinen (~Adium@2001:620:0:46:15f5:4346:e461:6ce4) has joined #ceph
[17:31] * ircolle (~Adium@2601:1:8380:2d9:393c:5410:2aca:4e03) has joined #ceph
[17:32] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[17:33] * alaind (~dechorgna@161.105.182.35) Quit (Ping timeout: 480 seconds)
[17:35] * mattt (~textual@94.236.7.190) has joined #ceph
[17:36] <janos> joshuay04, i've heard good things about the intel S3700 line
[17:36] <janos> some in here use them iirc
[17:37] * sleinen1 (~Adium@2001:620:0:26:2987:4550:238f:3a40) Quit (Ping timeout: 480 seconds)
[17:37] <darkfader> i'm eyeing the hitachi ssd400s (b rev) for this, but they're kinda hard to find
[17:38] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[17:38] <darkfader> i had found one benchmark with various ssds, the s3700 looked great there compared to most SSD
[17:38] <darkfader> and the hitachi (2x price) was the flat line underneath it
[17:38] <darkfader> (latency graph)
[17:41] <joshuay04> janos: Will that be fast enough? It says write speed 200MB/s?
[17:41] <janos> that sounds slow
[17:42] <janos> sequential is rated at 460/480'ish i thought
[17:42] <janos> ah i'm looking at a larger one
[17:42] <janos> interesting, the 100GB is rated 200
[17:42] <janos> hrm
[17:43] <janos> yeah that sounds gimpy
[17:44] * diegows (~diegows@190.216.51.2) Quit (Ping timeout: 480 seconds)
[17:44] <darkfader> janos: i think they are faster depending on the size, but the main difference with the S3700 and better is that they keep this rate
[17:44] <darkfader> most others will maybe do 400MB/s... "most of the time"
[17:44] <janos> yeah they are supposed to be durable and consistent
[17:45] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) has joined #ceph
[17:45] * alaind (~dechorgna@161.105.182.35) has joined #ceph
[17:45] <janos> a small one with only 200 write limits the utility though
[17:45] <darkfader> yes :/
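The sizing concern above can be made concrete with rough arithmetic: with filestore, every write hits the journal first, so a journal SSD should sustain roughly the combined write rate of the spinners behind it. A minimal sketch, with assumed throughput numbers:

```shell
# Assumed figures, for illustration only: a journal SSD sustaining
# 200 MB/s (the small S3700 discussed above) and 7200rpm spinners
# sustaining ~100 MB/s each.
ssd_mb_s=200
spinner_mb_s=100
# Each OSD's writes pass through its journal, so the SSD caps out at:
echo "max OSDs per journal: $(( ssd_mb_s / spinner_mb_s ))"
```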
[17:46] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[17:47] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[17:48] * vipulnayyar (~vipul@182.68.218.94) has joined #ceph
[17:50] * oro (~oro@2001:620:20:16:3df3:876d:8467:2eed) Quit (Ping timeout: 480 seconds)
[17:50] * vipulnayyar (~vipul@182.68.218.94) Quit ()
[17:51] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:53] * SvenPHX1 (~Adium@wsip-174-79-34-244.ph.ph.cox.net) Quit (Quit: Leaving.)
[17:53] * SvenPHX1 (~Adium@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[17:55] <joshuay04> This is important, continuing on from yesterday's convo I did a lot of testing last night. The user who highly recommended not using consumer grade ssd as journal was 100% correct. My journals have been stored on an OCZ drive for a year now which is rated 500MB/500. After 1 year of 100GB+ a day of writes the current speed of the ssd is not even 1/3 of what it used to be. I have checked and all 5 identical drives are running the same slow speed.
[17:56] <janos> dang
[17:57] * lafouine41 (~lafouine4@LMontsouris-656-01-03-3.w80-12.abo.wanadoo.fr) Quit ()
[17:57] <joshuay04> at this point it is faster to store journal/osd on same spindle disk than to use my SSDs for the journal
[17:59] <fghaas> thanks joshuay04 :)
[17:59] <joshuay04> fghaas: no thank you!
[17:59] <joshuay04> I forgot you were the one who mentioned it
[18:00] <janos> so the next test is to mount thumbdrives and use them for journals
[18:00] * janos ducks
[18:02] * Gamekiller77 (~Gamekille@2001:420:28c:1007:f889:d84:a530:e871) has joined #ceph
[18:02] <Gamekiller77> any ETA on firefly ?
[18:03] <joshuay04> janos: make sure it is usb 3!
[18:03] * janos checks for the blue ports
[18:04] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[18:05] * mattt_ (~textual@92.52.76.140) has joined #ceph
[18:06] * tsnider (~oftc-webi@216.240.30.25) has joined #ceph
[18:08] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[18:08] * mattt_ is now known as mattt
[18:08] <fghaas> joshuay04: often, if your alternate option is a *cheap* (slow) SSD, it's better to stick with in-filestore journals
[18:08] <fghaas> i.e. put the journal on your spinners
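ceph-deploy's prepare syntax makes the colocated layout fghaas describes the default: with no separate journal field, the journal lands on the same disk as the filestore. A sketch (hostname and devices are assumptions):

```shell
# host:data-disk with no third :journal field -> journal colocated on
# the OSD's own device.
ceph-deploy osd prepare node1:/dev/sdb
ceph-deploy osd activate node1:/dev/sdb1
# A separate journal device would be given as host:data:journal, e.g.:
# ceph-deploy osd prepare node1:/dev/sdb:/dev/sdc1
```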
[18:09] <tsnider> I'm looking at using ssds for journals. Are there any problems using multiple partitions on a single ssd so it can be used for multiple osd journals, or should there be a 1:1 relationship between ssds and journals? I don't have enough ssds for all osd journals.
[18:09] <joshuay04> Thanks, it appears as if that is my future until my next budget. I am shooting for 150MB/s cluster speed; right now I am at 84
[18:10] <fghaas> tsnider: it's ok to have multiple partitions on a single SSD, but don't shoot for more than 4 (6 is absolute tops, but only for a really high-throughput SSD)
[18:10] * sarob (~sarob@2601:9:7080:13a:69fc:bed0:5fab:e3ca) has joined #ceph
[18:10] <fghaas> also make sure you leave about 25% of the drive unpartitioned so the SSD can use that for wear leveling
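That layout can be sketched with sgdisk; the device name and sizes below are assumptions (a ~200 GB SSD), carving four journal partitions that together cover about 75% of the disk and leaving the rest unpartitioned for wear leveling, per the advice above:

```shell
SSD=/dev/sdX                     # assumed journal SSD, ~200 GB
sgdisk --zap-all "$SSD"
# Four 37 GiB journal partitions (~75% of the disk); the remaining
# ~25% stays unpartitioned so the SSD's controller can use it for
# wear leveling.
for i in 1 2 3 4; do
  sgdisk --new=${i}:0:+37G --change-name=${i}:"ceph journal" "$SSD"
done
```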
[18:11] <tsnider> fghaas -- thx that's what I assumed but wanted to check with others who know better.
[18:16] * JCL (~JCL@2601:9:5980:39b:8d82:b92f:d7f3:1bff) has joined #ceph
[18:16] * alaind (~dechorgna@161.105.182.35) Quit (Ping timeout: 480 seconds)
[18:18] * sarob (~sarob@2601:9:7080:13a:69fc:bed0:5fab:e3ca) Quit (Ping timeout: 480 seconds)
[18:22] * humbolt (~elias@93-82-44-212.adsl.highway.telekom.at) has joined #ceph
[18:27] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:28] * humbolt1 (~elias@194-118-227-52.adsl.highway.telekom.at) Quit (Ping timeout: 480 seconds)
[18:30] * owenmurr (~owenmurr@109.175.201.0) has joined #ceph
[18:33] * sarob (~sarob@2601:9:7080:13a:d4d0:6774:b2f1:c20d) has joined #ceph
[18:34] * xmltok (~xmltok@cpe-76-90-130-65.socal.res.rr.com) has joined #ceph
[18:35] * owenmurr (~owenmurr@109.175.201.0) Quit (Quit: Lost terminal)
[18:38] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has joined #ceph
[18:39] * owenmurr (~owenmurr@109.175.201.0) has joined #ceph
[18:39] <saturnine> Anyone using KVM/QEMU with RBD?
[18:39] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[18:41] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has left #ceph
[18:41] * mattt (~textual@92.52.76.140) Quit (Read error: Connection reset by peer)
[18:43] * BillK (~BillK-OFT@124-168-237-176.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[18:47] * joshd1 (~jdurgin@2602:306:c5db:310:1cb3:d5b0:a038:ec95) has joined #ceph
[18:47] <Gamekiller77> well with openstack and kvm
[18:47] * owenmurr (~owenmurr@109.175.201.0) Quit (Quit: Lost terminal)
[18:47] * t0rn (~ssullivan@c-24-11-198-35.hsd1.mi.comcast.net) Quit (Quit: Leaving.)
[18:47] * owenmurr (~owenmurr@109.175.201.0) has joined #ceph
[18:48] * owenmurr (~owenmurr@109.175.201.0) Quit ()
[18:48] <hasues> Gamekiller77: I'm heading there, but not made it quite yet.
[18:48] <Gamekiller77> i'm going to KVM native also
[18:48] <hasues> Also want to look at LXC.
[18:48] <Gamekiller77> looking at management tools that support RBD and KVM
[18:48] * owenmurr (~owenmurr@109.175.201.0) has joined #ceph
[18:48] <Gamekiller77> like Proxmox and others
[18:49] * owenmurr (~owenmurr@109.175.201.0) Quit ()
[18:50] * owenmurr (~owenmurr@109.175.201.0) has joined #ceph
[18:50] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:52] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[18:52] * xarses (~andreww@12.164.168.117) Quit ()
[18:52] * xarses_ (~andreww@12.164.168.117) Quit ()
[18:52] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:53] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[18:56] * rturk-away is now known as rturk
[18:56] * garphy is now known as garphy`aw
[18:59] * diegows (~diegows@190.216.51.2) has joined #ceph
[19:04] * via (~via@smtp2.matthewvia.info) Quit (Ping timeout: 480 seconds)
[19:11] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:12] <joshuay04> Gamekiller77: Check out Opennebula and Cloudstack
[19:12] * jhujhiti (~jhujhiti@00012a8b.user.oftc.net) has joined #ceph
[19:13] <jhujhiti> is it safe to run a qemu built for bobtail against a newer version cluster? i'd like to upgrade the bobtail cluster but i can't reboot the running VMs
[19:14] <Gamekiller77> i'm looking for something to replace vCenter, i need HA and DRS type features
[19:14] <hasues> joshuay04: How would either of those help noting he is using OpenStack?
[19:14] <Gamekiller77> openstack is for true cloud, i need something for more traditional VM type work
[19:14] <Gamekiller77> yah thanks hasues
[19:14] <hasues> joshuay04: Mind you I looked at both of those, and I really liked OpenNebula.
[19:15] <jhujhiti> +1 opennebula
[19:15] <hasues> Gamekiller77: Hm, for HA features in OpenStack, I had to look at what other vendors were doing with it, like say Mirantis.
[19:16] <hasues> Gamekiller77: did you check out Mirantis's fuel?
[19:16] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[19:16] <fghaas> there's other HA approaches out there, mind you
[19:16] <joshuay04> hasues: That is my favorite as well. When evaluating different architectures I like to say "What happens to my machines in a total cluster failure". Opennebula, from what I have seen, is the only one that can get back up and running from scratch in 1 day
[19:16] * sarob (~sarob@2601:9:7080:13a:d4d0:6774:b2f1:c20d) Quit (Remote host closed the connection)
[19:17] * sarob (~sarob@2601:9:7080:13a:d4d0:6774:b2f1:c20d) has joined #ceph
[19:17] <hasues> joshuay04: I wanted to use OpenNebula as well where I work, but OpenStack was picked due to market momentum...yet here we are still putting one together and looking at vendors :\
[19:17] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[19:17] <hasues> fghaas: fill us in
[19:18] <fghaas> http://docs.openstack.org/high-availability-guide/content/
[19:18] <hasues> fghaas: thanks.
[19:18] <jhujhiti> i had so many issues with openstack i swore off of it and painfully moved everyone to opennebula. ever since my blood pressure has been much lower
[19:18] <joshuay04> hasues: That was our IT department as well. Openstack was all the talk. I keep fighting it off but every month someone comes up to me saying "Have you heard of this cool cloud technology called openstack"
[19:18] <fghaas> an alternate approach is http://docwiki.cisco.com/wiki/OpenStack_Havana_Release:_High-Availability_Manual_Deployment_Guide
[19:19] <kitz> Do I have to use "ceph osd crush set" to move an OSD or can I just "ceph osd crush move"? (I don't want to have to specify the weight)
[19:19] <jhujhiti> is there any way i can rate-limit the disk bandwidth (in bps or iops) used by a deep scrub?
[19:19] <Gamekiller77> hasues, i can not talk to mirantis ;) i work for a big IT company
[19:20] <Gamekiller77> and HA not in infra i have that done that
[19:20] <hasues> So you guys are using Ceph with OpenNebula?
[19:20] <jhujhiti> yes
[19:20] <Gamekiller77> i'm talking about, let's say, i have an app that's not meant for openstack
[19:20] <Gamekiller77> not cloudy
[19:20] <Gamekiller77> single instance
[19:20] <jhujhiti> there were a few patches i had to apply to 4.2, but apparently they're fixed in 4.4 (i haven't upgraded yet)
[19:20] <fghaas> Gamekiller77: maybe we're small enough that you can talk to us. :) (hastexo, that is) obviously, Inktank would be happy to help with Ceph integration in OpenStack too
[19:20] <Gamekiller77> i need that VM or instance to then reboot if a KVM blade dies
[19:21] <Gamekiller77> fghaas, i am a partner with intank
[19:21] <Gamekiller77> inktank
[19:21] <fghaas> Gamekiller77: ... then you're outlining exactly the scenario of my talk way-back-when at the San Francisco OpenStack summit
[19:21] <joshuay04> partial, I have a cluster running for evaluation just waiting on money
[19:21] <fghaas> so are we, btw...
[19:21] <Gamekiller77> fghaas, you work ink tank
[19:21] <fghaas> nope, I did say hastexo above
[19:21] <Gamekiller77> sorry
[19:21] <Gamekiller77> yah i am a partner with inktank
[19:22] <Gamekiller77> i have more than a couple Ceph clusters working
[19:22] <Gamekiller77> using my native hardware, without going into who i work for
[19:22] <joshuay04> Has anyone tried Ganeti?
[19:23] <rotbeard> fghaas, you weren't around today at ceph day were you?
[19:23] <Gamekiller77> all i know i been asking to use KVM+Ceph to do the same work load i have in ESXi/vCenter
[19:23] <fghaas> rotbeard: nope, madkiss was waving our flag in Frankfurt
[19:24] <Gamekiller77> i love openstack and it works well
[19:24] <Gamekiller77> just need more VM instance HA where the app does not support the AZ type setup
[19:25] <fghaas> Gamekiller77: you can do that
[19:25] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[19:25] * sarob (~sarob@2601:9:7080:13a:d4d0:6774:b2f1:c20d) Quit (Ping timeout: 480 seconds)
[19:25] <rotbeard> he was. and he did a nice talk btw ;)
[19:26] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) has joined #ceph
[19:26] <fghaas> Gamekiller77: you may want to look at http://youtu.be/_mXtOeaKL8s
[19:26] <Gamekiller77> i will thanks
[19:26] <fghaas> rotbeard: thanks! I'll pass that on
[19:27] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[19:27] <fghaas> effectively, you manage nova-compute as an HA resource, override hostname and use resume_guest_state_on_host_boot
[19:27] <fghaas> that's the whole trick, really
[19:27] <Gamekiller77> hmm
[19:27] <fghaas> then as one of your nodes dies, poof the other takes over
[19:28] <Gamekiller77> hmm that is very cool
[19:28] <Gamekiller77> next on my list to test
[19:28] <Gamekiller77> still trying to get live migration to work
[19:28] <Gamekiller77> then that
[19:28] <fghaas> this does require, of course, that your VMs boot from volume
[19:28] <fghaas> or that your ephemeral storage is itself shared
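The nova.conf side of fghaas's trick can be sketched as follows; crudini and the shared hostname are assumptions for this era of OpenStack, and the pacemaker management of nova-compute itself is left out:

```shell
# On each compute node: restart guests that were running when the
# (failed-over) host comes up, and pin the hostname so the
# nova-compute resource can float between physical machines.
crudini --set /etc/nova/nova.conf DEFAULT resume_guest_state_on_host_boot true
crudini --set /etc/nova/nova.conf DEFAULT host compute-ha-1   # assumed shared hostname
```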
[19:28] <Gamekiller77> yes all use cases that were on ESXi are volume booting
[19:28] <Gamekiller77> well all my storage is ceph
[19:29] <Gamekiller77> hence me being here
[19:29] <fghaas> riht
[19:29] <fghaas> right
[19:29] <fghaas> but that doesn't immediately imply that your /var/lib/nova/instances is mounted off RBD, or uses CephFS
[19:30] <Gamekiller77> well it is for me
[19:30] <Gamekiller77> hehe
[19:30] <fghaas> then you're fine
[19:30] <Gamekiller77> yes i should be
[19:30] <rotbeard> Gamekiller77, are you using vmware + ceph?
[19:30] <Gamekiller77> not yet
[19:30] <rotbeard> k
[19:30] <Gamekiller77> but on my way to
[19:30] <Gamekiller77> want to test cephFS with NFS head
[19:31] <Gamekiller77> as esxi has no native RBD support as of yet
[19:31] <rotbeard> well, we're too. for now we are playing with an iscsi proxy in front of ceph
[19:31] <fghaas> remember not to use kernel NFS over CephFS
[19:31] <rotbeard> Gamekiller77, yep,. that's the point.
[19:31] * dereky (~derek@129-2-129-152.wireless.umd.edu) Quit (Quit: dereky)
[19:32] <fghaas> unless you enjoy network stack deadlocks under memory pressure
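One userspace alternative to kernel NFS in front of CephFS is nfs-ganesha with its Ceph FSAL; since it runs entirely in userspace, it sidesteps the kernel-NFS-over-CephFS deadlock fghaas warns about. A minimal, assumed config sketch:

```shell
# Hypothetical minimal export; exact option names may vary by
# nfs-ganesha version.
cat > /etc/ganesha/ganesha.conf <<'EOF'
EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    FSAL { Name = CEPH; }
}
EOF
```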
[19:32] <Gamekiller77> i poked my vmware sales person with a very big stick about that
[19:33] <Gamekiller77> this is just to play
[19:34] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:34] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[19:34] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz???)
[19:35] * NetWeaver (~NetWeaver@host.cctv.org) has joined #ceph
[19:36] <rotbeard> no idea why vmware doesn't support such types of storage :(
[19:37] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[19:38] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has joined #ceph
[19:42] * sarob (~sarob@2601:9:7080:13a:cde:ffb5:88ef:2de1) has joined #ceph
[19:44] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[19:45] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Ping timeout: 480 seconds)
[19:45] * jskinner_ (~jskinner@69.170.148.179) has joined #ceph
[19:47] * joshd2 (~jdurgin@2602:306:c5db:310:b983:5a5f:68ee:3cfa) has joined #ceph
[19:49] * mo- (~mo@2a01:4f8:141:3264::3) has joined #ceph
[19:49] <NetWeaver> Does anyone know of an approach to bare metal PXE RBD? Only stuff I can find is about pushing it through things like an iSCSI front end, but I find no iPXE, com32's, etc for a rados block device in my research, and can't imagine I'm the only one who wants this.
[19:50] * sarob (~sarob@2601:9:7080:13a:cde:ffb5:88ef:2de1) Quit (Ping timeout: 480 seconds)
[19:50] <mo-> so uhm, Im looking at a ceph cluster running on .61.2 and 1/3 mons died (disk full...)
[19:50] <mo-> can I add a .61.9 mond to that cluster just like that?
[19:50] <mo-> *mon
[19:51] * joshd1 (~jdurgin@2602:306:c5db:310:1cb3:d5b0:a038:ec95) Quit (Ping timeout: 480 seconds)
[19:52] * steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[19:52] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:52] * jskinner (~jskinner@69.170.148.179) Quit (Ping timeout: 480 seconds)
[19:52] * garphy`aw is now known as garphy
[19:54] <mo-> or do I need to find a .61.2 deb for the new node?
[19:55] * tsnider (~oftc-webi@216.240.30.25) Quit (Quit: Page closed)
[19:56] * dgbaley27 (~matt@c-76-120-64-12.hsd1.co.comcast.net) has joined #ceph
[19:57] * garphy is now known as garphy`aw
[20:00] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) Quit (Quit: Leaving.)
[20:00] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has joined #ceph
[20:01] * steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[20:03] * Cube (~Cube@66-87-64-73.pools.spcsdns.net) has joined #ceph
[20:05] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[20:11] <fghaas> NetWeaver: that is indeed the most promising approach (and most versatile, too, considering rbd isn't available for everything)
[20:11] <fghaas> slap LIO on that thing and you get even front-end FC support for pretty much free
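The LIO idea can be sketched with targetcli: map an RBD with the kernel client, register it as a block backstore, and export it over iSCSI. Pool, image name, and IQN below are assumptions:

```shell
rbd map mypool/myimage        # assumed pool/image; yields e.g. /dev/rbd0
targetcli /backstores/block create name=rbd0 dev=/dev/rbd0
targetcli /iscsi create iqn.2014-02.com.example:rbd0
targetcli /iscsi/iqn.2014-02.com.example:rbd0/tpg1/luns create /backstores/block/rbd0
```

With an FC fabric, the same backstore can instead be exported through LIO's FC target modules, which is the "front-end FC support for pretty much free" mentioned above.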
[20:12] * rturk is now known as rturk-away
[20:13] <dmick> jerker: http://tracker.ceph.com/issues/7558
[20:15] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[20:17] * arbrandes (~arbrandes@189.78.56.95) has joined #ceph
[20:18] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[20:18] <mo-> anybody got something bout different versions of mons in a cluster? can I add a .61.9 mon to a .61.2 cluster or should I get .61.2 on the new mon
[20:18] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[20:19] * rturk-away is now known as rturk
[20:20] <hasues> When I run ceph-deploy commands, I see "unhandled exceptions" in sys.excepthook, with no original exception.
[20:20] <hasues> happens at the end.
[20:24] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[20:26] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) has joined #ceph
[20:27] <hasues> Hm, is there a recommended version of python to use with ceph-deploy?
[20:28] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) Quit ()
[20:28] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) has joined #ceph
[20:33] * dgbaley27 (~matt@c-76-120-64-12.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[20:39] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[20:41] <jhujhiti> is it safe to run a qemu built for bobtail against a newer version cluster? i'd like to upgrade the bobtail cluster but i can't reboot the running VMs
[20:44] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[20:48] <alfredodeza> hasues: that is a known issue and is safe to ignore
[20:48] <alfredodeza> tricky to get rid of too :)
[20:48] <hasues> alfredodeza: I get various errors, so I'm a bit concerned.
[20:49] <alfredodeza> errors are one thing, the sys.excepthook is another
[20:49] <alfredodeza> those are safe to ignore
[20:49] <alfredodeza> there is a new release to come up that should help to improve that too
[20:49] <hasues> Unhandled exception in thread started by <function run_and_release at 0x1e57cf8>
[20:50] <alfredodeza> hasues: yep, ignore
[20:50] <alfredodeza> the upcoming release will help
[20:50] <alfredodeza> that has a TypeError in there as well right?
[20:51] <hasues> alfredodeza: I don't believe so.
[20:51] <alfredodeza> hasues: this one? https://bitbucket.org/hpk42/execnet/issue/29/new-typeerror-issues-when-closing-the
[20:51] <alfredodeza> that is going to be addressed in the next release. It is already in master
[20:52] <hasues> Yeah, mine looks like that sans the type error
[21:00] * rturk is now known as rturk-away
[21:00] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[21:01] * BillK (~BillK-OFT@124-168-237-176.dyn.iinet.net.au) has joined #ceph
[21:07] * erice (~erice@50.240.86.181) Quit (Ping timeout: 480 seconds)
[21:08] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) has left #ceph
[21:10] * diegows (~diegows@190.216.51.2) Quit (Ping timeout: 480 seconds)
[21:11] * brambles (lechuck@s0.barwen.ch) Quit (Ping timeout: 480 seconds)
[21:13] * brambles (lechuck@s0.barwen.ch) has joined #ceph
[21:18] * allsystemsarego (~allsystem@188.25.129.255) has joined #ceph
[21:19] * NetWeaver (~NetWeaver@host.cctv.org) Quit (Ping timeout: 480 seconds)
[21:25] * BillK (~BillK-OFT@124-168-237-176.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[21:26] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) has joined #ceph
[21:27] * zerick (~eocrospom@190.187.21.53) Quit (Read error: Connection reset by peer)
[21:33] <bdonnahue2> im having trouble getting my monitors in sync
[21:33] <bdonnahue2> health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean;
[21:33] <bdonnahue2> does anyone know what this means
[21:35] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[21:35] <mo-> guys. got a cluster with 2/3 mons up, I added one mon that is up, but ceph health blocks and only prints messages of being unable to reach the offline mon
[21:35] <mo-> does that mean the new node is getting the store.db and I must be patient?
[21:36] * SvenPHX1 (~Adium@wsip-174-79-34-244.ph.ph.cox.net) has left #ceph
[21:37] * cephn00b (~cephn00b@64.191.222.117) has joined #ceph
[21:38] <bens> welcome to the party, n00b
[21:38] <cephn00b> hey bens.
[21:38] <cephn00b> has anyone been able to simulate multipath via iSCSI utilizing tgt?
[21:40] <cephn00b> i would like to give a vmware host 2 paths to the same image on my ceph cluster
[21:41] <rotbeard> bdonnahue2, do you have more than 1 server with osds?
[21:43] <bdonnahue2> right now i have two monitors and two osds i plan to use but they have not been prepared or activated yet
[21:44] <bdonnahue2> only the monitors have been added to the cluster
[21:44] <bdonnahue2> monmap e1: 2 mons at {OS-004-OSD-001=50.4.50.101:6789/0,OS-004-OSD-003=50.4.50.103:6789/0}, election epoch 4, quorum 0,1 OS-004-OSD-001,OS-004-OSD-003
[21:44] <bdonnahue2> *running monitor on same host at osd
[21:45] * TMM (~hp@c97185.upc-c.chello.nl) has joined #ceph
[21:46] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[21:46] <rotbeard> bdonnahue2, I am pretty new to ceph and can't explain why, but in default with one storage node, I never saw clean PGs :P
[21:46] * arbrandes (~arbrandes@189.78.56.95) Quit (Quit: Leaving)
[21:47] <bdonnahue2> i think the pgs are inactive because the monitors are not in quorum
[21:47] * sroy (~sroy@207.96.182.162) Quit (Quit: Quitte)
[21:48] <bdonnahue2> i could be wrong though
[21:48] <bdonnahue2> election epoch 4, quorum 0,1 OS-004-OSD-001,OS-004-OSD-003
[21:48] <janos> rotbeard, with one node you should not see clean in a default config
[21:48] <janos> the default failure domain is host. so basically you'd be half-dead by default with one sotrage node
[21:49] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[21:49] <janos> *storage
[21:50] <bdonnahue2> janos any idea about two nodes?
[21:50] * garphy`aw is now known as garphy
[21:51] <janos> two storage nodes is perfectly reasonable to expect HEALTH_OK
[21:52] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[21:52] <janos> where node = host
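The default failure domain janos describes is why single-host test clusters stay degraded: the stock CRUSH rule picks each replica from a distinct host. One common workaround (the option is the real ceph.conf setting; the path is assumed) is to choose leaves at the OSD level instead, set before the cluster is created:

```shell
# Make CRUSH separate replicas across OSDs rather than across hosts,
# so a one-host lab cluster can reach HEALTH_OK.
cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
osd crush chooseleaf type = 0
EOF
```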
[21:52] <hasues> When I am creating monitors for nodes, do I need to create each one as initial?
[21:53] * rturk-away is now known as rturk
[21:53] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[21:53] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[21:54] <bdonnahue2> no you can do one and add more later
[21:54] <bdonnahue2> or do them all at the same time i believe
[21:54] <bdonnahue2> janos any advice for troublshooting my quorum
[21:55] <janos> sorry, in the middle of some unrelated mess. off top of my head - having even-numbered mons is not a great idea
[21:55] <janos> are you having issues adding the 3rd?
[21:56] <hasues> So I used mon create instead of mon create-initial, and I want to destroy it.
[21:56] <hasues> It states that I have a PermissionError.
[21:56] <rotbeard> bdonnahue2, this what janos said
[21:56] <janos> 2 can't achieve quorum. that's a tie
[21:57] <rotbeard> with 2 nodes in my local testlab I reached a healthy state
[21:57] <rotbeard> even with 1 mon
[21:57] <janos> yeah 1 mon is fine
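The rule behind this exchange: a monitor quorum needs a strict majority, floor(n/2)+1, of the monitors in the monmap. So one mon works alone, two mons need both alive (losing either loses quorum, hence "that's a tie"), and odd counts are preferred because the even count above them buys no extra failure tolerance. A quick sketch:

```shell
# Quorum requires a strict majority of the monitors in the monmap.
for n in 1 2 3 4 5; do
  echo "mons=$n need=$(( n / 2 + 1 )) tolerate_failures=$(( n - (n / 2 + 1) ))"
done
```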
[21:58] <rotbeard> btw thanks janos for pointing out that default failure domain thing
[21:58] <janos> np
[22:01] <bdonnahue2> ok ill try adding a third node later tonight. thanks for the info
[22:03] * cephn00b (~cephn00b@64.191.222.117) has left #ceph
[22:03] * valeech (~valeech@64.191.222.117) has joined #ceph
[22:04] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[22:04] * ChanServ sets mode +v andreask
[22:05] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[22:05] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[22:08] * BillK (~BillK-OFT@124-168-237-176.dyn.iinet.net.au) has joined #ceph
[22:12] * erice (~erice@65.114.129.62) has joined #ceph
[22:20] <bdonnahue2> does ceph deploy work for x86?
[22:21] <bdonnahue2> keep seeing this when i run ceph install
[22:21] <bdonnahue2> [WARNIN] http://ceph.com/rpm-emperor/el6/i386/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
[22:21] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[22:24] * piezo (~chatzilla@107-197-220-222.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[22:29] <jcsp> hmm, I'm not sure we build 32 bit packages
[22:30] <jcsp> what type of system are you running on our of curiosity?
[22:37] <bdonnahue2> its an intel dual core
[22:37] <bdonnahue2> it's one of my less functional hypervisors so i use it for dev stuff
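The 404 above comes from yum substituting the machine architecture (here i386) for $basearch in the repo URL; whether 32-bit packages exist at all is exactly what jcsp questions. A quick way to see what yum will request:

```shell
# Architecture yum substitutes for $basearch in baseurl:
uname -m    # e.g. x86_64 on a 64-bit install, i686/i386 on 32-bit
```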
[22:41] <hasues> When I try to run ceph mon remove <node> I get a permissions error.
[22:42] <hasues> Thoughts?
[22:42] <hasues> Some rule where you can not remove an initial or?
[22:44] * sarob (~sarob@mobile-166-137-187-196.mycingular.net) has joined #ceph
[22:46] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[22:51] * rendar (~s@87.19.182.167) Quit ()
[22:51] * linuxkidd (~linuxkidd@2001:420:2100:2258:39d3:de25:be2d:1e03) Quit (Quit: Leaving)
[22:52] * sarob (~sarob@mobile-166-137-187-196.mycingular.net) Quit (Ping timeout: 480 seconds)
[22:54] * erice (~erice@65.114.129.62) Quit (Ping timeout: 480 seconds)
[22:55] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[22:56] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[22:57] <bdonnahue2> hasues i got all kinds of permission issues unless i ran things as root on all machines
[23:00] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[23:01] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Quit: Leaving.)
[23:08] <hasues> bdonnahue2: States not to do that. But I tried it as root and have the same issue
[23:10] <hasues> Error connecting to cluster: PermissionError
[23:10] <hasues> That as root as well.
[23:13] <dmick> hasues: client.admin key available, readable, etc?
[23:14] <hasues> dmick: -rw-r--r--. 1 root root 77 Feb 27 15:41 /var/lib/ceph/mon/ceph-ceph2/keyring
[23:14] <hasues> 644 perms, so I guess so?
[23:15] <saturnine> So I'm doing some testing with RBD+KVM
[23:15] <dmick> that....looks like the monitor's keyring, not the client.admin keyring that the client would use
[23:15] <hasues> dmick: I am on the client running the command locally as opposed from the admin node through ceph-deploy. Where should I be looking?
[23:15] <saturnine> Running 3x nodes with 82TB OSDs each
[23:15] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[23:16] <dmick> by default in /etc/ceph/ceph.client.admin.keyring
[23:16] <saturnine> Getting ~61MB/s writes, and ~28MB/s reads in a VM with virtio
[23:16] <dmick> unless you have configured it differently
[23:17] <dmick> if you ceph-deploy'ed the cluster, it won't have that keyring unless you also ceph-deploy-admin'ed the node you're on
[23:17] <dmick> that's what ceph-deploy admin does
[23:17] <hasues> dmick: Oh, I don't know that I ever issued the admin command with ceph-deploy
[23:17] <saturnine> That seem about right for 7200RPM disks w/ SSD journals?
[23:17] <dmick> can you run any ceph command at all?
[23:17] <hasues> dmick: on the node, there is no ceph.client.admin.keyring
[23:18] <hasues> dmick: no, if I type ceph it says permissions
[23:18] <xmltok> if i am modifying my crushmap by hand, does the weighting needs to add up evenly through the layers to get a proper distribution? for example the weight of the host should be the sum of my osd device weights? and racks the sum of the hosts
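(Editor's note: xmltok's question about hand-edited CRUSH maps can be illustrated with a short sketch. This is a hypothetical illustration, not Ceph's actual code: the convention in a decompiled CRUSH map is that a bucket's weight equals the sum of its items' weights, so a host's weight is the sum of its OSD device weights, and a rack's weight is the sum of its hosts'. All names and weights below are made up.)

```python
# Hypothetical CRUSH-map-like layout: OSD device weights grouped by host.
osd_weights = {
    "host-a": {"osd.0": 1.0, "osd.1": 1.0},
    "host-b": {"osd.2": 2.0, "osd.3": 2.0},
}

# Host bucket weight: sum of the OSD device weights under that host.
host_weights = {host: sum(osds.values()) for host, osds in osd_weights.items()}

# Rack bucket weight: sum of the host bucket weights it contains.
rack_weight = sum(host_weights.values())

print(host_weights)  # {'host-a': 2.0, 'host-b': 4.0}
print(rack_weight)   # 6.0
```

If the bucket weights in a hand-edited map do not add up this way, data distribution will be skewed toward or away from the mismatched buckets.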
[23:18] <dmick> well, yes, then, the issue is not connected with the monitor, the issue is that you don't have client permissions at all on that machine
[23:18] * owenmurr (~owenmurr@109.175.201.0) Quit (Quit: Lost terminal)
[23:19] * hasues scratches head.
[23:19] <dmick> in a nutshell: the cluster has stored key/permission pairs
[23:19] <dmick> anything connecting has to use the right key to get that permission
[23:19] <dmick> there are separate keys for each connection type; one of them is "client.admin", which is "the ceph command"
[23:20] <dmick> so any host running that command has to have the right key in a keyring file to be able to authenticate with the cluster and get the permissions it needs
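(Editor's note: the lookup dmick describes can be sketched as follows. A Ceph keyring is a small INI-style file with a section per entity name, e.g. `[client.admin]` with a `key = ...` line; the client picks the section matching the entity name given by `-n` (default `client.admin`). The key value and helper function here are made up for illustration, and the real file indents the `key` line, which is omitted here for simplicity.)

```python
import configparser

# A minimal, made-up keyring file body in the INI-like keyring format.
keyring_text = """\
[client.admin]
key = EXAMPLEKEYONLY==
"""

def lookup_key(keyring: str, name: str = "client.admin") -> str:
    """Return the secret for entity `name`, as a client using -n <name> would."""
    cp = configparser.ConfigParser()
    cp.read_string(keyring)
    return cp[name]["key"]

print(lookup_key(keyring_text))  # EXAMPLEKEYONLY==
```

With no key for the requested entity in any keyring on the search path, authentication fails with exactly the kind of PermissionError hasues is seeing.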
[23:20] <hasues> dmick: ah, so what command creates that keyring
[23:20] <dmick> it is likely that the ceph-deploy host has it in a local directory; ceph-deploy admin distributes it
[23:20] <hasues> I see that file on another node.
[23:20] <dmick> "the cluster creation process" creates that keyring
[23:20] <hasues> So I'm curious why it isn't on the other
[23:21] <dmick> there is, of course, no way to answer that question with names like "another node" and "other node"
[23:21] <hasues> three nodes
[23:21] <dmick> go to where you ran ceph-deploy
[23:21] <hasues> ceph1, ceph2, ceph3.
[23:21] <dmick> is it in that dir?
[23:21] <hasues> ceph1 and ceph2 are to be osds
[23:21] <hasues> ceph3 is where i run ceph-deploy
[23:21] <hasues> Yes
[23:22] <dmick> $ ceph-deploy admin -h
[23:22] <dmick> usage: ceph-deploy admin [-h] [HOST [HOST ...]]
[23:22] <dmick> Push configuration and client.admin key to a remote host.
[23:22] <dmick> positional arguments:
[23:22] <dmick> HOST host to configure for ceph administration
[23:22] * allsystemsarego (~allsystem@188.25.129.255) Quit (Quit: Leaving)
[23:22] <hasues> Curious, it seems like the preflight and storage quick start would have mentioned this first?
[23:22] <dmick> http://ceph.com/docs/master/rados/deployment/ceph-deploy-admin/
[23:23] <dmick> arguably that could be higher in the list on http://ceph.com/docs/master/rados/deployment/
[23:23] <hasues> apparently :)
[23:26] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[23:27] <mo-> so uh.. still waiting on this add monitor process to finish. how much larger is this store.db gonna grow :S Im looking at 31GB and counting
[23:27] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) has joined #ceph
[23:28] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[23:28] * BillK (~BillK-OFT@124-168-237-176.dyn.iinet.net.au) Quit (Quit: ZNC - http://znc.in)
[23:28] <hasues> dmick: So, i can now run the ceph command, but I can't tell it to remove the monitor there.
[23:29] <dmick> and I bet you're just now typing to explain what the symptom is now
[23:29] <hasues> ceph --cluster=ceph -n mon. -k /var/lib/ceph/mon/ceph-ceph2/keyring mon remove ceph2
[23:29] <hasues> 2014-02-27 17:26:21.295131 7f2dfe8f1700 0 librados: mon. authentication error (1) Operation not permitted
[23:29] <hasues> Error connecting to cluster: PermissionError
[23:30] <dmick> where did those -n and -k switches come from?
[23:30] <hasues> dmick: When I originally issued this from ceph-deploy, it put them there. So I merely copied the command and ran it locally on the node.
[23:30] <dmick> all that stuff I just said about client.admin and keyrings? that's completely subverted by using those switches
[23:31] <dmick> keys are looked up in keyrings by name. that's -n, default client.admin
[23:31] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) Quit (Quit: dereky)
[23:31] <dmick> keyrings are found by searching sets of paths, default /etc/ceph; that's overridden by -k
[23:32] <dmick> so you don't want them
[23:32] <hasues> dmick: curious, so what created those in the first place?
[23:33] <mo-> so... Im looking at a mon thats currently getting added to the cluster. there is a hard limit of 50G for disk space and Im not sure what to do if the mon hits that limit
[23:33] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[23:34] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[23:34] * The_Bishop_ (~bishop@f050145151.adsl.alicedsl.de) has joined #ceph
[23:35] <mo-> the store.db folders on the pre-existing mons are about 15G in size, the new mon is already sitting at 27G and counting
[23:35] * ivotron (~ivotron@adsl-99-146-2-252.dsl.pltn13.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[23:37] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[23:37] * valeech (~valeech@64.191.222.117) Quit (Quit: valeech)
[23:39] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[23:40] * jskinner_ (~jskinner@69.170.148.179) Quit (Ping timeout: 480 seconds)
[23:41] * The_Bishop (~bishop@f055026026.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[23:41] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[23:41] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[23:42] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[23:44] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[23:46] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[23:46] * sarob (~sarob@mobile-166-137-187-196.mycingular.net) has joined #ceph
[23:47] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[23:48] * al (quassel@niel.cx) Quit (Remote host closed the connection)
[23:49] * sarob (~sarob@mobile-166-137-187-196.mycingular.net) Quit (Read error: Connection reset by peer)
[23:50] * al (quassel@niel.cx) has joined #ceph
[23:55] * rturk is now known as rturk-away
[23:56] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[23:58] * al (quassel@niel.cx) Quit (Remote host closed the connection)
[23:59] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.