#ceph IRC Log

IRC Log for 2015-02-20

Timestamps are in GMT/BST.

[0:00] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:00] * georgem (~Adium@184.151.178.17) Quit (Quit: Leaving.)
[0:05] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[0:05] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:13] * DV (~veillard@2001:41d0:1:d478::1) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Sysadmin88 (~IceChat77@2.125.213.8) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * nigwil (~Oz@li747-216.members.linode.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * stephan1 (~Adium@dslb-178-008-020-100.178.008.pools.vodafone-ip.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * zigo (quasselcor@atl.apt-proxy.gplhost.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Vacuum (~vovo@88.130.211.235) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * lxo (~aoliva@lxo.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * oms101 (~oms101@p20030057EA081E00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * madkiss1 (~madkiss@ip5b418369.dynamic.kabel-deutschland.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * _NiC (~kristian@aeryn.ronningen.no) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Hell_Fire_ (~hellfire@123-243-155-184.static.tpgi.com.au) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * tserong (~tserong@203-173-33-52.dyn.iinet.net.au) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * sh (~sh@2001:6f8:1337:0:50f0:a8fe:9b20:7f3e) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * marcan (bip@marcansoft.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * infernix (nix@2001:41f0::ffff:ffff:ffff:fffe) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * coreping (~Michael_G@n1.coreping.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * zz_hitsumabushi (~hitsumabu@175.184.30.148) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * palmeida (~palmeida@gandalf.wire-consulting.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * wolsen (~wolsen@162.213.34.152) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * redf (~red@chello084112110034.11.11.vie.surfer.at) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * saltsa (~joonas@dsl-hkibrasgw1-58c01a-36.dhcp.inet.fi) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * raso (~raso@deb-multimedia.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * dis (~dis@109.110.67.201) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * al (quassel@niel.cx) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Xiol (~Xiol@shrike.daneelwell.eu) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Tim_ (~tim@rev-178.21.220.91.quarantainenet.nl) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * thb (~me@0001bd58.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * LeaChim (~LeaChim@host86-147-114-247.range86-147.btcentralplus.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * fretb (frederik@november.openminds.be) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * fdmanana (~fdmanana@bl4-182-212.dsl.telepac.pt) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * sileht (~sileht@gizmo.sileht.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * bauruine (~bauruine@wotan.tuxli.ch) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * a1-away (~jelle@62.27.85.48) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * _karl (~karl@kamr.at) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mschiff (~mschiff@mx10.schiffbauer.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Amto_res1 (~amto_res@ks312256.kimsufi.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jamespage (~jamespage@culvain.gromper.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * harmw (~harmw@chat.manbearpig.nl) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * foxxx0 (~fox@2a01:4f8:200:216b::2) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * MaZ- (~maz@00016955.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * frickler (~jens@v1.jayr.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * via (~via@smtp2.matthewvia.info) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Hazelesque (~hazel@2a03:9800:10:13::2) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * tomaw (tom@tomaw.noc.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * bd (~bd@mail.bc-bd.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Anticimex (anticimex@95.80.32.80) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * lurbs (user@uber.geek.nz) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * tries_ (~tries__@2a01:2a8:2000:ffff:1260:4bff:fe6f:af91) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * tries (ident@easytux.ch) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * irq0 (~seri@amy.irq0.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mo- (~mo@2a01:4f8:141:3264:c0f:fee:0:4) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * boolman (boolman@79.138.78.238) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mui (mui@eutanasia.mui.fi) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * kaisan (~kai@zaphod.xs4all.nl) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ToM- (~tom@atlas.planetix.fi) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * wonko_be_ (bernard@november.openminds.be) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * baffle (baffle@jump.stenstad.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Zethrok (~martin@95.154.26.34) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mmgaggle (~kyle@cerebrum.dreamservers.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * todin (tuxadero@kudu.in-berlin.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * olc-_ (~olecam@93.184.35.82) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Svedrin (svedrin@elwing.funzt-halt.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * SWAT (~swat@cyberdyneinc.xs4all.nl) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * sbadia (~sbadia@marcellin.sebian.fr) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * wintamut1 (~wintamute@mail.wintamute.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * rektide (~rektide@eldergods.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * JCL (~JCL@ip24-253-45-236.lv.lv.cox.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * bandrus (~brian@50.23.113.236-static.reverse.softlayer.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jskinner (~jskinner@host-95-2-129.infobunker.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * delattec (~cdelatte@204-235-114.165.twcable.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * sputnik13 (~sputnik13@74.202.214.170) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Kupo1 (~tyler.wil@23.111.254.159) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * moore (~moore@64.202.160.88) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Concubidated (~Adium@71.21.5.251) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * angdraug (~angdraug@c-50-174-102-105.hsd1.ca.comcast.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * scuttlemonkey (~scuttle@nat-pool-rdu-t.redhat.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * xarses (~andreww@12.164.168.117) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * zerick (~zerick@irc.quassel.zerick.me) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jtang (~jtang@109.255.42.21) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * fvl (~fvl@ipjusup.net.tomline.ru) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * [Leeloo] (~Leeloo@ec2-54-88-140-156.compute-1.amazonaws.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * stj (~stj@2604:a880:800:10::2cc:b001) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * davidzlap (~Adium@2605:e000:1313:8003:215a:ad8f:b630:36ee) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * pmxceph (~pmxceph@208.98.194.163) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * houkouonchi-work (~linux@2607:f298:b:635:225:90ff:fe39:38ce) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * thomas (uid68081@id-68081.charlton.irccloud.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * eternaleye (~eternaley@50.245.141.77) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mondkalbantrieb (~quassel@mondkalbantrieb.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Bosse (~bosse@rifter2.klykken.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * qybl (~foo@maedhros.krzbff.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * blynch (~blynch@vm-nat.msi.umn.edu) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Annttu (annttu@0001934a.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Georgyo (~georgyo@shamm.as) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * tcatm (~quassel@2a01:4f8:200:71e3:5054:ff:feff:cbce) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * HauM1 (~HauM1@login.univie.ac.at) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * \ask (~ask@oz.develooper.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * sw3 (sweaung@2400:6180:0:d0::66:100f) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * fam_away (~famz@nat-pool-bos-t.redhat.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * chutz (~chutz@rygel.linuxfreak.ca) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Gugge-47527 (gugge@kriminel.dk) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * macjack1 (~Thunderbi@123.51.160.200) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * dmick (~dmick@2607:f298:a:607:c5ec:52cf:f46:69f5) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * j^2 (sid14252@id-14252.brockwell.irccloud.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * dwm (~dwm@northrend.tastycake.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * dlan (~dennis@116.228.88.131) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * yehudasa_ (~yehudasa@2607:f298:a:607:cd77:18f1:8c32:62c2) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * carmstrong (sid22558@id-22558.uxbridge.irccloud.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ipolyzos (sid45277@id-45277.uxbridge.irccloud.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * fred`` (fred@earthli.ng) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * cuqa (~oftc-webi@212.224.70.43) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * _prime_ (~oftc-webi@199.168.44.192) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * derjohn_mob (~aj@tmo-113-135.customers.d1-online.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * SPACESHIP (~chatzilla@d24-141-231-126.home.cgocable.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * diegows (~diegows@190.190.5.238) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * nhm (~nhm@65-128-165-174.mpls.qwest.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ifur (~osm@0001f63e.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ShaunR (~ShaunR@staff.ndchost.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * puffy (~puffy@50.185.218.255) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * cholcombe973 (~chris@73.25.105.99) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * linuxkidd (~linuxkidd@92.sub-70-210-196.myvzw.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * debian112 (~bcolbert@24.126.201.64) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * tupper_ (~tcole@rtp-isp-nat-pool1-1.cisco.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * synacksyn (6dbebb8f@107.161.19.109) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * bro_ (~flybyhigh@panik.darksystem.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Nats (~natscogs@114.31.195.238) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * gsilvis (~andovan@c-75-69-162-72.hsd1.ma.comcast.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Tene (~tene@173.13.139.236) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * garphy`aw (~garphy@frank.zone84.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ZyTer (~ZyTer@ghostbusters.apinnet.fr) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * cronix1 (~cronix@5.199.139.166) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * oblu (~o@62.109.134.112) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * sig_wall (~adjkru@xn--hwgz2tba.lamo.su) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * fouxm (~foucault@ks01.commit.ninja) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mookins (~mookins@induct3.lnk.telstra.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * athrift_ (~nz_monkey@203.86.205.13) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ctd (~root@00011932.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * singler (~singler@zeta.kirneh.eu) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * nwat (~nwat@kyoto.soe.ucsc.edu) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * kraken (~kraken@gw.sepia.ceph.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * carter (~carter@li98-136.members.linode.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * med (~medberry@71.74.177.250) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ohnomrbill (~ohnomrbil@c-67-174-241-112.hsd1.ca.comcast.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * trociny (~mgolub@93.183.239.2) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * seapasul1i (~seapasull@95.85.33.150) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mattronix (~quassel@fw1.sdc.mattronix.nl) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * gregsfortytwo (~gregsfort@209.132.181.86) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * treaki (~treaki@p4FDF62BB.dip0.t-ipconnect.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * sudocat (~davidi@192.185.1.20) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * capri (~capri@212.218.127.222) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * SteveCapper (~steven@marmot.wormnet.eu) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * spudly (~spudly@ext-tok.murf.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * tobiash_ (~quassel@mail.bmw-carit.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * morse (~morse@supercomputing.univpm.it) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * CephTestC (~CephTestC@199.91.185.156) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * cmorandin (~cmorandin@194.206.51.157) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jamespd (~mucky@mucky.socket7.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * _br_ (~bjoern_of@213-239-215-232.clients.your-server.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * kapil (~ksharma@2620:113:80c0:5::2222) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mivaho (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * bjornar (~bjornar@ns3.uniweb.no) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * maxxware (~maxx@149.210.133.105) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mlausch (~mlausch@2001:8d8:1fe:7:893:30c1:d742:22fb) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * joao (~joao@249.38.136.95.rev.vodafone.pt) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * lmb (lmb@212.8.204.10) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mfa298 (~mfa298@gateway.yapd.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * CAPSLOCK2000 (~oftc@2001:610:748:1::8) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * liiwi (liiwi@idle.fi) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * BranchPredictor (branch@predictor.org.pl) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * beuwolf (~flo@62.113.200.37) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * xophe (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * dvanders (~dvanders@dvanders-hpi5.cern.ch) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * flaf (~flaf@2001:41d0:1:7044::1) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * chrome0 (~chrome0@static.202.35.46.78.clients.your-server.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * NotExist (~notexist@kvps-180-235-255-92.secure.ne.jp) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ccourtaut_ (~ccourtaut@2001:41d0:1:eed3::1) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * tacticus (~tacticus@v6.kca.id.au) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * guppy (~quassel@guppy.xxx) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * renzhi (~renzhi@116.226.62.53) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * psiekl (psiekl@wombat.eu.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * DLange (~DLange@dlange.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Rickus (~Rickus@office.protected.ca) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * sage (~quassel@cpe-76-95-230-100.socal.res.rr.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * kevinkevin (52edc5d1@107.161.19.109) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * rmoe (~quassel@12.164.168.117) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * nitti (~nitti@162.222.47.218) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * asalor (~asalor@2a00:1028:96c1:4f6a:204:e2ff:fea1:64e6) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * MACscr (~Adium@2601:d:c800:de3:514c:70d1:205f:a953) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * halbritt (~halbritt@65.50.222.90) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jnq (~jnq@95.85.22.50) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * L2SHO (~L2SHO@2001:19f0:1000:5123:8c84:23f:8ca:f675) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * yogh (~yogh@sol.kvlt.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * dmsimard (~dmsimard@198.72.123.202) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * nwf (~nwf@00018577.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * masterpe (~masterpe@2a01:670:400::43) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * rwheeler (~rwheeler@173.48.208.246) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jks (~jks@178.155.151.121) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * boichev (~boichev@213.169.56.130) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * skullone (~skullone@shell.skull-tech.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * SamYaple (~SamYaple@162.209.126.134) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * MrBy (~MrBy@85.115.23.2) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * leseb- (~leseb@81-64-215-19.rev.numericable.fr) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * `10 (~10@69.169.91.14) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Kingrat (~shiny@cpe-96-29-149-153.swo.res.rr.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * epf (epf@epf.im) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * andrewschoen (~andrewsch@50.56.86.195) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * burley_ (~khemicals@cpe-98-28-233-158.woh.res.rr.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * loganlsf1d (~logan@216.245.207.2) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ismell (~ismell@host-24-52-35-110.beyondbb.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Meths (~meths@2.30.117.115) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * codice_ (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * cmdrk_ (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mdxi_ (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * eqhmcow (~eqhmcow@adsl-74-242-202-15.rmo.bellsouth.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * kvanals (kvanals@kvanals.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * rhamon (~rhamon@208.71.184.41) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * s3an2 (~sean@korn.s3an.me.uk) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * runfromn1where (~runfromno@pool-70-104-139-21.nycmny.fios.verizon.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * destrudo (~destrudo@64.142.74.180) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * gabrtv (sid36209@id-36209.brockwell.irccloud.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * off_rhoden (~off_rhode@209.132.181.86) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * dustinm` (~dustinm`@2607:5300:100:200::160d) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * schmee (~quassel@phobos.isoho.st) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * phantomcircuit (~phantomci@smartcontracts.us) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * toabctl (~toabctl@toabctl.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * kevincox (~kevincox@4.s.kevincox.ca) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jkappert (~jkappert@5.39.189.119) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * acaos (~zac@209.99.103.42) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Azrael (~azrael@terra.negativeblue.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * TomB_ (~tom@167.88.45.146) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * purpleidea (~james@216.252.94.181) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * trond_ (~trond@evil-server.alseth.info) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jeffhung_ (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * darkfader (~floh@88.79.251.60) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * beardo (~beardo__@beardo.cc.lehigh.edu) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * hybrid512 (~walid@195.200.167.70) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * shk (sid33582@id-33582.uxbridge.irccloud.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * supay (sid47179@id-47179.uxbridge.irccloud.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * rturk-away (~rturk@ds3553.dreamservers.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * scalability-junk_ (sid6422@id-6422.charlton.irccloud.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Pintomatic (sid25118@id-25118.charlton.irccloud.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * KindOne (kindone@0001a7db.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * fitzdsl (~Romain@dedibox.fitzdsl.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * _nick (~nick@zarquon.dischord.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * devicenull (sid4013@id-4013.charlton.irccloud.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * smiley_ (~smiley@205.153.36.170) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * danderson (~dave@atlas.natulte.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * tchmnkyz (tchmnkyz@0001638b.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * tnt (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Fetch (fetch@gimel.cepheid.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * soren (~soren@00013a4f.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * shaon_ (~shaon@198.50.164.24) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * jackhill (~jackhill@bog.hcoop.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * JoeJulian (~JoeJulian@shared.gaealink.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * terje_ (~joey@63.228.91.225) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * bilco105_ (~bilco105@irc.bilco105.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * magicrobotmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * DrewBeer (~DrewBeer@216.152.240.203) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * terje__ (~root@135.109.216.239) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * lkoranda (~lkoranda@213.175.37.10) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * mtanski (~mtanski@65.244.82.98) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * colonD (~colonD@173-165-224-105-minnesota.hfc.comcastbusiness.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ByteSore (~bytesore@5.39.189.119) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * saturnine (~saturnine@66.219.20.211) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * markl (~mark@knm.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * sadbox (~jmcguire@sadbox.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * blahnana (~bman@104-97-248-162-static.reverse.queryfoundry.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * nyov (~nyov@178.33.33.184) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * Psi-Jack (~psi-jack@lhmon.linux-help.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * WintermeW_ (~WintermeW@212-83-158-61.rev.poneytelecom.eu) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * dec (~dec@ec2-54-66-50-124.ap-southeast-2.compute.amazonaws.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * cfreak200 (andi@p4FF3E9B0.dip0.t-ipconnect.de) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * ktdreyer (~kdreyer@polyp.adiemus.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * loicd (~loicd@cmd179.fsffrance.org) Quit (coulomb.oftc.net charon.oftc.net)
[0:13] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (coulomb.oftc.net charon.oftc.net)
[0:16] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[0:16] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[0:16] * Rickus (~Rickus@office.protected.ca) has joined #ceph
[0:16] * derjohn_mob (~aj@tmo-113-135.customers.d1-online.com) has joined #ceph
[0:16] * SPACESHIP (~chatzilla@d24-141-231-126.home.cgocable.net) has joined #ceph
[0:16] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[0:16] * diegows (~diegows@190.190.5.238) has joined #ceph
[0:16] * JCL (~JCL@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[0:16] * nhm (~nhm@65-128-165-174.mpls.qwest.net) has joined #ceph
[0:16] * cuqa (~oftc-webi@212.224.70.43) has joined #ceph
[0:16] * bandrus (~brian@50.23.113.236-static.reverse.softlayer.com) has joined #ceph
[0:16] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[0:16] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[0:16] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[0:16] * davidzlap (~Adium@2605:e000:1313:8003:215a:ad8f:b630:36ee) has joined #ceph
[0:16] * sage (~quassel@cpe-76-95-230-100.socal.res.rr.com) has joined #ceph
[0:16] * LeaChim (~LeaChim@host86-147-114-247.range86-147.btcentralplus.com) has joined #ceph
[0:16] * delattec (~cdelatte@204-235-114.165.twcable.com) has joined #ceph
[0:16] * kevinkevin (52edc5d1@107.161.19.109) has joined #ceph
[0:16] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:16] * treaki (~treaki@p4FDF62BB.dip0.t-ipconnect.de) has joined #ceph
[0:16] * sputnik13 (~sputnik13@74.202.214.170) has joined #ceph
[0:16] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[0:16] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[0:16] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[0:16] * moore (~moore@64.202.160.88) has joined #ceph
[0:16] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[0:16] * rmoe (~quassel@12.164.168.117) has joined #ceph
[0:16] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[0:16] * sudocat (~davidi@192.185.1.20) has joined #ceph
[0:16] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[0:16] * angdraug (~angdraug@c-50-174-102-105.hsd1.ca.comcast.net) has joined #ceph
[0:16] * scuttlemonkey (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[0:16] * xarses (~andreww@12.164.168.117) has joined #ceph
[0:16] * puffy (~puffy@50.185.218.255) has joined #ceph
[0:16] * cholcombe973 (~chris@73.25.105.99) has joined #ceph
[0:16] * linuxkidd (~linuxkidd@92.sub-70-210-196.myvzw.com) has joined #ceph
[0:16] * nitti (~nitti@162.222.47.218) has joined #ceph
[0:16] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[0:16] * tupper_ (~tcole@rtp-isp-nat-pool1-1.cisco.com) has joined #ceph
[0:16] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[0:16] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[0:16] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) has joined #ceph
[0:16] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[0:16] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[0:16] * capri (~capri@212.218.127.222) has joined #ceph
[0:16] * asalor (~asalor@2a00:1028:96c1:4f6a:204:e2ff:fea1:64e6) has joined #ceph
[0:16] * synacksyn (6dbebb8f@107.161.19.109) has joined #ceph
[0:16] * zerick (~zerick@irc.quassel.zerick.me) has joined #ceph
[0:16] * jtang (~jtang@109.255.42.21) has joined #ceph
[0:16] * SteveCapper (~steven@marmot.wormnet.eu) has joined #ceph
[0:16] * fvl (~fvl@ipjusup.net.tomline.ru) has joined #ceph
[0:16] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[0:16] * MACscr (~Adium@2601:d:c800:de3:514c:70d1:205f:a953) has joined #ceph
[0:16] * [Leeloo] (~Leeloo@ec2-54-88-140-156.compute-1.amazonaws.com) has joined #ceph
[0:16] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) has joined #ceph
[0:16] * stj (~stj@2604:a880:800:10::2cc:b001) has joined #ceph
[0:16] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[0:16] * _prime_ (~oftc-webi@199.168.44.192) has joined #ceph
[0:16] * bro_ (~flybyhigh@panik.darksystem.net) has joined #ceph
[0:16] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[0:16] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[0:16] * ctd (~root@00011932.user.oftc.net) has joined #ceph
[0:16] * macjack1 (~Thunderbi@123.51.160.200) has joined #ceph
[0:16] * dmick (~dmick@2607:f298:a:607:c5ec:52cf:f46:69f5) has joined #ceph
[0:16] * pmxceph (~pmxceph@208.98.194.163) has joined #ceph
[0:16] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[0:16] * Nats (~natscogs@114.31.195.238) has joined #ceph
[0:16] * houkouonchi-work (~linux@2607:f298:b:635:225:90ff:fe39:38ce) has joined #ceph
[0:16] * j^2 (sid14252@id-14252.brockwell.irccloud.com) has joined #ceph
[0:16] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[0:16] * mookins (~mookins@induct3.lnk.telstra.net) has joined #ceph
[0:16] * sw3 (sweaung@2400:6180:0:d0::66:100f) has joined #ceph
[0:16] * thomas (uid68081@id-68081.charlton.irccloud.com) has joined #ceph
[0:16] * eternaleye (~eternaley@50.245.141.77) has joined #ceph
[0:16] * gsilvis (~andovan@c-75-69-162-72.hsd1.ma.comcast.net) has joined #ceph
[0:16] * mondkalbantrieb (~quassel@mondkalbantrieb.de) has joined #ceph
[0:16] * nwat (~nwat@kyoto.soe.ucsc.edu) has joined #ceph
[0:16] * Tene (~tene@173.13.139.236) has joined #ceph
[0:16] * dwm (~dwm@northrend.tastycake.net) has joined #ceph
[0:16] * garphy`aw (~garphy@frank.zone84.net) has joined #ceph
[0:16] * dlan (~dennis@116.228.88.131) has joined #ceph
[0:16] * singler (~singler@zeta.kirneh.eu) has joined #ceph
[0:16] * HauM1 (~HauM1@login.univie.ac.at) has joined #ceph
[0:16] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[0:16] * ZyTer (~ZyTer@ghostbusters.apinnet.fr) has joined #ceph
[0:16] * athrift_ (~nz_monkey@203.86.205.13) has joined #ceph
[0:16] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[0:16] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) has joined #ceph
[0:16] * \ask (~ask@oz.develooper.com) has joined #ceph
[0:16] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[0:16] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[0:16] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[0:16] * med (~medberry@71.74.177.250) has joined #ceph
[0:16] * ohnomrbill (~ohnomrbil@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[0:16] * yehudasa_ (~yehudasa@2607:f298:a:607:cd77:18f1:8c32:62c2) has joined #ceph
[0:16] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) has joined #ceph
[0:16] * trociny (~mgolub@93.183.239.2) has joined #ceph
[0:16] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[0:16] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) has joined #ceph
[0:16] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[0:16] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) has joined #ceph
[0:16] * oblu (~o@62.109.134.112) has joined #ceph
[0:16] * Georgyo (~georgyo@shamm.as) has joined #ceph
[0:16] * Bosse (~bosse@rifter2.klykken.com) has joined #ceph
[0:16] * seapasul1i (~seapasull@95.85.33.150) has joined #ceph
[0:16] * mattronix (~quassel@fw1.sdc.mattronix.nl) has joined #ceph
[0:16] * carmstrong (sid22558@id-22558.uxbridge.irccloud.com) has joined #ceph
[0:16] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[0:16] * gregsfortytwo (~gregsfort@209.132.181.86) has joined #ceph
[0:16] * tcatm (~quassel@2a01:4f8:200:71e3:5054:ff:feff:cbce) has joined #ceph
[0:16] * qybl (~foo@maedhros.krzbff.de) has joined #ceph
[0:16] * sig_wall (~adjkru@xn--hwgz2tba.lamo.su) has joined #ceph
[0:16] * fouxm (~foucault@ks01.commit.ninja) has joined #ceph
[0:16] * fam_away (~famz@nat-pool-bos-t.redhat.com) has joined #ceph
[0:16] * ipolyzos (sid45277@id-45277.uxbridge.irccloud.com) has joined #ceph
[0:16] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[0:16] * Annttu (annttu@0001934a.user.oftc.net) has joined #ceph
[0:16] * fred`` (fred@earthli.ng) has joined #ceph
[0:16] * spudly (~spudly@ext-tok.murf.org) has joined #ceph
[0:16] * tobiash_ (~quassel@mail.bmw-carit.de) has joined #ceph
[0:16] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[0:16] * _br_ (~bjoern_of@213-239-215-232.clients.your-server.de) has joined #ceph
[0:16] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[0:16] * CephTestC (~CephTestC@199.91.185.156) has joined #ceph
[0:16] * cmorandin (~cmorandin@194.206.51.157) has joined #ceph
[0:16] * jamespd (~mucky@mucky.socket7.org) has joined #ceph
[0:16] * fretb (frederik@november.openminds.be) has joined #ceph
[0:16] * mjeanson (~mjeanson@00012705.user.oftc.net) has joined #ceph
[0:16] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[0:16] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[0:16] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[0:16] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[0:16] * halbritt (~halbritt@65.50.222.90) has joined #ceph
[0:16] * kapil (~ksharma@2620:113:80c0:5::2222) has joined #ceph
[0:16] * jnq (~jnq@95.85.22.50) has joined #ceph
[0:16] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[0:16] * mivaho (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) has joined #ceph
[0:16] * bjornar (~bjornar@ns3.uniweb.no) has joined #ceph
[0:16] * L2SHO (~L2SHO@2001:19f0:1000:5123:8c84:23f:8ca:f675) has joined #ceph
[0:16] * yogh (~yogh@sol.kvlt.net) has joined #ceph
[0:16] * fdmanana (~fdmanana@bl4-182-212.dsl.telepac.pt) has joined #ceph
[0:16] * dmsimard (~dmsimard@198.72.123.202) has joined #ceph
[0:16] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[0:16] * nwf (~nwf@00018577.user.oftc.net) has joined #ceph
[0:16] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[0:16] * rwheeler (~rwheeler@173.48.208.246) has joined #ceph
[0:16] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[0:16] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[0:16] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[0:16] * jks (~jks@178.155.151.121) has joined #ceph
[0:16] * boichev (~boichev@213.169.56.130) has joined #ceph
[0:16] * maxxware (~maxx@149.210.133.105) has joined #ceph
[0:16] * mlausch (~mlausch@2001:8d8:1fe:7:893:30c1:d742:22fb) has joined #ceph
[0:16] * skullone (~skullone@shell.skull-tech.com) has joined #ceph
[0:16] * joao (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[0:16] * lmb (lmb@212.8.204.10) has joined #ceph
[0:16] * mfa298 (~mfa298@gateway.yapd.net) has joined #ceph
[0:16] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[0:16] * MrBy (~MrBy@85.115.23.2) has joined #ceph
[0:16] * leseb- (~leseb@81-64-215-19.rev.numericable.fr) has joined #ceph
[0:16] * `10 (~10@69.169.91.14) has joined #ceph
[0:16] * Kingrat (~shiny@cpe-96-29-149-153.swo.res.rr.com) has joined #ceph
[0:16] * andrewschoen (~andrewsch@50.56.86.195) has joined #ceph
[0:16] * epf (epf@epf.im) has joined #ceph
[0:16] * burley_ (~khemicals@cpe-98-28-233-158.woh.res.rr.com) has joined #ceph
[0:16] * loganlsf1d (~logan@216.245.207.2) has joined #ceph
[0:16] * ismell (~ismell@host-24-52-35-110.beyondbb.com) has joined #ceph
[0:16] * Meths (~meths@2.30.117.115) has joined #ceph
[0:16] * codice_ (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[0:16] * cmdrk_ (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) has joined #ceph
[0:16] * eqhmcow (~eqhmcow@adsl-74-242-202-15.rmo.bellsouth.net) has joined #ceph
[0:16] * mdxi_ (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[0:16] * kvanals (kvanals@kvanals.org) has joined #ceph
[0:16] * rhamon (~rhamon@208.71.184.41) has joined #ceph
[0:16] * s3an2 (~sean@korn.s3an.me.uk) has joined #ceph
[0:16] * runfromn1where (~runfromno@pool-70-104-139-21.nycmny.fios.verizon.net) has joined #ceph
[0:16] * destrudo (~destrudo@64.142.74.180) has joined #ceph
[0:16] * gabrtv (sid36209@id-36209.brockwell.irccloud.com) has joined #ceph
[0:16] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[0:16] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[0:16] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[0:16] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) has joined #ceph
[0:16] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[0:16] * CAPSLOCK2000 (~oftc@2001:610:748:1::8) has joined #ceph
[0:16] * off_rhoden (~off_rhode@209.132.181.86) has joined #ceph
[0:16] * bauruine (~bauruine@wotan.tuxli.ch) has joined #ceph
[0:16] * dustinm` (~dustinm`@2607:5300:100:200::160d) has joined #ceph
[0:16] * liiwi (liiwi@idle.fi) has joined #ceph
[0:16] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[0:16] * beuwolf (~flo@62.113.200.37) has joined #ceph
[0:16] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[0:16] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) has joined #ceph
[0:16] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[0:16] * a1-away (~jelle@62.27.85.48) has joined #ceph
[0:16] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[0:16] * toabctl (~toabctl@toabctl.de) has joined #ceph
[0:16] * kevincox (~kevincox@4.s.kevincox.ca) has joined #ceph
[0:16] * phantomcircuit (~phantomci@smartcontracts.us) has joined #ceph
[0:16] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) has joined #ceph
[0:16] * jkappert (~jkappert@5.39.189.119) has joined #ceph
[0:16] * acaos (~zac@209.99.103.42) has joined #ceph
[0:16] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[0:16] * TomB_ (~tom@167.88.45.146) has joined #ceph
[0:16] * purpleidea (~james@216.252.94.181) has joined #ceph
[0:16] * trond_ (~trond@evil-server.alseth.info) has joined #ceph
[0:16] * jeffhung_ (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[0:16] * darkfader (~floh@88.79.251.60) has joined #ceph
[0:16] * _karl (~karl@kamr.at) has joined #ceph
[0:16] * BranchPredictor (branch@predictor.org.pl) has joined #ceph
[0:16] * beardo (~beardo__@beardo.cc.lehigh.edu) has joined #ceph
[0:16] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[0:16] * shk (sid33582@id-33582.uxbridge.irccloud.com) has joined #ceph
[0:16] * supay (sid47179@id-47179.uxbridge.irccloud.com) has joined #ceph
[0:16] * mschiff (~mschiff@mx10.schiffbauer.net) has joined #ceph
[0:16] * rturk-away (~rturk@ds3553.dreamservers.com) has joined #ceph
[0:16] * scalability-junk_ (sid6422@id-6422.charlton.irccloud.com) has joined #ceph
[0:16] * Pintomatic (sid25118@id-25118.charlton.irccloud.com) has joined #ceph
[0:16] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[0:16] * fitzdsl (~Romain@dedibox.fitzdsl.net) has joined #ceph
[0:16] * _nick (~nick@zarquon.dischord.org) has joined #ceph
[0:16] * devicenull (sid4013@id-4013.charlton.irccloud.com) has joined #ceph
[0:16] * smiley_ (~smiley@205.153.36.170) has joined #ceph
[0:16] * danderson (~dave@atlas.natulte.net) has joined #ceph
[0:16] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) has joined #ceph
[0:16] * xophe (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) has joined #ceph
[0:16] * dvanders (~dvanders@dvanders-hpi5.cern.ch) has joined #ceph
[0:16] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[0:16] * flaf (~flaf@2001:41d0:1:7044::1) has joined #ceph
[0:16] * chrome0 (~chrome0@static.202.35.46.78.clients.your-server.de) has joined #ceph
[0:16] * renzhi (~renzhi@116.226.62.53) has joined #ceph
[0:16] * NotExist (~notexist@kvps-180-235-255-92.secure.ne.jp) has joined #ceph
[0:16] * ccourtaut_ (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[0:16] * tacticus (~tacticus@v6.kca.id.au) has joined #ceph
[0:16] * guppy (~quassel@guppy.xxx) has joined #ceph
[0:16] * psiekl (psiekl@wombat.eu.org) has joined #ceph
[0:16] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) has joined #ceph
[0:16] * Amto_res1 (~amto_res@ks312256.kimsufi.com) has joined #ceph
[0:16] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[0:16] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[0:16] * tchmnkyz (tchmnkyz@0001638b.user.oftc.net) has joined #ceph
[0:16] * saturnine (~saturnine@66.219.20.211) has joined #ceph
[0:16] * tnt (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) has joined #ceph
[0:16] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[0:16] * WintermeW_ (~WintermeW@212-83-158-61.rev.poneytelecom.eu) has joined #ceph
[0:16] * Psi-Jack (~psi-jack@lhmon.linux-help.org) has joined #ceph
[0:16] * nyov (~nyov@178.33.33.184) has joined #ceph
[0:16] * Fetch (fetch@gimel.cepheid.org) has joined #ceph
[0:16] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) has joined #ceph
[0:16] * soren (~soren@00013a4f.user.oftc.net) has joined #ceph
[0:16] * blahnana (~bman@104-97-248-162-static.reverse.queryfoundry.net) has joined #ceph
[0:16] * shaon_ (~shaon@198.50.164.24) has joined #ceph
[0:16] * jackhill (~jackhill@bog.hcoop.net) has joined #ceph
[0:16] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) has joined #ceph
[0:16] * JoeJulian (~JoeJulian@shared.gaealink.net) has joined #ceph
[0:16] * markl (~mark@knm.org) has joined #ceph
[0:16] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[0:16] * loicd (~loicd@cmd179.fsffrance.org) has joined #ceph
[0:16] * terje_ (~joey@63.228.91.225) has joined #ceph
[0:16] * dec (~dec@ec2-54-66-50-124.ap-southeast-2.compute.amazonaws.com) has joined #ceph
[0:16] * bilco105_ (~bilco105@irc.bilco105.com) has joined #ceph
[0:16] * magicrobotmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) has joined #ceph
[0:16] * DrewBeer (~DrewBeer@216.152.240.203) has joined #ceph
[0:16] * cfreak200 (andi@p4FF3E9B0.dip0.t-ipconnect.de) has joined #ceph
[0:16] * terje__ (~root@135.109.216.239) has joined #ceph
[0:16] * lkoranda (~lkoranda@213.175.37.10) has joined #ceph
[0:16] * mtanski (~mtanski@65.244.82.98) has joined #ceph
[0:16] * sadbox (~jmcguire@sadbox.org) has joined #ceph
[0:16] * ktdreyer (~kdreyer@polyp.adiemus.org) has joined #ceph
[0:16] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) has joined #ceph
[0:16] * colonD (~colonD@173-165-224-105-minnesota.hfc.comcastbusiness.net) has joined #ceph
[0:16] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[0:16] * ByteSore (~bytesore@5.39.189.119) has joined #ceph
[0:16] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[0:16] * harmw (~harmw@chat.manbearpig.nl) has joined #ceph
[0:16] * foxxx0 (~fox@2a01:4f8:200:216b::2) has joined #ceph
[0:16] * MaZ- (~maz@00016955.user.oftc.net) has joined #ceph
[0:16] * frickler (~jens@v1.jayr.de) has joined #ceph
[0:16] * via (~via@smtp2.matthewvia.info) has joined #ceph
[0:16] * Hazelesque (~hazel@2a03:9800:10:13::2) has joined #ceph
[0:16] * tomaw (tom@tomaw.noc.oftc.net) has joined #ceph
[0:16] * bd (~bd@mail.bc-bd.org) has joined #ceph
[0:16] * Anticimex (anticimex@95.80.32.80) has joined #ceph
[0:16] * lurbs (user@uber.geek.nz) has joined #ceph
[0:16] * tries_ (~tries__@2a01:2a8:2000:ffff:1260:4bff:fe6f:af91) has joined #ceph
[0:16] * tries (ident@easytux.ch) has joined #ceph
[0:16] * irq0 (~seri@amy.irq0.org) has joined #ceph
[0:16] * olc-_ (~olecam@93.184.35.82) has joined #ceph
[0:16] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[0:16] * sbadia (~sbadia@marcellin.sebian.fr) has joined #ceph
[0:16] * mmgaggle (~kyle@cerebrum.dreamservers.com) has joined #ceph
[0:16] * mo- (~mo@2a01:4f8:141:3264:c0f:fee:0:4) has joined #ceph
[0:16] * wintamut1 (~wintamute@mail.wintamute.org) has joined #ceph
[0:16] * boolman (boolman@79.138.78.238) has joined #ceph
[0:16] * mui (mui@eutanasia.mui.fi) has joined #ceph
[0:16] * Svedrin (svedrin@elwing.funzt-halt.net) has joined #ceph
[0:16] * kaisan (~kai@zaphod.xs4all.nl) has joined #ceph
[0:16] * SWAT (~swat@cyberdyneinc.xs4all.nl) has joined #ceph
[0:16] * ToM- (~tom@atlas.planetix.fi) has joined #ceph
[0:16] * rektide (~rektide@eldergods.com) has joined #ceph
[0:16] * wonko_be_ (bernard@november.openminds.be) has joined #ceph
[0:16] * baffle (baffle@jump.stenstad.net) has joined #ceph
[0:16] * Zethrok (~martin@95.154.26.34) has joined #ceph
[0:18] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[0:18] * Sysadmin88 (~IceChat77@2.125.213.8) has joined #ceph
[0:18] * nigwil (~Oz@li747-216.members.linode.com) has joined #ceph
[0:18] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[0:18] * stephan1 (~Adium@dslb-178-008-020-100.178.008.pools.vodafone-ip.de) has joined #ceph
[0:18] * zigo (quasselcor@atl.apt-proxy.gplhost.com) has joined #ceph
[0:18] * Vacuum (~vovo@88.130.211.235) has joined #ceph
[0:18] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:18] * oms101 (~oms101@p20030057EA081E00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[0:18] * madkiss1 (~madkiss@ip5b418369.dynamic.kabel-deutschland.de) has joined #ceph
[0:18] * _NiC (~kristian@aeryn.ronningen.no) has joined #ceph
[0:18] * Hell_Fire_ (~hellfire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[0:18] * tserong (~tserong@203-173-33-52.dyn.iinet.net.au) has joined #ceph
[0:18] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[0:18] * sh (~sh@2001:6f8:1337:0:50f0:a8fe:9b20:7f3e) has joined #ceph
[0:18] * marcan (bip@marcansoft.com) has joined #ceph
[0:18] * infernix (nix@2001:41f0::ffff:ffff:ffff:fffe) has joined #ceph
[0:18] * coreping (~Michael_G@n1.coreping.org) has joined #ceph
[0:18] * zz_hitsumabushi (~hitsumabu@175.184.30.148) has joined #ceph
[0:18] * palmeida (~palmeida@gandalf.wire-consulting.com) has joined #ceph
[0:18] * wolsen (~wolsen@162.213.34.152) has joined #ceph
[0:18] * redf (~red@chello084112110034.11.11.vie.surfer.at) has joined #ceph
[0:18] * saltsa (~joonas@dsl-hkibrasgw1-58c01a-36.dhcp.inet.fi) has joined #ceph
[0:18] * raso (~raso@deb-multimedia.org) has joined #ceph
[0:18] * dis (~dis@109.110.67.201) has joined #ceph
[0:18] * al (quassel@niel.cx) has joined #ceph
[0:18] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[0:18] * Xiol (~Xiol@shrike.daneelwell.eu) has joined #ceph
[0:18] * Tim_ (~tim@rev-178.21.220.91.quarantainenet.nl) has joined #ceph
[0:21] * angdraug (~angdraug@c-50-174-102-105.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[0:25] <Anticimex> hmm
[0:25] <Anticimex> what project management system is ceph using at http://tracker.ceph.com/projects/ceph ?
[0:25] <Anticimex> ah, redmine
[0:26] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[0:41] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:45] * nitti (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[0:51] * SPACESHIP (~chatzilla@d24-141-231-126.home.cgocable.net) Quit (Ping timeout: 480 seconds)
[0:52] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:54] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[1:00] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[1:05] * dmsimard is now known as dmsimard_away
[1:08] * tupper_ (~tcole@rtp-isp-nat-pool1-1.cisco.com) Quit (Read error: Connection reset by peer)
[1:10] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:12] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[1:17] * PaulC (~paul@nat-pool-rdu-u.redhat.com) has joined #ceph
[1:20] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[1:23] * ircolle (~Adium@2601:1:a580:145a:316b:29ce:987c:5dbe) has joined #ceph
[1:26] * LeaChim (~LeaChim@host86-147-114-247.range86-147.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:26] * cholcombe973 (~chris@73.25.105.99) Quit (Ping timeout: 480 seconds)
[1:33] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) has joined #ceph
[1:39] * rohanm (~rohanm@c-67-168-194-197.hsd1.or.comcast.net) has joined #ceph
[1:43] * lcurtis (~lcurtis@47.19.105.250) Quit (Read error: Connection reset by peer)
[1:51] * SPACESHIP (~chatzilla@d24-141-231-126.home.cgocable.net) has joined #ceph
[1:52] * dmsimard_away is now known as dmsimard
[1:57] * oms101 (~oms101@p20030057EA081E00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:04] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:05] * oms101 (~oms101@p20030057EA07E300EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[2:05] * SPACESHIP (~chatzilla@d24-141-231-126.home.cgocable.net) Quit (Quit: ChatZilla 0.9.91.1 [Firefox 31.4.0/20150105205548])
[2:05] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Quit: Leaving.)
[2:07] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[2:07] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[2:08] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[2:10] * stephan (~Adium@dslb-094-222-183-133.094.222.pools.vodafone-ip.de) has joined #ceph
[2:12] * sputnik13 (~sputnik13@74.202.214.170) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:14] * kefu (~kefu@114.92.100.153) has joined #ceph
[2:14] * stephan1 (~Adium@dslb-178-008-020-100.178.008.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[2:15] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Quit: Leaving.)
[2:15] * ircolle (~Adium@2601:1:a580:145a:316b:29ce:987c:5dbe) Quit (Quit: Leaving.)
[2:15] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[2:17] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:19] * treaki (~treaki@p4FDF62BB.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:21] * sudocat (~davidi@192.185.1.20) Quit (Ping timeout: 480 seconds)
[2:22] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[2:22] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) has joined #ceph
[2:24] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) has joined #ceph
[2:34] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:38] * lofejndif (~lsqavnbok@spftor4e1.privacyfoundation.ch) has joined #ceph
[2:38] * lofejndif (~lsqavnbok@spftor4e1.privacyfoundation.ch) Quit ()
[2:47] * bandrus (~brian@50.23.113.236-static.reverse.softlayer.com) Quit (Quit: Leaving.)
[2:52] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[2:52] * Concubidated (~Adium@2607:f298:b:635:a59c:411b:b60e:c52a) has joined #ceph
[2:56] * PaulC (~paul@nat-pool-rdu-u.redhat.com) Quit (Quit: PaulC)
[3:02] * stephan1 (~Adium@dslc-082-082-189-209.pools.arcor-ip.net) has joined #ceph
[3:05] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:09] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[3:09] * stephan (~Adium@dslb-094-222-183-133.094.222.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[3:19] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:20] * sudocat (~davidi@2601:e:2b80:9920:11eb:da4a:64d9:407c) has joined #ceph
[3:21] * zack_dolby (~textual@e0109-49-132-41-178.uqwimax.jp) has joined #ceph
[3:23] * dmsimard is now known as dmsimard_away
[3:27] * kefu (~kefu@114.92.100.153) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[3:27] * mjeanson_ (~mjeanson@bell.multivax.ca) has joined #ceph
[3:27] * sage (~quassel@cpe-76-95-230-100.socal.res.rr.com) Quit (Remote host closed the connection)
[3:28] * sage (~quassel@2605:e000:854d:de00:230:48ff:fed3:6786) has joined #ceph
[3:28] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Quit: Away)
[3:29] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:29] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[3:30] * jamespd_ (~mucky@mucky.socket7.org) has joined #ceph
[3:31] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit ()
[3:31] * jamespd (~mucky@mucky.socket7.org) Quit (Ping timeout: 480 seconds)
[3:32] * joef1 (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has joined #ceph
[3:32] * joef1 (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) Quit ()
[3:32] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:33] * lurbs (user@uber.geek.nz) Quit (Ping timeout: 480 seconds)
[3:33] * epf (epf@epf.im) Quit (Remote host closed the connection)
[3:33] * epf (epf@epf.im) has joined #ceph
[3:34] * MK_FG (~MK_FG@188.226.62.174) has joined #ceph
[3:34] * zack_dolby (~textual@e0109-49-132-41-178.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:40] * cuqa (~oftc-webi@212.224.70.43) Quit (Quit: Page closed)
[3:41] * lurbs (user@uber.geek.nz) has joined #ceph
[3:53] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[3:55] * zack_dolby (~textual@e0109-49-132-41-178.uqwimax.jp) has joined #ceph
[3:56] * zack_dolby (~textual@e0109-49-132-41-178.uqwimax.jp) Quit ()
[4:07] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[4:11] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[4:14] * TOR_FRE_SPICH_FOR_CHILDREN (~TOR_FRE_S@digi00666.torproxy-readme-arachnide-fr-35.fr) has joined #ceph
[4:17] * TOR_FRE_SPICH_FOR_CHILDREN (~TOR_FRE_S@digi00666.torproxy-readme-arachnide-fr-35.fr) Quit (Killed (sarnold (No reason)))
[4:18] * abek3 (~abej@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[4:19] * tcatm (~quassel@2a01:4f8:200:71e3:5054:ff:feff:cbce) has left #ceph
[4:19] * abek3 (~abej@static-ip-85-25-103-119.inaddr.ip-pool.com) Quit (Killed (mikegrb (No reason)))
[4:20] * abek3 (~abej@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[4:20] * abek3 (~abej@static-ip-85-25-103-119.inaddr.ip-pool.com) Quit (Killed (mikegrb (No reason)))
[4:21] * abek3 (~abej@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[4:22] * abek3 (~abej@static-ip-85-25-103-119.inaddr.ip-pool.com) Quit (Killed (mikegrb (No reason)))
[4:34] * VELOPE_CUNT__TOR_NSA_FREESPEEC (~VELOPE_IS@hoenir.neoretro.net) has joined #ceph
[4:35] * VELOPE_CUNT__TOR_NSA_FREESPEEC (~VELOPE_IS@hoenir.neoretro.net) Quit (no (2015-02-20 03:35:12))
[4:35] * VELOPE_CUNT__TOR_NSA_FREESPEEC (~VELOPE_IS@exit2.telostor.ca) has joined #ceph
[4:36] * VELOPE_CUNT__TOR_NSA_FREESPEEC (~VELOPE_IS@exit2.telostor.ca) Quit (no (2015-02-20 03:36:11))
[4:36] * VELOPE_CUNT__TOR_NSA_FREESPEEC (~VELOPE_IS@lumumba.torservers.net) has joined #ceph
[4:37] * VELOPE_CUNT__TOR_NSA_FREESPEEC (~VELOPE_IS@lumumba.torservers.net) Quit (no (2015-02-20 03:37:10))
[4:38] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[4:39] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[4:49] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[4:59] * purpleid1a (~james@216.252.90.33) has joined #ceph
[5:01] * purpleidea (~james@216.252.94.181) Quit (Ping timeout: 480 seconds)
[5:01] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) has joined #ceph
[5:08] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[5:12] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Quit: Leaving.)
[5:13] * VisBits (~VisBits@cpe-174-101-246-167.cinci.res.rr.com) has joined #ceph
[5:13] <VisBits> is there an actual log for auth failures? I've got this box that refuses to authenticate against one pool but not the other for a given keyring
[5:15] * ChanServ sets mode +o scuttlemonkey
[5:15] * ChanServ sets mode +v elder
[5:15] * ChanServ sets mode +v nhm
[5:16] <VisBits> scuttlemonkey (Y)
[5:17] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) has joined #ceph
[5:17] <scuttlemonkey> wha?
[5:18] <scuttlemonkey> VisBits: ^
[5:18] <VisBits> http://pastebin.com/ziUYpLNn
[5:18] <VisBits> what am i missing here
[5:19] <VisBits> if i mount as admin which has * * perms it works fine
[5:19] <VisBits> it's a permissions issue; the error output isn't very good
[5:20] * Kupo1 (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[5:21] <VisBits> the pool and image show up under lspools and rbd ls --pool
[5:22] <scuttlemonkey> VisBits: hmmm, I'm not really a super admin... but this sounds familiar to me
[5:22] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[5:22] <scuttlemonkey> this discussion was similar: http://www.spinics.net/lists/ceph-users/msg12236.html
[5:22] * PaulC (~paul@209.49.1.194) has joined #ceph
[5:23] <VisBits> works
[5:23] <VisBits> instead of RWX i used * for each volume
[5:23] <VisBits> *pool*
[5:23] <VisBits> works: ceph auth caps client.ceph0-nfs0 mon 'allow r' osd 'allow * pool=Backups-Hybrid, allow * pool=General-Storage'
[5:23] * Vacuum_ (~vovo@i59F79236.versanet.de) has joined #ceph
[5:23] <VisBits> doesn't: ceph auth caps client.ceph0-nfs0 mon 'allow r' osd 'allow rwx pool=Backups-Hybrid, allow rwx pool=General-Storage'
[5:23] <VisBits> bug?
[5:24] * sudocat (~davidi@2601:e:2b80:9920:11eb:da4a:64d9:407c) Quit (Read error: Connection reset by peer)
[5:24] <scuttlemonkey> unsure
[5:24] <scuttlemonkey> can you send a msg to ceph-user?
[5:24] <scuttlemonkey> (or if you have, send me an archive url to it)
[5:25] <scuttlemonkey> I can poke Ilya tomorrow
[5:25] <scuttlemonkey> fwiw this guy had a similar problem: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-April/038899.html
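A quick way to inspect and re-test caps like the ones above, as a hedged sketch (the entity and pool names come from VisBits' commands; nothing beyond the stock ceph and rbd CLIs is assumed):

    # show the caps the cluster currently stores for this key
    ceph auth get client.ceph0-nfs0
    # re-apply the failing rwx variant for testing
    ceph auth caps client.ceph0-nfs0 mon 'allow r' osd 'allow rwx pool=Backups-Hybrid, allow rwx pool=General-Storage'
    # exercise the restricted key directly instead of falling back to client.admin
    rbd ls --pool Backups-Hybrid --id ceph0-nfs0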
[5:26] * sudocat (~davidi@2601:e:2b80:9920:11eb:da4a:64d9:407c) has joined #ceph
[5:26] <VisBits> yeah I'll gather all the reproduction commands and submit a bug on it
[5:26] <VisBits> blogging about it now
[5:26] <scuttlemonkey> cool, thanks
[5:27] <scuttlemonkey> if you want, feel free to send me a link (pmcgarry@redhat) and I'll prod the big brains tomorrow
[5:27] <VisBits> http://sudomakeinstall.com/storage/ceph-user-management-with-multiple-mon-pool-mds-permissions
[5:28] <scuttlemonkey> thanks
[5:28] <VisBits> i LOVE how involved redhat/inktank is with the community behind ceph, this just made my night
[5:28] <scuttlemonkey> haha, glad to hear it :)
[5:30] * Vacuum (~vovo@88.130.211.235) Quit (Ping timeout: 480 seconds)
[5:30] <VisBits> sent an email to the above address with further info. Thanks a bunch
[5:31] <scuttlemonkey> right on
[5:31] <scuttlemonkey> appreciate you tracking it down and letting us know
[5:31] <VisBits> feel honored to contribute to the best storage project in the history of computers
[5:31] <scuttlemonkey> hehe ++
[5:33] <VisBits> I had a Tintri rep trying to tell me it was super difficult to scale ceph today and it wouldn't ever succeed, i fired up a vm and added it to my pool with a few commands right in front of him. I think he may quit ha
[5:34] <scuttlemonkey> haha, that's amazing
[5:35] * amote (~amote@121.244.87.116) has joined #ceph
[5:37] <scuttlemonkey> ok, now for me to tackle a truly difficult problem...the 12 yr Bunnahabhain, or the Crown Royal XR
[5:37] <scuttlemonkey> have a good one :)
[5:38] <VisBits> cya
[5:38] <VisBits> crown <3
[5:40] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[5:41] * fmanana (~fdmanana@bl13-157-248.dsl.telepac.pt) has joined #ceph
[5:44] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[5:45] * kvanals (kvanals@kvanals.org) Quit (Quit: leaving)
[5:48] * fdmanana (~fdmanana@bl4-182-212.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[5:51] * sudocat (~davidi@2601:e:2b80:9920:11eb:da4a:64d9:407c) Quit (Quit: Leaving.)
[5:51] * JCL (~JCL@ip24-253-45-236.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[5:53] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[5:53] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Quit: Kirk out)
[5:55] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[5:55] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[5:56] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) Quit (Quit: Away)
[5:58] * purpleid1a is now known as purpleidea
[5:58] * jclm1 (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[6:03] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[6:04] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[6:08] * karnan (~karnan@27.57.140.230) has joined #ceph
[6:12] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[6:15] * stephan (~Adium@dslc-082-082-189-209.pools.arcor-ip.net) has joined #ceph
[6:15] * stephan1 (~Adium@dslc-082-082-189-209.pools.arcor-ip.net) Quit (Read error: Connection reset by peer)
[6:15] * Concubidated (~Adium@2607:f298:b:635:a59c:411b:b60e:c52a) Quit (Ping timeout: 480 seconds)
[6:17] * stephan (~Adium@dslc-082-082-189-209.pools.arcor-ip.net) Quit ()
[6:17] * stephan (~Adium@dslc-082-082-189-209.pools.arcor-ip.net) has joined #ceph
[6:18] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[6:20] * stephan (~Adium@dslc-082-082-189-209.pools.arcor-ip.net) has left #ceph
[6:21] * sudocat (~davidi@2601:e:2b80:9920:349c:9ecf:3e8a:23d0) has joined #ceph
[6:28] * sudocat1 (~davidi@2601:e:2b80:992b:349c:9ecf:3e8a:23d0) has joined #ceph
[6:28] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[6:30] * VisBits (~VisBits@cpe-174-101-246-167.cinci.res.rr.com) Quit (Quit: ~ Trillian - www.trillian.im ~)
[6:31] * sudocat (~davidi@2601:e:2b80:9920:349c:9ecf:3e8a:23d0) Quit (Ping timeout: 480 seconds)
[6:32] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:45] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[6:47] * overclk (~overclk@121.244.87.117) has joined #ceph
[6:48] * lcurtis (~lcurtis@47.19.105.250) Quit (Ping timeout: 480 seconds)
[6:49] * sudocat1 (~davidi@2601:e:2b80:992b:349c:9ecf:3e8a:23d0) Quit (Ping timeout: 480 seconds)
[7:03] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[7:04] * mookins (~mookins@induct3.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[7:06] * mookins (~mookins@induct3.lnk.telstra.net) has joined #ceph
[7:17] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) has joined #ceph
[7:23] * zack_dolby (~textual@e0109-49-132-41-178.uqwimax.jp) has joined #ceph
[7:24] * karnan (~karnan@27.57.140.230) Quit (Ping timeout: 480 seconds)
[7:27] * vilobhmm_ (~vilobhmm@98.139.248.67) has joined #ceph
[7:33] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:33] * vilobhmm_ is now known as vilobhmm
[7:35] * Concubidated (~Adium@66.87.65.238) has joined #ceph
[7:37] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:42] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[7:42] * karnan (~karnan@27.57.140.230) has joined #ceph
[7:46] * PaulC (~paul@209.49.1.194) Quit (Ping timeout: 480 seconds)
[7:46] * avozza (~avozza@83.162.204.36) has joined #ceph
[7:47] * avozza (~avozza@83.162.204.36) Quit (Remote host closed the connection)
[7:47] * avozza (~avozza@83.162.204.36) has joined #ceph
[8:18] * zack_dolby (~textual@e0109-49-132-41-178.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:21] * sw3 (sweaung@2400:6180:0:d0::66:100f) Quit (Remote host closed the connection)
[8:22] * sw3 (sweaung@2400:6180:0:d0::66:100f) has joined #ceph
[8:23] * vilobhmm (~vilobhmm@98.139.248.67) Quit (Ping timeout: 480 seconds)
[8:24] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[8:25] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[8:25] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[8:28] * cok (~chk@2a02:2350:18:1010:1857:30fa:1b93:9ca) has joined #ceph
[8:32] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[8:35] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[8:36] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[8:43] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:46] * nardial (~ls@ipservice-092-209-178-132.092.209.pools.vodafone-ip.de) has joined #ceph
[8:50] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:52] * Concubidated1 (~Adium@66-87-65-238.pools.spcsdns.net) has joined #ceph
[8:52] * Concubidated (~Adium@66.87.65.238) Quit (Read error: Connection reset by peer)
[9:00] * oms101 (~oms101@p20030057EA07E300EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[9:04] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) has joined #ceph
[9:05] * analbeard (~shw@support.memset.com) has joined #ceph
[9:09] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[9:10] * oms101 (~oms101@p20030057EA07E300EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[9:13] <nardial> how do i set up a replicated rule between two datacenters with a size >2?
[9:13] * p01s0n (~oftc-webi@idp01pxyout-3.asiapac.hp.net) has joined #ceph
[9:15] <nardial> i tested it with 200 iterations and the CRUSH algorithm always placed at least 1 copy of each pg in the other datacenter when i set size=4, but is this guaranteed!?
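A sketch of the kind of CRUSH rule that would make nardial's placement guaranteed rather than statistical, assuming the crush map has buckets of type datacenter under the default root (all names here are placeholders):

    rule replicated_two_dc {
        ruleset 1
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type datacenter   # always pick two datacenters
        step chooseleaf firstn 2 type host     # then two OSDs on distinct hosts in each
        step emit
    }

With pool size=4 this pins two copies per datacenter instead of relying on the default rule spreading them by chance.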
[9:15] <p01s0n> hello all, I am new to ceph. I created an RBD device, mounted it on one of the servers, and wrote 2GB of data. After deleting the data from the mount point, ceph still shows 2GB of used space. How can I reclaim the free space?
[9:16] <p01s0n> using xfs on the client for that rbd device
[9:17] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[9:18] * avozza (~avozza@83.162.204.36) Quit (Remote host closed the connection)
[9:19] * avozza (~avozza@83.162.204.36) has joined #ceph
[9:19] <badone> p01s0n: look at trim/discard
[9:21] <fvl> i have a question about deploy
[9:21] <fvl> we want to use ocfs2 over rbd
[9:21] <fvl> plan to use only two servers
[9:22] <fvl> but with hardware RAID 60 to keep disk failures from degrading a node
[9:22] <fvl> but what would happen if we reboot one node?
[9:23] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:24] <badone> p01s0n: also fstrim
[9:25] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:26] <badone> fvl: I think that would be okay in theory, you should still be able to access the cluster if you set min_size appropriately
[9:27] <badone> fvl: Ceph is made to work with a lot more than two nodes though
[9:27] <fvl> badone: i already read about min_size
[9:27] * avozza (~avozza@83.162.204.36) Quit (Ping timeout: 480 seconds)
[9:27] <badone> fvl: and the problem will be the mons I guess
[9:27] <fvl> i only have money for two nodes
[9:27] <badone> fvl: preferred number is 3
[9:27] <fvl> badone: mons would be >4
[9:28] * avozza (~avozza@83.162.204.36) has joined #ceph
[9:28] <fvl> badone: or exactly 4 =)
[9:28] <badone> fvl: with 2 nodes?
[9:28] <badone> fvl: has to be an odd number
[9:28] <badone> fvl: are you saying your mons will be separate from the osd nodes?
[9:28] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:28] <fvl> badone: we plan two osds (one daemon on each node, serving one raid) and 3 or 4 mons
[9:29] <badone> I thought you meant two machines overall?
[9:29] <fvl> badone: each mon would run on a different node
[9:29] <badone> fvl: 3 then
[9:29] <badone> small, odd number
[9:30] <fvl> there would be 4 servers. two for storage, and two are clients
[9:30] <fvl> clients would run mons too
[9:30] <fvl> is this okay?
[9:30] <badone> clients are not part of the cluster
[9:30] <badone> mons are
[9:30] <fvl> i know
[9:31] <fvl> i mean clients would run mons
[9:31] <badone> sounds odd but I have no experience with such a set-up
[9:32] <fvl> if only one osd is up (while the other is rebooting) and quorum is ok, is this safe for the data?
[9:33] <badone> fvl: you'll only have one copy left but it will be on raid
[9:33] <fvl> yes, it would be on raid
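For reference, what badone's "set min_size appropriately" would look like for this two-node plan; the pool name rbd is a placeholder:

    ceph osd pool set rbd size 2       # one copy per node
    ceph osd pool set rbd min_size 1   # keep serving I/O while one node reboots

min_size 1 means writes are acknowledged with a single surviving replica, so during the reboot window the data is only as safe as the RAID under the remaining node.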
[9:35] <fvl> anybody do tests with different tcp congestion control?
[9:35] <fvl> we prefer low latency rather than high throughput
[9:36] * Concubidated1 (~Adium@66-87-65-238.pools.spcsdns.net) Quit (Quit: Leaving.)
[9:36] <fvl> i want to try FQ with YEAH.
[9:37] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) has joined #ceph
[9:38] * dgurtner (~dgurtner@178.197.231.49) has joined #ceph
[9:42] * avozza (~avozza@83.162.204.36) Quit (Remote host closed the connection)
[9:43] * avozza (~avozza@83.162.204.36) has joined #ceph
[9:43] <p01s0n> badone: i have discard specified as a mount option on the client machine ("/dev/rbd0 on /mnt type xfs (rw,discard)"); will this be enough? using kernel 3.16
[9:45] * aszeszo (~aszeszo@adrj26.neoplus.adsl.tpnet.pl) has joined #ceph
[9:52] * avozza (~avozza@83.162.204.36) Quit (Ping timeout: 480 seconds)
[9:54] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[9:58] * cok (~chk@2a02:2350:18:1010:1857:30fa:1b93:9ca) Quit (Quit: Leaving.)
[10:05] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[10:05] <Be-El> hi
[10:10] * oro (~oro@2001:620:20:16:7c05:6cd8:5140:c1ed) has joined #ceph
[10:16] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[10:18] * ksingh (~Adium@2001:708:10:10:b183:8c5a:8411:e370) has joined #ceph
[10:19] <ksingh> what should be the recommended kernel version for Ceph nodes (OSD and MON) in production on CentOS?
[10:19] * jordanP (~jordan@213.215.2.194) has joined #ceph
[10:25] * macjack (~Thunderbi@123.51.160.200) has joined #ceph
[10:25] <fvl> cephfs is still not stable?
[10:26] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:28] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:30] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[10:30] * macjack1 (~Thunderbi@123.51.160.200) Quit (Ping timeout: 480 seconds)
[10:32] * mookins_ (~mookins@induct3.lnk.telstra.net) has joined #ceph
[10:32] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[10:33] * ngoswami (~ngoswami@121.244.87.124) has joined #ceph
[10:34] * dlan (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[10:35] * dlan (~dennis@116.228.88.131) has joined #ceph
[10:35] <badone> p01s0n: did you try fstrim?
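The on-demand variant badone is pointing at, with one caveat that should be treated as an assumption to verify: the kernel rbd client is generally said to have gained discard support around Linux 3.18, so on 3.16 the xfs discard mount option may be a no-op:

    fstrim -v /mnt    # discard unused blocks now; -v reports how much was trimmed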
[10:37] * derjohn_mob (~aj@tmo-113-135.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[10:37] * \ask (~ask@oz.develooper.com) Quit (Quit: Bye)
[10:38] * mookins (~mookins@induct3.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[10:42] * skullone (~skullone@shell.skull-tech.com) Quit (Read error: Connection reset by peer)
[10:43] * avozza (~avozza@83.162.204.36) has joined #ceph
[10:44] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:44] * \ask (~ask@oz.develooper.com) has joined #ceph
[10:48] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[10:48] * thomnico (~thomnico@2a01:e35:8b41:120:1876:89c2:295c:b56a) has joined #ceph
[10:49] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[10:52] * dlan (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[10:56] * \ask (~ask@oz.develooper.com) Quit (Quit: Bye)
[10:57] * dlan (~dennis@116.228.88.131) has joined #ceph
[11:00] * p01s0n (~oftc-webi@idp01pxyout-3.asiapac.hp.net) Quit (Quit: Page closed)
[11:01] <synacksyn> after running ceph pg force_create_pg my.pg, my pg is stuck 'creating'. I've read about marking one of its OSDs down for a few seconds, then bringing it back up: that only makes the status change to 'incomplete'.
[11:02] <synacksyn> ceph health detail tells about ops being blocked on this OSD.
[11:02] * \ask (~ask@oz.develooper.com) has joined #ceph
[11:08] * eternaleye (~eternaley@50.245.141.77) Quit (Ping timeout: 480 seconds)
[11:09] * capri (~capri@212.218.127.222) has joined #ceph
[11:11] <synacksyn> Originally, I had this problem on another OSD. Failing to recover, I reweighted the faulty OSD to remap my pg. Now that it is on two other OSDs, I still have the same issue. Any idea?
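Two commands that usually narrow this kind of stuck pg down; the pg id and osd id below are placeholders:

    ceph pg 2.5 query     # check recovery_state / probing_osds for what it is waiting on
    ceph osd down 7       # mark the blocking osd down briefly so the pg re-peers

As a later discussion in this same log suggests, restarting the osd process that is holding the pg up often has the same effect.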
[11:13] * synacksyn is now known as kevinkevin-work
[11:20] * linjan_ (~linjan@213.8.240.146) has joined #ceph
[11:25] * ngoswami_ (~ngoswami@121.244.87.116) has joined #ceph
[11:31] * ngoswami (~ngoswami@121.244.87.124) Quit (Ping timeout: 480 seconds)
[11:32] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[11:34] * xophe (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) Quit (Ping timeout: 480 seconds)
[11:36] * derjohn_mob (~aj@2001:6f8:1337:0:2c1e:b4e:cda7:890c) has joined #ceph
[11:37] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[11:50] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[11:57] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[11:59] * treaki (~treaki@p4FDF6487.dip0.t-ipconnect.de) has joined #ceph
[12:00] * davidzlap (~Adium@2605:e000:1313:8003:215a:ad8f:b630:36ee) Quit (Read error: Connection reset by peer)
[12:00] * davidzlap (~Adium@2605:e000:1313:8003:b4bb:843:8e41:7c09) has joined #ceph
[12:01] * treaki (~treaki@p4FDF6487.dip0.t-ipconnect.de) Quit (Max SendQ exceeded)
[12:01] * treaki (~treaki@p4FDF6487.dip0.t-ipconnect.de) has joined #ceph
[12:04] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) Quit (Quit: Leaving.)
[12:04] * mjeanson_ (~mjeanson@bell.multivax.ca) Quit (Remote host closed the connection)
[12:05] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[12:16] * oms101 (~oms101@p20030057EA07E300EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[12:17] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[12:19] * avozza (~avozza@83.162.204.36) Quit (Remote host closed the connection)
[12:19] * avozza (~avozza@83.162.204.36) has joined #ceph
[12:21] * xophe (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) has joined #ceph
[12:28] * vbellur (~vijay@122.167.104.218) has joined #ceph
[12:28] * derjohn_mobi (~aj@b2b-94-79-172-98.unitymedia.biz) has joined #ceph
[12:32] * diegows (~diegows@190.190.5.238) has joined #ceph
[12:34] * derjohn_mob (~aj@2001:6f8:1337:0:2c1e:b4e:cda7:890c) Quit (Ping timeout: 480 seconds)
[12:38] * derjohn_mobi (~aj@b2b-94-79-172-98.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[12:42] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[12:45] * cok (~chk@2a02:2350:18:1010:9535:af30:63be:20ef) has joined #ceph
[12:48] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[12:49] * linjan_ (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[12:58] * linjan_ (~linjan@176.195.60.70) has joined #ceph
[13:06] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[13:15] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:24] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[13:25] * kanagaraj (~kanagaraj@27.7.33.172) has joined #ceph
[13:31] * delatte (~cdelatte@67.197.3.123) has joined #ceph
[13:31] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[13:32] * vbellur (~vijay@122.167.104.218) Quit (Ping timeout: 480 seconds)
[13:33] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[13:38] * delattec (~cdelatte@204-235-114.165.twcable.com) Quit (Read error: Connection reset by peer)
[13:40] <ZyTer> hi
[13:42] <ZyTer> after changing one drive (osd.3), I read and applied this doc step by step: http://karan-mj.blogspot.fr/2014/03/admin-guide-replacing-failed-disk-in.html
[13:42] <ZyTer> but afterwards, "ceph osd create" returned "3"
[13:43] <ZyTer> then I prepared: I zapped my drive: ceph-disk zap /dev/sdg
[13:43] <ZyTer> and prepared the new disk: ceph-disk-prepare --cluster-uuid xxxxx -- /dev/sdg /dev/sdb2
[13:44] <ZyTer> but in "ceph osd tree" I now see: osd.12 up !! (why 12?) and osd.3 down ...?
[13:44] * vbellur (~vijay@122.178.205.42) has joined #ceph
[13:45] <ZyTer> I don't understand why there is a new osd.12 ... is it possible to replace osd.12 with osd.3?
[13:49] <fvl> swap?
[13:51] * lalatenduM (~lalatendu@122.171.68.54) has joined #ceph
[13:59] <Be-El> ZyTer: how should ceph-disk know about the osd id?
[14:03] * ade (~abradshaw@dslb-188-102-071-027.188.102.pools.vodafone-ip.de) has joined #ceph
[14:03] <ZyTer> Be-El: i don't know... but if you have another doc or procedure to change a drive and keep the same osd number...
[14:04] * kanagaraj (~kanagaraj@27.7.33.172) Quit (Quit: Leaving)
[14:04] <Be-El> ZyTer: you probably cannot keep it, since ceph allocates the first available number afaik
[14:05] * karnan (~karnan@27.57.140.230) Quit (Ping timeout: 480 seconds)
[14:05] <Be-El> so if you only have to change a single drive, you may get the same id. for more than one drive the odds are 1:n
[14:06] <ZyTer> yes, but I destroyed all of osd.3 in the crush map and ceph.conf; when I run "ceph osd create" the response is "3" ... and afterwards, the osd is "12"
[14:06] <Be-El> yes, because ceph-disk is allocating another id...
[14:06] <ZyTer> Be-El: ok...
[14:07] <Be-El> unless you pass the uuid of an existing osd (which can also be set during ceph osd create... )
[14:08] <ZyTer> is there another way to prepare the disk? (not with ceph-disk)
[14:08] <Be-El> none that i'm aware of
[14:09] * georgem (~Adium@184.151.190.252) has joined #ceph
[14:09] <Be-El> if you really want to have the same osd id, remove the osd, cross your fingers that no one else is removing an osd in the meantime, generate a uuid, allocate an osd specifying the uuid, and finally set up the osd content with ceph-disk, also specifying the uuid
[14:10] <Be-El> if you have 'holes' in your osd allocation (e.g. a formerly removed osd that has not been replaced), it won't work
[14:10] <ZyTer> Be-El: hum... ok...
[14:10] <ZyTer> Be-El: thanks for your help...
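A sketch of the sequence Be-El describes, threading one uuid through both steps (device paths taken from ZyTer's earlier commands; the --osd-uuid flag is per ceph-disk's help and worth double-checking on your version):

    UUID=$(uuidgen)
    ceph osd create $UUID                                    # returns the lowest free id, e.g. 3, bound to this uuid
    ceph-disk prepare --osd-uuid $UUID /dev/sdg /dev/sdb2    # hand the same uuid to ceph-disk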
[14:12] * linjan_ (~linjan@176.195.60.70) Quit (Ping timeout: 480 seconds)
[14:13] * nardial (~ls@ipservice-092-209-178-132.092.209.pools.vodafone-ip.de) Quit (Quit: Leaving)
[14:13] <Be-El> ZyTer: i just had a look at ceph-disk. it will probably allocate a new osd in all cases
[14:14] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) has joined #ceph
[14:15] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[14:17] * nhm (~nhm@65-128-165-174.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[14:17] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[14:20] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[14:22] * rwheeler (~rwheeler@173.48.208.246) Quit (Quit: Leaving)
[14:22] * karnan (~karnan@27.57.140.230) has joined #ceph
[14:25] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:25] * linjan_ (~linjan@80.179.241.26) has joined #ceph
[14:29] * pavera_ (~tomc@64-199-185-54.ip.mcleodusa.net) has joined #ceph
[14:30] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[14:31] <pavera_> hey folks, just wondering if there is a document somewhere about the proper way to "restart" a whole cluster... I attempted to reboot all the nodes in my cluster last night, and when it came back up I've got a good number (43) of pgs stuck unclean in various states... some inactive, 1 down peering, some active but stuck unclean...
[14:32] <pavera_> I've tried to follow the pg troubleshooting guide, but nothing in my cluster matches the examples... I don't have any osds down
[14:32] <Be-El> pavera_: did you set the noout and/or nodown flag prior to shutting down the cluster?
[14:32] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[14:33] <pavera_> no, probably should have
[14:33] <pavera_> is that the only "rule" for safely performing this operation?
[14:34] <Be-El> pavera_: no clue yet, but i'm facing the same problem next weekend
[14:34] <pavera_> luckily it's just a test cluster, the data doesn't matter, but before going to production I need to have these procedures nailed down and documented
[14:34] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[14:34] <pavera_> I need to either know how to safely reboot, or how to recover from the current situation
[14:34] <Be-El> pavera_: from the documentation and some talks here on the channel, i think the best procedure is setting the flags, shutting down rgw/mds, shutting down osds and finally the mons
[14:35] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) has joined #ceph
[14:35] <Be-El> pavera_: restart in the reverse order
[14:35] <Be-El> pavera_: in the current situation you can try to find out which pgs are affected, and why they do not recover completely
[14:35] <pavera_> yeah, I've tried doing that as per the pg troubleshooting docs
[14:36] <Be-El> pavera_: in many cases restarting the involved osd process solves the problem
[14:36] <pavera_> but the output doesn't line up with the limited examples in the doc
[14:36] <pavera_> it looks like the down+peering pg got passed around a lot as the osds came back online
[14:36] <pavera_> it seems to think there might be data in like 8 or 10 osds
[14:36] * jks (~jks@178.155.151.121) Quit (Quit: jks)
[14:37] <pavera_> but all of those osds are up, not sure why it's not able to proceed
[14:37] <Be-El> can you upload the output of ceph pg dump for one of the pgs somewhere?
[14:37] <Be-El> eh...ceph pg query
[14:38] <pavera_> yeah, one sec
[14:39] * alram (~alram@LAubervilliers-656-1-17-4.w217-128.abo.wanadoo.fr) has joined #ceph
[14:40] <Kingrat> pavera_, you should shut down your other services using ceph, set noout, and then shut down the ceph nodes
[14:41] <Kingrat> afaik that is the way to do it
[14:41] <Kingrat> if you are restarting, just noout and then reboot them one at a time
[14:41] * georgem (~Adium@184.151.190.252) Quit (Quit: Leaving.)
[14:41] <Kingrat> and wait for it to come back active+clean after each one
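Spelled out, the procedure Be-El and Kingrat converge on, as a hedged sketch:

    ceph osd set noout      # keep CRUSH from marking rebooted OSDs out and triggering remaps
    # stop clients, then rgw/mds, then osds, then mons; reboot; bring them back in reverse order
    ceph osd unset noout    # only after everything is back up and peering has settled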
[14:42] * a1-away is now known as AbyssOne
[14:43] * jks (~jks@178.155.151.121) has joined #ceph
[14:43] <pavera_> http://pastebin.com/05pRBWCM
[14:43] <pavera_> that is I think the worst one, down+remapped+peering, and it seems to indicate it's probing 8 osds for objects...
[14:49] <Be-El> if i interpret the recovery section correctly, it is currently requesting information from osds 110 and 246
[14:49] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has joined #ceph
[14:49] <nils_> come to think of it you'd also run into this sort of problem when the datacenter loses power
[14:50] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[14:50] <Be-El> nils_: that's what failure domains and emergency power supplies are for
[14:51] <Be-El> nils_: if the complete datacenter goes dark, you probably have more trouble than restarting a ceph cluster
[14:51] <pavera_> well... most likely in my case the ceph cluster coming back clean would be my biggest problem...
[14:51] <nils_> Be-El: definitely. It's one of those "should never happen" situations that happen far too often
[14:52] <pavera_> but hopefully if it all goes dark simultaneously there wouldn't be time for ceph to try to remap any pgs
[14:52] <pavera_> and as long as I start things up in the correct order it should be ok
[14:52] <pavera_> I've certainly uncleanly shut down smaller clusters and had them come back nicely
[14:52] <Be-El> pavera_: do you see any recovery operation currently going on on the cluster? or is stuck in that state?
[14:52] <pavera_> stuck
[14:53] <pavera_> it hasn't moved since last night around 8pm
[14:53] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[14:53] <pavera_> but, as you mentioned I restarted one of the osds that had "slow" requests
[14:53] <pavera_> and everything just cleared up except for this pg
[14:53] <Bosse> pavera_: could you see what happens if you give osd.423 a little restart? i found it to be a little odd that this OSD isn't in the 'acting' list in your query, but I'm just guessing here.
[14:53] <pavera_> so that's good, and I don't see any degraded objects
[14:54] <pavera_> yeah I'll bounce that one next
[14:55] <Be-El> pavera_: you use a replication size of 3?
[14:55] <pavera_> that did it!
[14:55] <pavera_> yeah
[14:56] <Be-El> pavera_: if you have a close look at the query, you'll note the difference between 'up' and 'acting'
[14:56] <pavera_> yeah I noticed that
[14:56] <pavera_> but didn't know how to interpret it
[14:57] <Be-El> i think peering means that the osd has collected all data for a pg, but did not report it to the monitor yet
[14:57] <pavera_> or that just restarting the osd could fix it
[14:57] <Be-El> pavera_: the restart has triggered the report to the monitor
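For the record, "bouncing" an osd on a 2015-era sysvinit node looks roughly like this (EL-style service script assumed; systemd layouts differ):

    service ceph restart osd.423    # or: /etc/init.d/ceph restart osd.423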
[14:59] <nils_> it just so happens that I'm trying to break my cluster at the moment, or at least see how it reacts
[14:59] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[14:59] <pavera_> well thanks for your time!
[14:59] <pavera_> I'm happy it was as easy as restarting 2 osds
[15:04] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:06] <pavera_> I'm specifically planning for the eventuality of the datacenter going dark; in another life we had 2 or 3 occasions when the datacenter was able to give us advance warning and we were able to cleanly shut things down...
[15:06] <pavera_> I just need our procedures in place since "just turn it off" is not a good option apparently :)
[15:11] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:15] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[15:15] * ngoswami_ (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[15:15] * nitti (~nitti@162.222.47.218) has joined #ceph
[15:16] * delattec (~cdelatte@204-235-114.167.twcable.com) has joined #ceph
[15:17] * pavera_ (~tomc@64-199-185-54.ip.mcleodusa.net) Quit (Quit: pavera_)
[15:21] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[15:22] * delatte (~cdelatte@67.197.3.123) Quit (Ping timeout: 480 seconds)
[15:23] * ade (~abradshaw@dslb-188-102-071-027.188.102.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[15:23] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[15:29] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Quit: Leaving.)
[15:29] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[15:33] * diegows (~diegows@190.190.5.238) has joined #ceph
[15:36] * jluis (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[15:36] * ChanServ sets mode +o jluis
[15:36] * yvkr (~oftc-webi@cable-62-117-16-10.cust.telecolumbus.net) has joined #ceph
[15:38] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[15:39] * cmorandi1 (~cmorandin@213.30.180.34) has joined #ceph
[15:41] * nitti_ (~nitti@162.222.47.218) has joined #ceph
[15:41] * nitti (~nitti@162.222.47.218) Quit (Read error: Connection reset by peer)
[15:41] * cmorandin (~cmorandin@194.206.51.157) Quit (Ping timeout: 480 seconds)
[15:42] * joao (~joao@249.38.136.95.rev.vodafone.pt) Quit (Ping timeout: 480 seconds)
[15:44] <nils_> -1336/321940 objects degraded (-0.415%); 174955/321940 objects misplaced (54.344%) huh?
[15:46] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[15:46] * yvkr (~oftc-webi@cable-62-117-16-10.cust.telecolumbus.net) has left #ceph
[15:51] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[15:54] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:57] * nitti (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) has joined #ceph
[16:00] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[16:00] * jskinner (~jskinner@host-95-2-129.infobunker.com) has joined #ceph
[16:02] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[16:03] * jskinner (~jskinner@host-95-2-129.infobunker.com) Quit ()
[16:03] * nitti_ (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[16:06] * lalatenduM (~lalatendu@122.171.68.54) Quit (Quit: Leaving)
[16:07] * sudocat (~davidi@2601:e:2b80:992b:349c:9ecf:3e8a:23d0) has joined #ceph
[16:07] * lalatenduM (~lalatendu@122.171.68.54) has joined #ceph
[16:07] * alram (~alram@LAubervilliers-656-1-17-4.w217-128.abo.wanadoo.fr) Quit (Quit: leaving)
[16:07] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[16:10] * sjm1 (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has joined #ceph
[16:15] * sudocat1 (~davidi@73.166.99.97) has joined #ceph
[16:15] * sudocat (~davidi@2601:e:2b80:992b:349c:9ecf:3e8a:23d0) Quit (Quit: Leaving.)
[16:16] * nhm (~nhm@172.56.39.251) has joined #ceph
[16:16] * ChanServ sets mode +o nhm
[16:17] * fghaas (~florian@213162068098.public.t-mobile.at) has joined #ceph
[16:18] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[16:18] * kefu (~kefu@114.92.100.153) has joined #ceph
[16:20] * cok (~chk@2a02:2350:18:1010:9535:af30:63be:20ef) Quit (Quit: Leaving.)
[16:21] * nitti (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[16:22] * nitti (~nitti@162.222.47.218) has joined #ceph
[16:27] * fghaas (~florian@213162068098.public.t-mobile.at) Quit (Read error: Connection reset by peer)
[16:29] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:31] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[16:34] * sudocat1 (~davidi@73.166.99.97) Quit (Ping timeout: 480 seconds)
[16:36] <loganlsf1d> i seem to have bricked my cephfs this morning. some things broke, and in the process of trying to reset max_mds to 1 and get the setup working on 1 mds (there used to be 3), I rmfailed the downed mdses and they still show in the mdsmap as "in". the 1 remaining mds that I am trying to bring up just sits there stuck in resolve.
[16:36] <loganlsf1d> mds dump looks like this: http://paste.gentoolinux.info/dicacotiha.hs
[16:37] <loganlsf1d> I was looking at http://ceph.com/docs/master/cephfs/disaster-recovery/ and it seems like ceph fs reset may be what I need but it says the command does not exist. Running giant.
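Per the disaster-recovery page loganlsf1d links, the command he is missing appears only in later releases (hammer onward, which would explain giant rejecting it). Where it exists, the invocation is roughly:

    ceph fs reset <fs_name> --yes-i-really-mean-it   # drops all ranks except rank 0 from the mdsmap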
[16:39] * fghaas (~florian@213162068016.public.t-mobile.at) has joined #ceph
[16:41] * togdon (~togdon@74.121.28.6) has joined #ceph
[16:43] * dmsimard_away is now known as dmsimard
[16:45] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[16:45] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[16:45] * moore (~moore@64.202.160.88) has joined #ceph
[16:46] * moore_ (~moore@64.202.160.88) has joined #ceph
[16:46] * moore (~moore@64.202.160.88) Quit (Read error: Connection reset by peer)
[16:52] * fghaas (~florian@213162068016.public.t-mobile.at) Quit (Quit: Leaving.)
[16:53] * cmorandin (~cmorandin@67.53.158.77.rev.sfr.net) has joined #ceph
[16:55] * cmorandi1 (~cmorandin@213.30.180.34) Quit (Ping timeout: 480 seconds)
[16:56] * jtang (~jtang@109.255.42.21) Quit (Ping timeout: 480 seconds)
[16:56] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:57] * carmstrong (sid22558@id-22558.uxbridge.irccloud.com) Quit (Remote host closed the connection)
[16:57] * ipolyzos (sid45277@id-45277.uxbridge.irccloud.com) Quit (Remote host closed the connection)
[16:57] * supay (sid47179@id-47179.uxbridge.irccloud.com) Quit (Remote host closed the connection)
[16:57] * shk (sid33582@id-33582.uxbridge.irccloud.com) Quit (Remote host closed the connection)
[17:00] * karnan (~karnan@27.57.140.230) Quit (Remote host closed the connection)
[17:00] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[17:06] * sudocat1 (~davidi@192.185.1.20) has joined #ceph
[17:07] * Kioob (~Kioob@200.254.0.109.rev.sfr.net) Quit (Quit: Leaving.)
[17:09] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:11] * linjan_ (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[17:15] * oro (~oro@2001:620:20:16:7c05:6cd8:5140:c1ed) Quit (Ping timeout: 480 seconds)
[17:15] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[17:16] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) has joined #ceph
[17:19] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[17:21] * dgurtner (~dgurtner@178.197.231.49) Quit (Remote host closed the connection)
[17:21] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[17:21] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:21] * sleinen1 (~Adium@macsl.switch.ch) has joined #ceph
[17:22] * dgurtner (~dgurtner@178.197.231.49) has joined #ceph
[17:23] * carmstrong (sid22558@id-22558.uxbridge.irccloud.com) has joined #ceph
[17:25] * sputnik13 (~sputnik13@74.202.214.170) has joined #ceph
[17:25] * kylehutson (~kylehutso@n117m03.cis.ksu.edu) has joined #ceph
[17:26] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[17:26] * gregsfortytwo (~gregsfort@209.132.181.86) Quit (Ping timeout: 480 seconds)
[17:26] <kylehutson> Anybody have time for a crushmap question?
[17:26] * gregsfortytwo (~gregsfort@209.132.181.86) has joined #ceph
[17:27] * dgurtner (~dgurtner@178.197.231.49) Quit (Remote host closed the connection)
[17:27] * dgurtner (~dgurtner@178.197.231.49) has joined #ceph
[17:27] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Read error: Connection reset by peer)
[17:29] * shk (sid33582@id-33582.uxbridge.irccloud.com) has joined #ceph
[17:33] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[17:33] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[17:34] * ipolyzos (sid45277@id-45277.uxbridge.irccloud.com) has joined #ceph
[17:35] * supay (sid47179@id-47179.uxbridge.irccloud.com) has joined #ceph
[17:41] * fretb (frederik@november.openminds.be) Quit (Quit: leaving)
[17:42] * kylehutson (~kylehutso@n117m03.cis.ksu.edu) has left #ceph
[17:42] * fretb (~fretb@pie.frederik.pw) has joined #ceph
[17:42] * trociny (~mgolub@93.183.239.2) Quit (Ping timeout: 480 seconds)
[17:43] * trociny (~mgolub@93.183.239.2) has joined #ceph
[17:52] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[17:52] * cholcombe973 (~chris@73.25.105.99) has joined #ceph
[17:52] * treaki (~treaki@p4FDF6487.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[17:53] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[17:56] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:57] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[17:58] * bandrus (~brian@50.23.113.236-static.reverse.softlayer.com) has joined #ceph
[17:59] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[17:59] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[18:04] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[18:10] * nardial (~ls@ipservice-092-209-178-132.092.209.pools.vodafone-ip.de) has joined #ceph
[18:10] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[18:12] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[18:12] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[18:13] * treloskilo (~treloskil@77-4-106.static.cyta.gr) has joined #ceph
[18:13] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:13] * treloskilo (~treloskil@77-4-106.static.cyta.gr) Quit ()
[18:13] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:13] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:15] * sudocat1 (~davidi@192.185.1.20) Quit (Quit: Leaving.)
[18:15] * sudocat (~davidi@192.185.1.20) has joined #ceph
[18:16] * nardial (~ls@ipservice-092-209-178-132.092.209.pools.vodafone-ip.de) Quit (Quit: Leaving)
[18:19] * saltlake2 (~saltlake@12.250.199.170) has joined #ceph
[18:20] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:20] * rohanm (~rohanm@c-67-168-194-197.hsd1.or.comcast.net) Quit (Quit: Leaving)
[18:22] * saltlake (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[18:22] * hellertime (~Adium@a72-246-0-10.deploy.akamaitechnologies.com) has joined #ceph
[18:24] * gregmark (~Adium@68.87.42.115) has joined #ceph
[18:25] * sleinen1 (~Adium@macsl.switch.ch) Quit (Quit: Leaving.)
[18:28] <MentalRay> We are experiencing an issue with OpenStack and Ceph. When we migrate or resize instances, Nova tries to import the image for either operation, and that is causing a problem.
[18:28] <MentalRay> Is anyone using Icehouse or Juno who was able to "fix" this situation?
[18:31] * togdon (~togdon@74.121.28.6) has joined #ceph
[18:32] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[18:32] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit ()
[18:32] * ksingh (~Adium@2001:708:10:10:b183:8c5a:8411:e370) Quit (Ping timeout: 480 seconds)
[18:33] * PaulC (~paul@209.132.181.86) has joined #ceph
[18:34] * jordanP (~jordan@213.215.2.194) Quit (Quit: Leaving)
[18:36] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[18:38] * delatte (~cdelatte@67.197.3.123) has joined #ceph
[18:39] * saltlake2 (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[18:39] * thomnico (~thomnico@2a01:e35:8b41:120:1876:89c2:295c:b56a) Quit (Ping timeout: 480 seconds)
[18:44] * delattec (~cdelatte@204-235-114.167.twcable.com) Quit (Ping timeout: 480 seconds)
[18:44] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[18:47] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[18:50] * mykola (~Mikolaj@91.225.201.255) has joined #ceph
[18:53] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[18:54] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[18:55] * VisBits (~textual@8.29.138.28) has joined #ceph
[19:00] * dgurtner (~dgurtner@178.197.231.49) Quit (Ping timeout: 480 seconds)
[19:00] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) has joined #ceph
[19:03] * kefu (~kefu@114.92.100.153) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:03] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:07] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:10] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[19:10] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[19:17] * ircolle (~Adium@2601:1:a580:145a:7467:b5a:51ef:6d37) has joined #ceph
[19:21] * saltlake2 (~saltlake@12.250.199.170) has joined #ceph
[19:22] * LeaChim (~LeaChim@host86-147-114-247.range86-147.btcentralplus.com) has joined #ceph
[19:22] * saltlake (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[19:23] * lalatenduM (~lalatendu@122.171.68.54) Quit (Quit: Leaving)
[19:24] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:24] <VisBits> ceph-deploy osd prepare activates the new osd... it's not supposed to :\
[19:26] <alfredodeza> VisBits: what distro?
[19:26] <alfredodeza> iirc Ubuntu does that
[19:26] <VisBits> el7
[19:26] <alfredodeza> hrmn
[19:27] <VisBits> it's just annoying because every time i want to add a new node to my pool i have to do the disks staggered, because slamming 12 disks in all at once is exciting
[19:27] <VisBits> lol
[19:30] * nwat (~nwat@kyoto.soe.ucsc.edu) Quit (Quit: Leaving)
[19:30] * nwat (~nwat@kyoto.soe.ucsc.edu) has joined #ceph
[19:30] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[19:30] * hellertime (~Adium@a72-246-0-10.deploy.akamaitechnologies.com) Quit (Quit: Leaving.)
[19:32] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[19:33] * sudocat (~davidi@192.185.1.20) Quit (Ping timeout: 480 seconds)
[19:34] <alfredodeza> VisBits: if you find it reproducible with a test server then this would be a bug
[19:34] <VisBits> yeah its confirmed
[19:34] <VisBits> i submitted an auth bug last night
[19:34] <alfredodeza> would you mind creating a ticket in the tracker (tracker.ceph.com ) with full output when reproducing it?
[19:34] <alfredodeza> oh nice
[19:35] <VisBits> yeah scuttlemonkey had me email the boys at redhat on it directly
[19:37] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[19:37] * angdraug (~angdraug@c-50-174-102-105.hsd1.ca.comcast.net) has joined #ceph
[19:41] * vbellur (~vijay@122.178.205.42) Quit (Ping timeout: 480 seconds)
[19:43] <angdraug> is it possible to replace all your mons (one by one) without disrupting active rbd connections?
[19:43] <angdraug> in other words, do rbd clients keep track of monmap updates?
[19:44] <VisBits> alfredodeza issue 10922
[19:44] <kraken> VisBits might be talking about http://tracker.ceph.com/issues/10922 [ceph-deploy prepare activates the OSD automatically.]
[19:44] * Concubidated (~Adium@2607:f298:b:635:705b:4633:d954:102) has joined #ceph
[19:44] <VisBits> that's slick 6
[19:46] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[19:48] * puffy (~puffy@216.207.42.129) has joined #ceph
[19:51] <angdraug> sage: do rbd clients keep track of monmap updates?
[19:54] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[19:56] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[20:00] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[20:00] <angdraug> all I see in librados is MonClient::ping_monitor(); it doesn't look like it updates the monmap, am I missing something?
[20:01] * kfox (~kfox@96-41-192-212.dhcp.elbg.wa.charter.com) has joined #ceph
[20:02] <kfox> can you start with a simple rados gateway setup and switch to multiple regions later, or must you start with a multiregion setup?
[20:03] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[20:08] <VisBits> based on experience I'd start with a multi-region configuration
[20:08] <VisBits> i can't imagine transitioning that
[20:09] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: Leaving)
[20:09] * treaki (~treaki@2002:5b03:1148:0:221:6bff:fe3c:a3de) has joined #ceph
[20:10] <kfox> ok.
[20:10] <kfox> So can I follow the directions but just create one region/zone, and skip all the rest?
[20:16] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[20:18] <lxo> is it normal to get frequent scrub errors due to differences in hit_set_archive bytes, for tiered replicated pgs, or should I be concerned?
[20:18] * PaulC (~paul@209.132.181.86) Quit (Read error: Connection reset by peer)
[20:18] <lxo> (running giant here)
[20:22] <lxo> I've introduced an EC tier about a month ago, and it hasn't finished flushing all the data to EC yet, but I've been getting scrub errors every now and then. I wonder if these are still a consequence of the initial configuration change (introducing tiering for a non-empty pool). I haven't had much scrubbing since (most of the disk bandwidth is going to the conversion), so I can't tell whether inconsistencies reappear after the first error&repair, or if
[20:22] <lxo> everything will return to normal once all pgs are clean again
[20:22] <lxo> (clean not in the pg dump sense)
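The usual loop for chasing scrub errors like these; the pg id is a placeholder:

    ceph health detail     # lists which pgs are inconsistent
    ceph pg repair 3.1a    # schedule a repair of one of them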
[20:23] * nardial (~ls@ipservice-092-209-178-132.092.209.pools.vodafone-ip.de) has joined #ceph
[20:24] * nardial (~ls@ipservice-092-209-178-132.092.209.pools.vodafone-ip.de) Quit ()
[20:24] * linjan_ (~linjan@213.8.240.146) has joined #ceph
[20:27] <kfox> so, do you usually put multiple gateway instances in the same zone and put up a load balancer in front?
[20:29] * rljohnsn1 (~rljohnsn@ns25.8x8.com) has joined #ceph
[20:29] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Read error: Connection reset by peer)
[20:30] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has left #ceph
[20:30] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[20:31] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[20:34] * thomnico (~thomnico@2a01:e35:8b41:120:e487:bf21:c7ed:2fb) has joined #ceph
[20:35] * linjan_ (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[20:37] * rljohnsn1 (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[20:41] <kfox> so you typically would use zones across datacenters within the same country, and regions over countries?
[20:42] <kfox> would you ever want to use regions for datacenters within the same organization instead?
[20:42] <VisBits> anycast hosting to something like haproxy with the priorities set for the best locations to that hosted machine
[20:44] * linjan_ (~linjan@176.195.60.70) has joined #ceph
[20:45] <kfox> hmm... so a ceph cluster can be per region or per zone... so in some ways, it doesn't really matter then...
[20:45] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[20:46] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[20:46] <kfox> so I can do one region, named after our organization since it's geographically in one spot, and then do zones for the datacenters...
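A hedged sketch of that single-region, multi-zone layout with the firefly/giant-era federated tooling; the region name myorg, zone name dc1, and json file names are placeholders, and the json contents follow the federated-config docs:

    radosgw-admin region set --infile region.json       # define region "myorg" and its zone list
    radosgw-admin region default --rgw-region=myorg
    radosgw-admin regionmap update
    radosgw-admin zone set --rgw-zone=dc1 --infile zone-dc1.json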
[20:47] * evilrob0_ (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[20:47] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[20:47] <kfox> do regions/zones have to match up with openstack regions/zones at all?
[20:47] <kfox> some kind of keystone mapping?
[20:49] <kfox> we're talking about having several regions in different datacenters to ensure nova/neutron/cinder can be separated and managed on separate time schedules for different SLAs.
[20:50] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[20:50] * aszeszo (~aszeszo@adrj26.neoplus.adsl.tpnet.pl) Quit (Quit: Leaving.)
[20:51] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[20:55] * angdraug (~angdraug@c-50-174-102-105.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[21:02] * cholcombe973 (~chris@73.25.105.99) Quit (Ping timeout: 480 seconds)
[21:03] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[21:09] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:10] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) has joined #ceph
[21:10] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[21:13] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:16] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[21:16] * puffy (~puffy@216.207.42.129) Quit (Ping timeout: 480 seconds)
[21:16] <kfox> to scale, do you put up multiple instances for the same zone, then DNS round-robin them?
[21:17] * eternaleye (~eternaley@50.245.141.77) has joined #ceph
[21:18] * puffy (~puffy@216.207.42.129) has joined #ceph
[21:21] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[21:25] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) has joined #ceph
[21:27] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[21:31] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:31] * saltlake2 (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[21:36] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) Quit (Quit: Thanks for everything! :-) inte+ :-))
[21:36] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[21:38] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) Quit ()
[21:39] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit ()
[21:42] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[21:42] * rwheeler (~rwheeler@173.48.208.246) has joined #ceph
[21:44] * infernix (nix@2001:41f0::ffff:ffff:ffff:fffe) Quit (Ping timeout: 480 seconds)
[21:45] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[21:48] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[21:50] * joshd (~jdurgin@38.122.20.226) has joined #ceph
[21:50] * avozza (~avozza@83.162.204.36) Quit (Remote host closed the connection)
[21:52] * thomnico (~thomnico@2a01:e35:8b41:120:e487:bf21:c7ed:2fb) Quit (Ping timeout: 480 seconds)
[21:56] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[21:58] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[21:58] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) has joined #ceph
[21:58] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[22:00] * mjevans (~mje@209.141.34.79) has joined #ceph
[22:02] * georgem (~Adium@fwnat.oicr.on.ca) has left #ceph
[22:02] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Read error: Connection reset by peer)
[22:03] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[22:03] * saltlake2 (~saltlake@12.250.199.170) has joined #ceph
[22:05] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) Quit (Ping timeout: 480 seconds)
[22:07] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[22:11] * saltlake (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[22:11] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[22:11] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) Quit (Quit: Leaving.)
[22:11] <mjevans> I have an OSD that I lost due to not knowing journals could be re-initialized. I fixed several other OSDs after discovering that while trying to re-create this one. However, the OSD is now stuck marked as lost. I'd like to re-add it, and removing/re-creating the OSD per that page's directions does attach it to the cluster, but it's still stuck as lost. How can I tell ceph to add the replacement OSD at its old id number?
[22:19] <kevinkevin> have you tried 'ceph osd out xx', 'ceph osd crush remove xx', 'ceph auth del xx', 'ceph osd rm xx', and re-ceph-deploy-ed the osd?
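A sketch of the full remove-and-recreate cycle being suggested, written out for osd.2; the crush weight, hostname, and keyring path are illustrative assumptions:

    ceph osd out 2
    ceph osd crush remove osd.2
    ceph auth del osd.2
    ceph osd rm 2
    # re-create: ceph osd create hands back the lowest free id (2, if it was freed)
    ceph osd create
    ceph-osd -i 2 --mkfs --mkkey
    ceph auth add osd.2 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-2/keyring
    ceph osd crush add osd.2 1.0 host=node1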
[22:19] <mjevans> Out, check; crush remove, check; auth del, check; and rm, check. I was unable to use ceph-deploy, however, as it wants to reconfigure the entire host.
[22:20] <mjevans> Instead I used: ceph-osd -d -i 2 --mkfs --mkkey
[22:20] <mjevans> I'll pastebin it.
[22:20] * linjan_ (~linjan@176.195.60.70) Quit (Ping timeout: 480 seconds)
[22:20] <mjevans> http://pastebin.com/UuayYrj8
[22:22] * moore_ (~moore@64.202.160.88) Quit (Remote host closed the connection)
[22:23] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[22:23] <mjevans> The cluster has moved on to epoch 900+ while I've been trying to resolve the issues. I still had the data in multiple other places, since the rulesets resulted in 2 copies for each of 2 hosts (4 copies total), so there's not been any data loss. However I can't find a way of telling ceph to forget the 'ceph osd lost' :: osd.2 down in weight 1 up_from 217 up_thru 237 down_at 239 last_clean_interval [208,213) lost_at 239
[22:24] <kevinkevin> could /var/log/ceph/ceph-osd.2.log be reporting anything meaningful?
[22:25] <VisBits> mjevans you mean remove the osd ?
[22:26] <mjevans> kevinkevin: there are no further messages after opening the journal; on other working OSDs the next line is 0 <cls> cls/hello/cls_hello.cc:271: loading cls_hello, followed by the osd.x epoch number, dump data, etc
[22:27] <mjevans> VisBits: Yeah, I thought the data on it was gone, so I outed, lost, etc. the OSD and then tried to completely remove and re-create a replacement. However said replacement won't re-add, because the cluster still thinks osd.2 is lost and I can't find out how to un-mark that.
[22:27] <VisBits> ceph osd out osd.2
[22:27] <VisBits> ceph osd rm osd.2
[22:27] <VisBits> then add it back
[22:27] <mjevans> VisBits: Already in my steps; check the pastebin
[22:27] <VisBits> did you download the crush map and remove it
[22:27] <VisBits> then reupload the crushmap?
[22:28] <mjevans> No, I re-uploaded a crush map re-compiled from text
[22:28] <kevinkevin> (ceph crush remove does that)
[22:28] <VisBits> well maybe its not
[22:28] * togdon (~togdon@74.121.28.6) has joined #ceph
[22:28] <VisBits> if you remove it from crush it shouldn't ack the existence of osd.2
[22:29] <mjevans> Tell you what, I'll remove it again, and do the ceph osd dump with it removed to see what it says, then look again before re-compiling the crush map.
[22:30] * saltlake2 (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[22:30] <mjevans> VisBits: Right; so if I do the remove steps, osd.2 is not listed. However, THEN when I run >> ceph osd create << it returns 2, and osd.2 exists again with the lost_at state.
[22:31] <VisBits> nuke the drive and recreate it so it doesn't get the osd.2 id
[22:31] <VisBits> ie, leave a placeholder for that id
[22:31] <VisBits> did you try the forget and move on commands?
[22:32] <mjevans> I would really like to keep the old ID. There's a 'naming scheme' based on the numbers which makes it much easier to remember which OSD attaches to which drive.
[22:32] <VisBits> ceph osd lost 1
[22:32] <VisBits> ceph osd lost 2; in your case
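For reference, the lost command needs an explicit confirmation flag; a minimal sketch for the osd.2 case above:

    ceph osd lost 2 --yes-i-really-mean-it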
[22:32] <mjevans> Yeah, I did the lost thing initially...
[22:33] <mjevans> That's the thing I need to un-do so ceph stops being confused that it now has an osd 2 back.
[22:33] <VisBits> ive done a lot of failure simulation and never had this
[22:33] <VisBits> the worst is when you try to delete a pool in use by mds, good luck getting that fixed lol
[22:35] <mjevans> Worst is actually that ceph osd help doesn't dump //all// the possible commands, so I am having a tough time actually finding a list of possible holes to poke with sticks.
[22:36] * ircolle is now known as ircolle-afk
[22:37] <mjevans> It's also pretty bad that you have to jump through so many hoops to use an external journal, and to reference it with a persistent device identifier. The ceph-deploy scripts (at least the ones on Debian's almost-stable side) weren't taking them. The configuration file also HAS a journal device line, but it's one of those 'actually a template value' 'settings'; the real setting is where the journal file/link points.
[22:38] <mjevans> I think it is a naming convention flaw to have template values next to settings without a clear distinction.
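A sketch of the ceph.conf lines being described, using the stock default journal path; the size is an illustrative value, and (as discussed above) these act as template values consumed at mkfs time, while at runtime the daemon simply follows the journal file/symlink in the osd data directory:

    [osd]
        osd journal = /var/lib/ceph/osd/$cluster-$id/journal
        osd journal size = 5120    ; in MB; illustrative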
[22:39] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[22:40] <VisBits> it uses FSID once it creates
[22:40] <VisBits> or UUIDs
[22:40] <VisBits> the only thing persistent is the create line no?
[22:41] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[22:41] <mjevans> VisBits: Then what tells ceph where block devices with osds on them are when you restart a node?
[22:41] <VisBits> it scans the partitions looking for ceph journal and mounts it i think
[22:42] <VisBits> or ceph data
[22:42] <mjevans> So in one spot you have a /setting/ that does that, with something that looks like a /setting/ but is actually a /template value/
[22:42] <VisBits> the osd configurations are stored on the actual device
[22:42] * linjan_ (~linjan@213.8.240.146) has joined #ceph
[22:42] <VisBits> ive not found any concrete documentation on this, its just things ive learned/noticed from testing
[22:43] <mjevans> VisBits: No, it reads /etc/ceph.conf to figure out which OSDs a host has, and from those OSDs which devices to mount to read further settings (observed in behavior, not code). There is other information stored with the OSDs in that file, but those things are the value of the template setting for creating it, not an actual setting for running it.
[22:43] <VisBits> no it doesnt
[22:43] <VisBits> lol
[22:43] <mjevans> er /etc/ceph/ceph.conf
[22:43] <VisBits> i dont have any osds at all in my ceph.conf files
[22:44] <VisBits> my ceph.conf is monitor configuration and thats it
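A sketch of the monitor-only ceph.conf shape VisBits describes; the fsid, names, and addresses are placeholders:

    [global]
        fsid = <cluster uuid>
        mon initial members = mon1, mon2, mon3
        mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx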
[22:44] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Ping timeout: 480 seconds)
[22:44] * bandrus (~brian@50.23.113.236-static.reverse.softlayer.com) Quit (Ping timeout: 480 seconds)
[22:44] <mjevans> Wow, that's... so it's all template values for the OSDs then? Crazy.
[22:44] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[22:45] <mjevans> Ok, if that's the case I have an experiment
[22:46] <VisBits> once your cluster is up that running data is in the monitors, if you were to have a total failure i think you would need osd configurations? im looking at the init scripts now
[22:46] <kevinkevin> mjevans: you've got to try ceph-deploy; since I started using it, my ceph.conf is very short
[22:47] * garphy`aw is now known as garphy
[22:47] * Kingrat (~shiny@cpe-96-29-149-153.swo.res.rr.com) Quit (Ping timeout: 480 seconds)
[22:50] <VisBits> yeah
[22:50] <VisBits> on start it connects to the monitor to figure out what osds that host contains
[22:50] <VisBits> so realistically you should define them on your monitor in case you had a total failure of all pieces
[22:50] <VisBits> but if your monitors are up it won't matter
[22:50] <VisBits> lines 356-368 of the init.d script
[22:51] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[22:52] <VisBits> is it normal for osd processes to go down when you add a new host and new osds? i've seen it twice today while adding
[22:53] <kevinkevin> lack of RAM, as first suspect
[22:53] <kevinkevin> is your host under huge load?
[22:53] <VisBits> basically idle
[22:54] <kevinkevin> nevermind, then
[22:54] <VisBits> i have 32gb ram, showing 20gb as cache
[22:54] * bandrus (~brian@212.sub-70-211-65.myvzw.com) has joined #ceph
[22:54] <mjevans> So. I did the remove steps, rebooted, and then did ceph osd create. After all that it STILL resurrected osd.2 from the dead with its lost state. Removing the OSD is //not// purging it from the cluster.
[22:55] <VisBits> log showing fault with nothing to send going to standby
[22:55] <mjevans> VisBits: Very small network. The monitors ARE the osd hosts. It's literally as small a cluster as can possibly be made.
[22:55] <mjevans> (I don't consider 1 host an actual cluster)
[22:55] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:57] <VisBits> kevinkevin http://pastebin.com/ugeBH4cz
[22:58] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[22:58] <VisBits> the status says the osd daemon is closing
[22:59] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[23:00] * infernix (nix@2001:41f0::ffff:ffff:ffff:fffe) has joined #ceph
[23:01] * bilco105_ (~bilco105@irc.bilco105.com) has left #ceph
[23:01] * stephan (~Adium@dslb-188-102-038-068.188.102.pools.vodafone-ip.de) has joined #ceph
[23:01] * bilco105_ (~bilco105@irc.bilco105.com) has joined #ceph
[23:01] <VisBits> http://tracker.ceph.com/issues/8387
[23:01] <VisBits> i guess the current stable isn't stable??
[23:01] * Kingrat (~shiny@cpe-96-29-149-153.swo.res.rr.com) has joined #ceph
[23:01] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[23:03] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[23:03] * topro_ (~prousa@pD9F85D71.dip0.t-ipconnect.de) has joined #ceph
[23:04] * linjan_ (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[23:04] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[23:05] <topro_> hi, my cluster is giving me a headache: one OSD refuses to come up, though the corresponding osd process is running. any ideas?
[23:05] * delatte (~cdelatte@67.197.3.123) Quit (Quit: This computer has gone to sleep)
[23:05] <VisBits> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg09973.html
[23:05] <mjevans> Is it possible to upgrade from 0.80.7 to 0.92 directly, or do I need to step through the releases in between?
[23:06] <kevinkevin> isn't 0.87 the current stable?
[23:06] <mjevans> topro_: what does your /var/log/ceph/ceph-osd.X.log for that OSD say after mounting?
[23:06] <mjevans> kevinkevin: is it? It's difficult to tell.
[23:06] * moore (~moore@97-124-123-201.phnx.qwest.net) has joined #ceph
[23:08] * infernix (nix@2001:41f0::ffff:ffff:ffff:fffe) Quit (Ping timeout: 480 seconds)
[23:08] * nitti (~nitti@162.222.47.218) Quit (Quit: Leaving...)
[23:09] * mykola (~Mikolaj@91.225.201.255) Quit (Quit: away)
[23:09] <mjevans> It seems you are correct, giant == 0.87.x
[23:09] <topro_> mjevans: http://paste.debian.net/153415/ doesn't look that different from what working OSDs are logging
[23:11] <topro_> mjevans: only the last start, beginning 23:00:24, is relevant; before that I had a journal issue. I completely erased that OSD and created it from scratch; now I can start it, but it won't show UP
[23:12] * nitti (~nitti@162.222.47.218) has joined #ceph
[23:13] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Quit: Leaving.)
[23:13] * linjan_ (~linjan@176.195.60.70) has joined #ceph
[23:13] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[23:16] <mjevans> topro_: It looks like the osd came up in to the cluster. ceph osd dump | grep osd.7 ??
[23:16] <mjevans> Ahh you /too/ are on 0.80.7
[23:16] * steveeJ (~junky@virthost3.stefanjunker.de) Quit (Remote host closed the connection)
[23:16] <mjevans> Whoever's doing the debian packages. A 'jessie' target sure would be nice given that Jessie is almost released.
[23:17] <mjevans> It's a giant pita to walk around and poke all of these packages that think they're in the wrong release.
[23:17] <topro_> osd.7 down in weight 1 up_from 20283 up_thru 20283 down_at 20285 last_clean_interval [20265,20280) 172.16.0.4:6807/28414 192.168.0.4:6804/28414 192.168.0.4:6805/28414 172.16.0.4:6808/28414 exists 01301ab4-6000-484e-a215-4195b494c002
[23:18] <topro_> manually marked it IN so it shows as IN, but still down
[23:19] * angdraug (~angdraug@mobile-166-171-249-222.mycingular.net) has joined #ceph
[23:19] <topro_> what does the UUID at the end of the line refer to?
[23:19] <mjevans> topro_: I'm not sure, you've reached the end of my knowledge about ceph, and I'm currently bashing my head against trying to get ceph 0.87.x installed on some debian jessie systems to try and work around my own issue.
[23:20] <topro_> are you using packages from official debian repos or from ceph repos?
[23:21] <mjevans> topro_: the ceph repos, since the debian ones seem to lack some kind of fix for what I need
[23:21] * garphy is now known as garphy`aw
[23:21] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:21] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[23:22] <topro_> debian repos got 0.87 in experimental. I didn't see any more recent 0.87.x release yet
[23:22] <mjevans> experimental might be too far ahead, these aren't converted to systemd yet.
[23:22] <kevinkevin> as sad as it sounds: ubuntu looks like a better choice than debian for running ceph
[23:23] * angdraug (~angdraug@mobile-166-171-249-222.mycingular.net) Quit ()
[23:25] * cmorandi1 (~cmorandin@213.30.180.34) has joined #ceph
[23:26] * steveeJ (~junky@virthost3.stefanjunker.de) has joined #ceph
[23:27] * cmorandin (~cmorandin@67.53.158.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[23:28] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[23:30] * sjm1 (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has left #ceph
[23:31] <topro_> hm, completely removed osd from crush and auth, completely created new osd fs and key, added to crush, added auth, started daemon, OSD still down :/
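Two checks that can narrow down whether the daemon itself thinks it has booted, matching the osd.7 case above; `ceph daemon` must run on the osd host and assumes the stock admin-socket path:

    ceph daemon osd.7 status   # the daemon's own view of its state (e.g. "booting")
    ceph tell osd.7 version    # confirms the daemon is reachable through the cluster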
[23:31] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has left #ceph
[23:32] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[23:33] * saltlake (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[23:34] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:34] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:36] <VisBits> can you not create an rbd on an erasure pool?
[23:38] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[23:38] <VisBits> wow
[23:38] <VisBits> that sucks
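In this era rbd images can't live directly on an erasure-coded pool; the usual workaround is a replicated cache tier in front of the EC pool. A minimal sketch with illustrative pool names, pg counts, and image size:

    ceph osd pool create ec-data 128 128 erasure
    ceph osd pool create rbd-cache 128
    ceph osd tier add ec-data rbd-cache
    ceph osd tier cache-mode rbd-cache writeback
    ceph osd tier set-overlay ec-data rbd-cache
    rbd create --pool ec-data --size 10240 myimage   # 10 GB image, served via the cache tier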
[23:39] * nitti_ (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) has joined #ceph
[23:39] * cholcombe973 (~chris@73.25.105.99) has joined #ceph
[23:40] * nitti_ (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[23:40] * nitti_ (~nitti@162.222.47.218) has joined #ceph
[23:41] * nitti (~nitti@162.222.47.218) Quit (Read error: Connection reset by peer)
[23:42] * CephTestC (~CephTestC@199.91.185.156) Quit ()
[23:45] <VisBits> i feel like cephx updates very very slowly
[23:45] <VisBits> it takes up to 10 minutes for changes to work
[23:45] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[23:45] * linjan_ (~linjan@176.195.60.70) Quit (Ping timeout: 480 seconds)
[23:46] <gregsfortytwo> generally speaking the client needs to reconnect to the cluster to pick up new permissions, if that's what you mean
[23:46] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[23:46] <VisBits> if i have 2 rbds mapped on a client, i have to disconnect those, then reconnect (unmap, remap) to get the new permissions?
[23:48] <gregsfortytwo> that's my recollection
[23:48] <gregsfortytwo> or you can wait for an auth timeout period, which by default is every hour or so
[23:49] <VisBits> i tried that, none of my volumes will mount now
[23:49] <VisBits> *sigh*
[23:51] <VisBits> i changed the permissions to specify * for the osd allow instead of allowing specific pools, and it works; that code is buggy
[23:52] <gregsfortytwo> probably have the syntax wrong; it's a little finicky
[23:52] <VisBits> "allow * pool=Backups-Hybrid, allow * pool=General-Storage, allow * Backups-DVS"
[23:52] * nitti (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) has joined #ceph
[23:52] <gregsfortytwo> I don't think you can use the * operator in per-pool perms like that
[23:52] <VisBits> i tried rwx
[23:52] <VisBits> same issues
[23:53] <gregsfortytwo> dunno then, I haven't written a caps string in a while ;)
[23:53] <joshd> the last clause should have a pool= in it probably
[23:53] <VisBits> no point in creating keyrings for specific pool sets if they only work as a global
[23:54] <VisBits> hmm i see that
[23:54] <VisBits> ill test it in my dev cluster
[23:54] <VisBits> good catch josh
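Putting joshd's fix together, a sketch of the corrected caps applied with ceph auth caps; the client name is illustrative:

    ceph auth caps client.backups mon 'allow r' osd 'allow rwx pool=Backups-Hybrid, allow rwx pool=General-Storage, allow rwx pool=Backups-DVS'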
[23:55] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[23:55] * infernix (nix@2001:41f0::ffff:ffff:ffff:fffe) has joined #ceph
[23:57] * saltlake (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[23:58] * nitti_ (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[23:59] * nitti (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.