#ceph IRC Log


IRC Log for 2016-08-25

Timestamps are in GMT/BST.

[0:00] <s3an2> not sure how well running OSDs with existing open connections will handle the MTU change; maybe worth setting it in the networking config files and just doing another reboot to ensure it's 'clean'.
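[Making the MTU persistent across reboots, as s3an2 suggests, means putting it in the distro's network config rather than setting it at runtime. A minimal sketch for a Debian-style system; the interface name eth0, the address, and the MTU value 9000 are all assumptions, not from the log:]

```shell
# /etc/network/interfaces -- persist a jumbo-frame MTU so it survives reboots
#   auto eth0
#   iface eth0 inet static
#       address 10.0.0.10/24
#       mtu 9000
# After the reboot, confirm the kernel actually applied it:
ip link show eth0 | grep -o 'mtu [0-9]*'
```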
[0:00] * Unai1 (~Adium@50-115-70-150.static-ip.telepacific.net) Quit (Quit: Leaving.)
[0:02] * lobstar (~raindog@61TAABJ56.tor-irc.dnsbl.oftc.net) Quit ()
[0:03] <``rawr> :( I figure since it increased the MTU it can't do too much harm
[0:03] <``rawr> (it also takes much longer than it should to reboot)
[0:04] <``rawr> I haven't seen a suicide timeout yet
[0:04] <``rawr> https://www.irccloud.com/pastebin/gtdaA1U1/
[0:05] <``rawr> but I see a lot of things related to heartbeats timing out and slow requests
[0:06] * guerby (~guerby@ip165.tetaneutral.net) Quit (Ping timeout: 480 seconds)
[0:08] <jiffe> anyone see why this PG is stuck inactive? http://nsab.us/public/ceph, I have 37 osds, all are up and in
[0:09] <``rawr> 52/110 in osds are down
[0:09] <``rawr> didn't seem to help much unfortunately :(
[0:10] * guerby (~guerby@ip165.tetaneutral.net) has joined #ceph
[0:11] * baojg (~baojg@61.135.155.34) has joined #ceph
[0:12] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[0:16] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:23] * Unai (~Adium@50-115-70-150.static-ip.telepacific.net) has joined #ceph
[0:26] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) has joined #ceph
[0:27] * xarses (~xarses@64.124.158.32) Quit (Ping timeout: 480 seconds)
[0:28] * gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) Quit (Quit: Textual IRC Client: www.textualapp.com)
[0:31] * rendar (~I@95.235.182.241) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:35] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[0:38] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:42] * Snowman (~oracular@209.95.50.14) has joined #ceph
[0:43] <``rawr> is there a way to just forget about unfound pieces in placement groups?
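[There is such a mechanism: Ceph can be told to give up on unfound objects on a per-PG basis. A sketch of the commands involved; the PG id 1.2f is a placeholder, and whether to revert or delete depends on whether an older version of the objects is acceptable:]

```shell
# Inspect the unfound objects in the PG before discarding anything
ceph pg 1.2f list_missing

# Tell Ceph to stop waiting for them: "revert" rolls objects back to
# the most recent prior version, "delete" forgets them entirely
ceph pg 1.2f mark_unfound_lost revert
```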
[0:43] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[0:46] <jiffe> can I add an existing osd back in after I've completely removed it?
[0:46] <jiffe> I'm not sure how else to fix this PG
[0:46] <jiffe> it is definitely stuck on something
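[For a PG stuck like jiffe's, the usual first step is to ask the cluster what the PG is actually blocked on rather than removing/re-adding OSDs. A sketch; the PG id 1.2f is a placeholder:]

```shell
# List PGs stuck in the inactive state
ceph pg dump_stuck inactive

# Query the PG directly: the "recovery_state" section shows the peering
# history and which OSD or event it is blocked waiting for
ceph pg 1.2f query

# If peering is blocked on an OSD that was fully removed and will never
# return, it can be declared lost (destructive; last resort)
ceph osd lost 12 --yes-i-really-mean-it
```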
[0:48] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[0:51] * zigo (~quassel@gplhost-3-pt.tunnel.tserv18.fra1.ipv6.he.net) Quit (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.)
[0:53] * zigo (~quassel@gplhost-3-pt.tunnel.tserv18.fra1.ipv6.he.net) has joined #ceph
[0:54] * zigo (~quassel@gplhost-3-pt.tunnel.tserv18.fra1.ipv6.he.net) Quit ()
[0:54] * zigo (~quassel@gplhost-3-pt.tunnel.tserv18.fra1.ipv6.he.net) has joined #ceph
[0:56] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (Quit: valeech)
[0:59] * lcurtis_ (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[1:06] <``rawr> I actually shut down all of my OSDs but now my mon looks like this:
[1:06] <``rawr> https://www.irccloud.com/pastebin/11y0cMDn/
[1:07] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) Quit (Quit: WeeChat 1.5)
[1:12] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[1:12] * oms101 (~oms101@p20030057EA6F3B00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:12] * Snowman (~oracular@209.95.50.14) Quit ()
[1:13] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) Quit (Quit: because)
[1:14] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Read error: Connection reset by peer)
[1:20] * davidz (~davidz@2605:e000:1313:8003:10a1:2bcc:7144:9840) has joined #ceph
[1:21] * oms101 (~oms101@p20030057EA4F7700C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:22] * raphaelsc (~raphaelsc@177.19.29.72) Quit (Remote host closed the connection)
[1:27] <TheSov> does anyone know if ceph will support parallel reads in the future?
[1:27] <TheSov> is that on the roadmap
[1:28] * kuku (~kuku@119.93.91.136) has joined #ceph
[1:30] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) has joined #ceph
[1:30] * KindOne (kindone@h44.149.29.71.dynamic.ip.windstream.net) has joined #ceph
[1:31] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[1:31] * badone (~badone@66.187.239.16) has joined #ceph
[1:32] * corevoid (~corevoid@ip68-5-125-61.oc.oc.cox.net) has joined #ceph
[1:35] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[1:35] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[1:37] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:38] * Unai (~Adium@50-115-70-150.static-ip.telepacific.net) Quit (Quit: Leaving.)
[1:43] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[1:52] <wes_dillingham> is there any mechanism that prevents scrubbing from happening during times of rebalance?
[1:52] <TheSov> you can disable scrubbing
[1:52] <TheSov> until your rebalance is complete
[1:53] <TheSov> normal scrubbing is fine to leave on
[1:53] <TheSov> it's deep scrubs that you may need to disable
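[The approach TheSov describes maps to cluster-wide flags that pause scrubbing until the rebalance completes. A sketch of the command sequence:]

```shell
# Pause scrubbing for the duration of the rebalance
ceph osd set noscrub        # only needed if you want regular scrubs off too
ceph osd set nodeep-scrub   # deep scrubs are the expensive ones

# ...wait for recovery/backfill to finish (watch `ceph -s`), then re-enable:
ceph osd unset noscrub
ceph osd unset nodeep-scrub
```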
[1:53] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) Quit (Ping timeout: 480 seconds)
[1:55] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[1:56] * Dr_O (~owen@00012c05.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:58] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:58] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit ()
[2:01] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[2:02] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Ping timeout: 480 seconds)
[2:05] * Dr_O (~owen@00012c05.user.oftc.net) has joined #ceph
[2:07] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[2:07] <mlovell> doh. my problem ended up being a versionlock problem. ignore me.
[2:12] * BrianA1 (~BrianA@fw-rw.shutterfly.com) has left #ceph
[2:13] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[2:27] * oracular (~Kaervan@108.61.123.75) has joined #ceph
[2:35] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * \ask (~ask@oz.develooper.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * mhackett (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * wushudoin (~wushudoin@38.140.108.2) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * destrudo (~destrudo@tomba.sonic.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * dougf (~dougf@75-131-32-223.static.kgpt.tn.charter.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * erice (~eric@c-76-120-53-165.hsd1.co.comcast.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * jmn (~jmn@nat-pool-bos-t.redhat.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * scuttle (~scuttle@nat-pool-rdu-t.redhat.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * carter (~carter@li98-136.members.linode.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * WildyLion (~simba@45.32.185.17) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * gregsfortytwo (~gregsfort@transit-86-181-132-209.redhat.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * DrewBeer (~DrewBeer@216.152.240.203) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * brad- (~Brad@TMA-1.brad-x.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * jarrpa (~jarrpa@63.225.131.166) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * LiftedKilt (LiftedKilt@is.in.the.madhacker.biz) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * ceph-ircslackbot3 (~ceph-ircs@ds9536.dreamservers.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * wer (~wer@216.197.66.124) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * koollman (samson_t@78.47.248.51) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * kingcu (~kingcu@kona.ridewithgps.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * jamespd (~mucky@mucky.socket7.org) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * marco208 (~root@159.253.7.204) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * EthanL (~lamberet@cce02cs4037-fa12-z.ams.hpecore.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * shubjero (~shubjero@107.155.107.246) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * arbrandes1 (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Aeso (~aesospade@aesospadez.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * bstillwell (~bryan@bokeoa.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * skarn (skarn@0001f985.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Realmy (~Realmy@0002243f.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * corevoid (~corevoid@ip68-5-125-61.oc.oc.cox.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * KindOne (kindone@0001a7db.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * kuku (~kuku@119.93.91.136) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * zigo (~quassel@gplhost-3-pt.tunnel.tserv18.fra1.ipv6.he.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * sudocat (~dibarra@192.185.1.20) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * davidz (~davidz@2605:e000:1313:8003:10a1:2bcc:7144:9840) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * cronburg__ (~cronburg@nat-pool-bos-t.redhat.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * nilez (~nilez@ec2-52-37-170-77.us-west-2.compute.amazonaws.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * danieagle (~Daniel@177.138.169.68) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * spgriffinjr (~spgriffin@66-46-246-206.dedicated.allstream.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * md_ (~john@205.233.53.42) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * natarej (~natarej@101.188.54.14) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * SamYaple (~SamYaple@162.209.126.134) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * diq (~diq@2620:11c:f:2:c23f:d5ff:fe62:112c) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * zeestrat (uid176159@id-176159.brockwell.irccloud.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * dcwangmit01 (~dcwangmit@162-245.23-239.PUBLIC.monkeybrains.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * react (~react@retard.io) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * braderhart (sid124863@braderhart.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * scalability-junk (sid6422@id-6422.ealing.irccloud.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * rkeene (1011@oc9.org) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * shaon (~shaon@shaon.me) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Guest798 (~herrsergi@ec2-107-21-210-136.compute-1.amazonaws.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * ngoswami (~ngoswami@121.244.87.116) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * post-factum (~post-fact@vulcan.natalenko.name) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * wak-work (~wak-work@2620:15c:202:0:79a1:c6ff:eee0:c5a6) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * mtanski (~mtanski@65.244.82.98) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * ntpttr (~ntpttr@192.55.54.38) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * mlovell (~mlovell@69-195-66-94.unifiedlayer.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * jiffe (~jiffe@nsab.us) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Psi-Jack (~psi-jack@mx.linux-help.org) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * blynch (~blynch@vm-nat.msi.umn.edu) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * BlaXpirit (~irc@blaxpirit.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * verleihnix (~verleihni@195.12.46.2) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * niknakpa1dywak (~xander.ni@outbound.lax.demandmedia.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * med (~medberry@00012b50.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * beardo (~beardo__@beardo.cc.lehigh.edu) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * JoeJulian (~JoeJulian@108.166.123.190) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * gmmaha (~gmmaha@00021e7e.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * ktdreyer (~kdreyer@polyp.adiemus.org) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Tene (~tene@173.13.139.236) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * dmanchad (~dmanchad@nat-pool-bos-t.redhat.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * bitshiftr (~scott@rubyi.st) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * skorgu_ (skorgu@pylon.skorgu.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * elder_ (sid70526@id-70526.charlton.irccloud.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * sage (~quassel@2607:f298:5:101d:f816:3eff:fe21:1966) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * diegows (~diegows@main.woitasen.com.ar) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * jnq (sid150909@0001b7cc.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * devicenull (sid4013@id-4013.ealing.irccloud.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * iggy (~iggy@mail.vten.us) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * mjevans (~mjevans@li984-246.members.linode.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * scubacuda (sid109325@0001fbab.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * oracular (~Kaervan@26XAABCR4.tor-irc.dnsbl.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * badone (~badone@66.187.239.16) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * ``rawr (uid23285@id-23285.tooting.irccloud.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Racpatel (~Racpatel@2601:87:0:24af::313b) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * cathode (~cathode@50.232.215.114) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * oliveiradan2 (~doliveira@67.214.238.80) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * adamcrume (~quassel@2601:647:cb01:f890:a288:69ff:fe70:6caa) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * cholcombe (~chris@2001:67c:1562:8007::aac:40f1) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * dis (~dis@00018d20.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * bassam (sid154933@id-154933.brockwell.irccloud.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * jfaj (~jan@p5798303C.dip0.t-ipconnect.de) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Kingrat (~shiny@cpe-74-129-33-192.kya.res.rr.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * jproulx (~jon@kvas.csail.mit.edu) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * dmonschein (~dmonschei@00020eb4.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Kruge (~Anus@198.211.99.93) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * evilrob (~evilrob@2600:3c00::f03c:91ff:fedf:1d3d) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * TiCPU (~owrt@2001:470:1c:40::2) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * masber (~masber@129.94.15.152) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * jackhill (~jackhill@bog.hcoop.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * icey (~Chris@0001bbad.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * IvanJobs (~ivanjobs@103.50.11.146) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * LegalResale (~LegalResa@66.165.126.130) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Sketch (~Sketch@2604:180:2::a506:5c0d) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * jlayton (~jlayton@cpe-2606-A000-1125-405B-14D9-DFF4-8FF1-7DD8.dyn6.twc.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * zerick (~zerick@irc.quassel.zerick.me) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * snelly (~cjs@sable.island.nu) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * eth00 (~eth00@74.81.187.100) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * nathani (~nathani@2607:f2f8:ac88::) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * JohnPreston78 (sid31393@id-31393.ealing.irccloud.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * DeMiNe0 (~DeMiNe0@104.131.119.74) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * jidar (~jidar@104.207.140.225) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * markl (~mark@knm.org) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * joshd (~jdurgin@206.169.83.146) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * arthurh (~arthurh@38.101.34.128) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * hughsaunders (~hughsaund@2001:4800:7817:101:1843:3f8a:80de:df65) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * GooseYArd (~GooseYArd@ec2-52-5-245-183.compute-1.amazonaws.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * ndru (~jawsome@00020819.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * chutz (~chutz@rygel.linuxfreak.ca) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * andrewschoen (~andrewsch@2001:4801:7821:77:be76:4eff:fe10:afc7) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * mfa298 (~mfa298@krikkit.yapd.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * lurbs (user@uber.geek.nz) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * thadood (~thadood@slappy.thunderbutt.org) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * gtrott (sid78444@id-78444.tooting.irccloud.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * `10` (~10@69.169.91.14) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * krogon_ (~krogon@134.134.137.75) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * qman (~rohroh@2600:3c00::f03c:91ff:fe69:92af) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * mnaser (~mnaser@162.253.53.193) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * skullone (~skullone@shell.skull-tech.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * Superdawg (~Superdawg@ec2-54-243-59-20.compute-1.amazonaws.com) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * folivora (~out@devnull.drwxr-xr-x.eu) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * logan- (~logan@63.143.60.136) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * pasties (~pasties@00021c52.user.oftc.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * dustinm` (~dustinm`@68.ip-149-56-14.net) Quit (magnet.oftc.net synthon.oftc.net)
[2:35] * aarontc (~aarontc@2001:470:e893::1:1) Quit (magnet.oftc.net synthon.oftc.net)
[2:36] * IvanJobs_ (~ivanjobs@122.14.140.7) has joined #ceph
[2:36] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[2:36] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[2:36] * corevoid (~corevoid@ip68-5-125-61.oc.oc.cox.net) has joined #ceph
[2:36] * badone (~badone@66.187.239.16) has joined #ceph
[2:36] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[2:36] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) has joined #ceph
[2:36] * kuku (~kuku@119.93.91.136) has joined #ceph
[2:36] * davidz (~davidz@2605:e000:1313:8003:10a1:2bcc:7144:9840) has joined #ceph
[2:36] * zigo (~quassel@gplhost-3-pt.tunnel.tserv18.fra1.ipv6.he.net) has joined #ceph
[2:36] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:36] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[2:36] * ``rawr (uid23285@id-23285.tooting.irccloud.com) has joined #ceph
[2:36] * cronburg__ (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[2:36] * nilez (~nilez@ec2-52-37-170-77.us-west-2.compute.amazonaws.com) has joined #ceph
[2:36] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[2:36] * \ask (~ask@oz.develooper.com) has joined #ceph
[2:36] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[2:36] * Racpatel (~Racpatel@2601:87:0:24af::313b) has joined #ceph
[2:36] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[2:36] * mhackett (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[2:36] * cathode (~cathode@50.232.215.114) has joined #ceph
[2:36] * oliveiradan2 (~doliveira@67.214.238.80) has joined #ceph
[2:36] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[2:36] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[2:36] * danieagle (~Daniel@177.138.169.68) has joined #ceph
[2:36] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) has joined #ceph
[2:36] * spgriffinjr (~spgriffin@66-46-246-206.dedicated.allstream.net) has joined #ceph
[2:36] * adamcrume (~quassel@2601:647:cb01:f890:a288:69ff:fe70:6caa) has joined #ceph
[2:36] * md_ (~john@205.233.53.42) has joined #ceph
[2:36] * natarej (~natarej@101.188.54.14) has joined #ceph
[2:36] * destrudo (~destrudo@tomba.sonic.net) has joined #ceph
[2:36] * dougf (~dougf@75-131-32-223.static.kgpt.tn.charter.com) has joined #ceph
[2:36] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[2:36] * erice (~eric@c-76-120-53-165.hsd1.co.comcast.net) has joined #ceph
[2:36] * jmn (~jmn@nat-pool-bos-t.redhat.com) has joined #ceph
[2:36] * diq (~diq@2620:11c:f:2:c23f:d5ff:fe62:112c) has joined #ceph
[2:36] * zeestrat (uid176159@id-176159.brockwell.irccloud.com) has joined #ceph
[2:36] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[2:36] * scuttle (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[2:36] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[2:36] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) has joined #ceph
[2:36] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[2:36] * dcwangmit01 (~dcwangmit@162-245.23-239.PUBLIC.monkeybrains.net) has joined #ceph
[2:36] * cholcombe (~chris@2001:67c:1562:8007::aac:40f1) has joined #ceph
[2:36] * WildyLion (~simba@45.32.185.17) has joined #ceph
[2:36] * react (~react@retard.io) has joined #ceph
[2:36] * gregsfortytwo (~gregsfort@transit-86-181-132-209.redhat.com) has joined #ceph
[2:36] * dis (~dis@00018d20.user.oftc.net) has joined #ceph
[2:36] * bassam (sid154933@id-154933.brockwell.irccloud.com) has joined #ceph
[2:36] * braderhart (sid124863@braderhart.user.oftc.net) has joined #ceph
[2:36] * jfaj (~jan@p5798303C.dip0.t-ipconnect.de) has joined #ceph
[2:36] * scalability-junk (sid6422@id-6422.ealing.irccloud.com) has joined #ceph
[2:36] * Kingrat (~shiny@cpe-74-129-33-192.kya.res.rr.com) has joined #ceph
[2:36] * jproulx (~jon@kvas.csail.mit.edu) has joined #ceph
[2:36] * rkeene (1011@oc9.org) has joined #ceph
[2:36] * dmonschein (~dmonschei@00020eb4.user.oftc.net) has joined #ceph
[2:36] * shaon (~shaon@shaon.me) has joined #ceph
[2:36] * Kruge (~Anus@198.211.99.93) has joined #ceph
[2:36] * evilrob (~evilrob@2600:3c00::f03c:91ff:fedf:1d3d) has joined #ceph
[2:36] * Guest798 (~herrsergi@ec2-107-21-210-136.compute-1.amazonaws.com) has joined #ceph
[2:36] * TiCPU (~owrt@2001:470:1c:40::2) has joined #ceph
[2:36] * masber (~masber@129.94.15.152) has joined #ceph
[2:36] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) has joined #ceph
[2:36] * DrewBeer (~DrewBeer@216.152.240.203) has joined #ceph
[2:36] * jackhill (~jackhill@bog.hcoop.net) has joined #ceph
[2:36] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) has joined #ceph
[2:36] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[2:36] * icey (~Chris@0001bbad.user.oftc.net) has joined #ceph
[2:36] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[2:36] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[2:36] * brad- (~Brad@TMA-1.brad-x.com) has joined #ceph
[2:36] * wak-work (~wak-work@2620:15c:202:0:79a1:c6ff:eee0:c5a6) has joined #ceph
[2:36] * mtanski (~mtanski@65.244.82.98) has joined #ceph
[2:36] * LegalResale (~LegalResa@66.165.126.130) has joined #ceph
[2:36] * jarrpa (~jarrpa@63.225.131.166) has joined #ceph
[2:36] * Sketch (~Sketch@2604:180:2::a506:5c0d) has joined #ceph
[2:36] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[2:36] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) has joined #ceph
[2:36] * LiftedKilt (LiftedKilt@is.in.the.madhacker.biz) has joined #ceph
[2:36] * jlayton (~jlayton@cpe-2606-A000-1125-405B-14D9-DFF4-8FF1-7DD8.dyn6.twc.com) has joined #ceph
[2:36] * zerick (~zerick@irc.quassel.zerick.me) has joined #ceph
[2:36] * ntpttr (~ntpttr@192.55.54.38) has joined #ceph
[2:36] * mlovell (~mlovell@69-195-66-94.unifiedlayer.com) has joined #ceph
[2:36] * ceph-ircslackbot3 (~ceph-ircs@ds9536.dreamservers.com) has joined #ceph
[2:36] * snelly (~cjs@sable.island.nu) has joined #ceph
[2:36] * eth00 (~eth00@74.81.187.100) has joined #ceph
[2:36] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[2:36] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) has joined #ceph
[2:36] * nathani (~nathani@2607:f2f8:ac88::) has joined #ceph
[2:36] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[2:36] * JohnPreston78 (sid31393@id-31393.ealing.irccloud.com) has joined #ceph
[2:36] * wer (~wer@216.197.66.124) has joined #ceph
[2:36] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) has joined #ceph
[2:36] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[2:36] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[2:36] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[2:36] * jiffe (~jiffe@nsab.us) has joined #ceph
[2:36] * koollman (samson_t@78.47.248.51) has joined #ceph
[2:36] * Psi-Jack (~psi-jack@mx.linux-help.org) has joined #ceph
[2:36] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[2:36] * BlaXpirit (~irc@blaxpirit.com) has joined #ceph
[2:36] * verleihnix (~verleihni@195.12.46.2) has joined #ceph
[2:36] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[2:36] * kingcu (~kingcu@kona.ridewithgps.com) has joined #ceph
[2:36] * jamespd (~mucky@mucky.socket7.org) has joined #ceph
[2:36] * marco208 (~root@159.253.7.204) has joined #ceph
[2:36] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) has joined #ceph
[2:36] * EthanL (~lamberet@cce02cs4037-fa12-z.ams.hpecore.net) has joined #ceph
[2:36] * shubjero (~shubjero@107.155.107.246) has joined #ceph
[2:36] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[2:36] * arbrandes1 (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) has joined #ceph
[2:36] * DeMiNe0 (~DeMiNe0@104.131.119.74) has joined #ceph
[2:36] * jidar (~jidar@104.207.140.225) has joined #ceph
[2:36] * niknakpa1dywak (~xander.ni@outbound.lax.demandmedia.com) has joined #ceph
[2:36] * skarn (skarn@0001f985.user.oftc.net) has joined #ceph
[2:36] * Realmy (~Realmy@0002243f.user.oftc.net) has joined #ceph
[2:36] * Aeso (~aesospade@aesospadez.com) has joined #ceph
[2:36] * bstillwell (~bryan@bokeoa.com) has joined #ceph
[2:36] * markl (~mark@knm.org) has joined #ceph
[2:36] * med (~medberry@00012b50.user.oftc.net) has joined #ceph
[2:36] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[2:36] * jnq (sid150909@0001b7cc.user.oftc.net) has joined #ceph
[2:36] * devicenull (sid4013@id-4013.ealing.irccloud.com) has joined #ceph
[2:36] * scubacuda (sid109325@0001fbab.user.oftc.net) has joined #ceph
[2:36] * elder_ (sid70526@id-70526.charlton.irccloud.com) has joined #ceph
[2:36] * sage (~quassel@2607:f298:5:101d:f816:3eff:fe21:1966) has joined #ceph
[2:36] * beardo (~beardo__@beardo.cc.lehigh.edu) has joined #ceph
[2:36] * JoeJulian (~JoeJulian@108.166.123.190) has joined #ceph
[2:36] * gmmaha (~gmmaha@00021e7e.user.oftc.net) has joined #ceph
[2:36] * iggy (~iggy@mail.vten.us) has joined #ceph
[2:36] * ktdreyer (~kdreyer@polyp.adiemus.org) has joined #ceph
[2:36] * diegows (~diegows@main.woitasen.com.ar) has joined #ceph
[2:36] * Tene (~tene@173.13.139.236) has joined #ceph
[2:36] * mjevans (~mjevans@li984-246.members.linode.com) has joined #ceph
[2:36] * dmanchad (~dmanchad@nat-pool-bos-t.redhat.com) has joined #ceph
[2:36] * bitshiftr (~scott@rubyi.st) has joined #ceph
[2:36] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) has joined #ceph
[2:36] * skorgu_ (skorgu@pylon.skorgu.net) has joined #ceph
[2:36] * Superdawg (~Superdawg@ec2-54-243-59-20.compute-1.amazonaws.com) has joined #ceph
[2:36] * mnaser (~mnaser@162.253.53.193) has joined #ceph
[2:36] * arthurh (~arthurh@38.101.34.128) has joined #ceph
[2:36] * hughsaunders (~hughsaund@2001:4800:7817:101:1843:3f8a:80de:df65) has joined #ceph
[2:36] * GooseYArd (~GooseYArd@ec2-52-5-245-183.compute-1.amazonaws.com) has joined #ceph
[2:36] * folivora (~out@devnull.drwxr-xr-x.eu) has joined #ceph
[2:36] * logan- (~logan@63.143.60.136) has joined #ceph
[2:36] * ndru (~jawsome@00020819.user.oftc.net) has joined #ceph
[2:36] * dustinm` (~dustinm`@68.ip-149-56-14.net) has joined #ceph
[2:36] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[2:36] * andrewschoen (~andrewsch@2001:4801:7821:77:be76:4eff:fe10:afc7) has joined #ceph
[2:36] * mfa298 (~mfa298@krikkit.yapd.net) has joined #ceph
[2:36] * lurbs (user@uber.geek.nz) has joined #ceph
[2:36] * thadood (~thadood@slappy.thunderbutt.org) has joined #ceph
[2:36] * skullone (~skullone@shell.skull-tech.com) has joined #ceph
[2:36] * gtrott (sid78444@id-78444.tooting.irccloud.com) has joined #ceph
[2:36] * pasties (~pasties@00021c52.user.oftc.net) has joined #ceph
[2:36] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[2:36] * `10` (~10@69.169.91.14) has joined #ceph
[2:36] * krogon_ (~krogon@134.134.137.75) has joined #ceph
[2:36] * qman (~rohroh@2600:3c00::f03c:91ff:fe69:92af) has joined #ceph
[2:36] * aarontc (~aarontc@2001:470:e893::1:1) has joined #ceph
[2:36] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[2:38] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Ping timeout: 480 seconds)
[2:39] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[2:42] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[2:44] * IvanJobs_ (~ivanjobs@122.14.140.7) Quit (Read error: Connection reset by peer)
[2:44] * Racpatel (~Racpatel@2601:87:0:24af::313b) Quit (Quit: Leaving)
[2:45] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[2:55] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:55] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[3:01] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) Quit (Remote host closed the connection)
[3:01] * _ndevos (~ndevos@nat-pool-ams2-5.redhat.com) has joined #ceph
[3:01] * _ndevos is now known as ndevos
[3:04] * yanzheng (~zhyan@125.70.21.51) has joined #ceph
[3:10] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[3:11] * Hemanth (~hkumar_@103.228.221.141) Quit (Ping timeout: 480 seconds)
[3:13] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[3:19] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[3:35] * i_m (~ivan.miro@83.149.37.190) has joined #ceph
[3:35] * kuku_ (~kuku@203.177.235.23) has joined #ceph
[3:36] * kuku (~kuku@119.93.91.136) Quit (Read error: Connection reset by peer)
[3:38] * kuku (~kuku@119.93.91.136) has joined #ceph
[3:38] * jfaj_ (~jan@p4FC5BF57.dip0.t-ipconnect.de) has joined #ceph
[3:41] * sebastian-w (~quassel@212.218.8.139) has joined #ceph
[3:41] * ggarg_ (~Gaurav@x2f2bb39.dyn.telefonica.de) has joined #ceph
[3:42] * sebastian-w_ (~quassel@212.218.8.139) Quit (Read error: Connection reset by peer)
[3:44] * kuku_ (~kuku@203.177.235.23) Quit (Ping timeout: 480 seconds)
[3:44] * davidz (~davidz@2605:e000:1313:8003:10a1:2bcc:7144:9840) Quit (Quit: Leaving.)
[3:45] * jfaj (~jan@p5798303C.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:46] * i_m (~ivan.miro@83.149.37.190) Quit (Ping timeout: 480 seconds)
[3:48] * elmo_ (~james@faun.canonical.com) has joined #ceph
[3:49] * ggarg (~Gaurav@x2f2275a.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:51] * derjohn_mobi (~aj@x4db292ca.dyn.telefonica.de) has joined #ceph
[3:57] * elmo_ (~james@faun.canonical.com) Quit (Quit: leaving)
[3:58] * elmo_ (~james@faun.canonical.com) has joined #ceph
[3:58] * derjohn_mob (~aj@x4db0eec0.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[4:00] * i_m (~ivan.miro@83.149.37.190) has joined #ceph
[4:02] * elmo_ (~james@faun.canonical.com) Quit ()
[4:03] * Hemanth (~hkumar_@103.228.221.141) has joined #ceph
[4:10] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[4:10] * elmo (~james@faun.canonical.com) Quit (Quit: leaving)
[4:10] * elmo (~james@faun.canonical.com) has joined #ceph
[4:19] * efirs (~firs@98.207.153.155) has joined #ceph
[4:22] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:22] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[4:23] <Nicho1as> anyone know if the default kernel of Debian 8.5(current) is okay with Ceph?
[4:23] * wjw-freebsd3 (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[4:28] * kefu (~kefu@114.92.101.38) has joined #ceph
[4:30] <TheSov> debian has trouble with ceph, best to go to ubuntu
[4:30] <TheSov> or use redhat
[4:30] <TheSov> anyone used lrbd?
[4:37] * valeech (~valeech@166.170.32.17) has joined #ceph
[4:42] <Nicho1as> alright, I think I'm gonna test if the 4.6 kernel from jessie(debian8)-backports repository would work well on a host
[4:51] * vbellur (~vijay@71.234.224.255) has joined #ceph
[5:03] * danieagle (~Daniel@177.138.169.68) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[5:10] * raphaelsc (~raphaelsc@177.19.29.72) has joined #ceph
[5:11] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[5:18] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[5:18] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[5:19] * danieagle (~Daniel@177.138.169.68) has joined #ceph
[5:20] * Hemanth (~hkumar_@103.228.221.141) Quit (Ping timeout: 480 seconds)
[5:23] * nih (~fauxhawk@108.61.99.238) has joined #ceph
[5:29] * valeech (~valeech@166.170.32.17) Quit (Quit: valeech)
[5:29] * KindOne_ (kindone@h61.130.30.71.dynamic.ip.windstream.net) has joined #ceph
[5:35] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:35] * KindOne_ is now known as KindOne
[5:36] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Quit: Leaving)
[5:38] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:41] * EinstCrazy (~EinstCraz@222.69.243.130) has joined #ceph
[5:42] * vimal (~vikumar@114.143.165.227) has joined #ceph
[5:42] * Vacuum_ (~Vacuum@88.130.207.104) has joined #ceph
[5:46] * dneary (~dneary@207.236.147.202) has joined #ceph
[5:49] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[5:49] * Vacuum__ (~Vacuum@88.130.214.18) Quit (Ping timeout: 480 seconds)
[5:53] * nih (~fauxhawk@108.61.99.238) Quit ()
[5:56] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[6:01] * walcubi_ (~walcubi@p5795A981.dip0.t-ipconnect.de) has joined #ceph
[6:02] * hoopy (~Kyso@tsn109-201-152-228.dyn.nltelcom.net) has joined #ceph
[6:03] * Effed (Effed@ip5-70.skekraft.riksnet.se) has joined #ceph
[6:06] * nilez (~nilez@ec2-52-37-170-77.us-west-2.compute.amazonaws.com) Quit (Ping timeout: 480 seconds)
[6:06] * nilez (~nilez@96.44.189.66) has joined #ceph
[6:06] * Effed (Effed@ip5-70.skekraft.riksnet.se) has left #ceph
[6:09] * walcubi (~walcubi@p5795A6A7.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:12] * [0x4A6F]_ (~ident@p508CD5E0.dip0.t-ipconnect.de) has joined #ceph
[6:12] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:12] * [0x4A6F]_ is now known as [0x4A6F]
[6:18] * huangjun|2 (~kvirc@113.57.168.154) has joined #ceph
[6:18] * huangjun (~kvirc@113.57.168.154) Quit (Read error: Connection reset by peer)
[6:24] * EinstCrazy (~EinstCraz@222.69.243.130) Quit (Remote host closed the connection)
[6:26] * sankarshan (~sankarsha@45.124.141.154) has joined #ceph
[6:29] * sankarshan (~sankarsha@45.124.141.154) Quit ()
[6:31] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[6:32] * hoopy (~Kyso@tsn109-201-152-228.dyn.nltelcom.net) Quit ()
[6:33] * kefu is now known as kefu|afk
[6:34] * vimal (~vikumar@114.143.165.227) Quit (Quit: Leaving)
[6:50] * Mikko (~Mikko@dfs61tybvzkmdp9nnsp5t-3.rev.dnainternet.fi) has joined #ceph
[6:52] * dec (~dec@45.96.198.104.bc.googleusercontent.com) has joined #ceph
[6:53] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[6:54] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:59] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:59] * verbalins (~MonkeyJam@93.115.84.202) has joined #ceph
[7:03] * Mikko (~Mikko@dfs61tybvzkmdp9nnsp5t-3.rev.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[7:08] * Hemanth (~hkumar_@103.228.221.141) has joined #ceph
[7:09] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[7:09] * EinstCrazy (~EinstCraz@222.69.243.130) has joined #ceph
[7:11] * EinstCrazy (~EinstCraz@222.69.243.130) Quit (Remote host closed the connection)
[7:15] * kefu|afk (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:17] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) has joined #ceph
[7:29] * verbalins (~MonkeyJam@93.115.84.202) Quit ()
[7:42] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[7:47] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[7:49] * swami1 (~swami@49.38.3.154) has joined #ceph
[7:51] * kefu (~kefu@114.92.101.38) has joined #ceph
[7:56] * dusti (~Kottizen@46.166.188.243) has joined #ceph
[8:03] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:11] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: I've gotta go)
[8:12] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:25] * zdzichu (zdzichu@pipebreaker.pl) has joined #ceph
[8:26] <zdzichu> hi, is drop.ceph.com having network difficulties at the moment?
[8:26] * dusti (~Kottizen@46.166.188.243) Quit ()
[8:26] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[8:27] <zdzichu> traceroute http://paste.debian.net/791432/ dies at sixth hop
[8:52] * dennis_ (~dennis@2a00:801:7:1:1a03:73ff:fed6:ffec) has joined #ceph
[8:52] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[8:54] * raphaelsc (~raphaelsc@177.19.29.72) Quit (Remote host closed the connection)
[8:54] * pam (~pam@193.106.183.1) has joined #ceph
[8:54] <pam> Hello
[8:54] * ade (~abradshaw@p4FF7AA05.dip0.t-ipconnect.de) has joined #ceph
[8:55] <pam> I have a quick question about rbd and snapshots and filesystem consistency
[8:56] <IcePic> zdzichu: I get quickly to http://drop.ceph.com, but the nginx there gives me an error page
[8:56] <IcePic> so some box at drop. is very much reachable, but not very happy. =)
[8:57] <pam> if I have an rbd image and on top a filesystem like xfs and create a snapshot using rbd snap create... I assume the FS of the snapshot will not be consistent. Right?
[8:57] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[8:57] <pam> We are using Jewel here on all our systems...
[9:00] <zdzichu> IcePic: thanks for checking
[9:00] * bviktor (~bviktor@213.16.80.50) has joined #ceph
[9:00] <dennis_> Assume I have two computers with 4 osds on each. I would like a system that still works even if one computer is gone. Is that possible? If I have a crush map that states that all data should be replicated on 2 computers, then what happens if one of them is down? Do I need 3 computers for this to work?
[9:01] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[9:04] <Be-El> pam: if you flush and freeze the filesystem, the snapshot will be consistent
[9:05] <pam> @Be-El: Ok, I see. I hoped that maybe the RBD tools where smart enough for doing that for me :-)
[9:05] <Be-El> pam: otherwise you are right, there may always be some unflushed buffer that will lead to inconsistencies
[9:06] * dlan (~dennis@116.228.88.131) Quit (Ping timeout: 480 seconds)
[9:06] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) has joined #ceph
[9:07] <pam> @Be-El: thanks for info!
[9:07] <Be-El> pam: if you use rbds with qemu, there's support for synchronization within qemu based on a VM agent. for normal rbd mounts, syncing and freezing on the client should be sufficient
[9:07] <Be-El> dennis_: you need to setup the pools to have a size of 2 and a min_size of 1. in that case clients will still be able to use the storage on one system
[9:08] <Be-El> dennis_: but with two hosts ceph may not be the right choice, since ceph performs better the more hosts you have
[9:08] <pam> @Be-El: we use the latter one. So we will create a simple shell script which will flush/freeze/create snapshot/unfreeze the image
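A minimal sketch of such a script, assuming a hypothetical pool "rbd", image "vm-disk", and mount point /mnt/vm-disk (adjust all three for the actual setup; `fsfreeze` comes from util-linux):

```shell
#!/bin/sh
# Hypothetical names; pool, image, and mount point are assumptions.
set -e
MNT=/mnt/vm-disk
POOL=rbd
IMAGE=vm-disk
SNAP="snap-$(date +%Y%m%d-%H%M%S)"

sync                        # flush dirty page cache to the device
fsfreeze --freeze "$MNT"    # quiesce the filesystem; new writes block here
rbd snap create "$POOL/$IMAGE@$SNAP"
fsfreeze --unfreeze "$MNT"  # resume I/O
```

Writes block for the whole freeze window, so keeping the snapshot step between freeze and unfreeze as short as possible matters.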
[9:09] <dennis_> Be-El: thanks. and if the second host come back up again the data is redistributed so both hosts have all data?
[9:09] * dlan (~dennis@116.228.88.131) has joined #ceph
[9:09] <dennis_> Be-El: right, just trying to understand how it work and what happens in failure cases
[9:10] <Be-El> dennis_: it should be resynchronized. but a two host setup is not recommended, since you may run into problems with split brain situations, especially in cases of network partitions
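The pool settings Be-El describes can be sketched like this, assuming the default "rbd" pool name:

```shell
ceph osd pool set rbd size 2      # keep 2 replicas of every object
ceph osd pool set rbd min_size 1  # allow client I/O with only 1 replica up
```

Note that the split-brain concern also applies at the monitor level: an even number of monitors cannot keep quorum through a partition, which is another reason two-host clusters are discouraged.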
[9:16] * wjw-freebsd3 (~wjw@smtp.digiware.nl) has joined #ceph
[9:17] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:22] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[9:24] * analbeard (~shw@support.memset.com) has joined #ceph
[9:24] * analbeard (~shw@support.memset.com) has left #ceph
[9:24] * analbeard (~shw@support.memset.com) has joined #ceph
[9:24] * doppelgrau (~doppelgra@132.252.235.172) Quit (Read error: Connection reset by peer)
[9:26] <Be-El> does anyone mind sharing their setup for filestore_wbthrottle_* for production use? disabling wbthrottle increases osd benchmarks significantly, but I'm not sure whether it is advisable to disable it (mixed setup, VMs with RBD, cephfs with desire for single thread performance, rgw)
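For concreteness, these are the knobs in question as a ceph.conf fragment; the numeric values below are placeholders for illustration, not tuned recommendations:

```ini
[osd]
filestore wbthrottle enable = true
; xfs variants shown; parallel btrfs options exist
filestore wbthrottle xfs bytes start flusher = 41943040
filestore wbthrottle xfs bytes hard limit = 419430400
filestore wbthrottle xfs ios start flusher = 500
filestore wbthrottle xfs ios hard limit = 5000
```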
[9:26] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) has joined #ceph
[9:31] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[9:37] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[9:41] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[9:42] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[9:42] * ``rawr (uid23285@id-23285.tooting.irccloud.com) Quit (Quit: Connection closed for inactivity)
[9:43] * owasserm_ (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[9:48] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) has joined #ceph
[9:48] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[9:49] * corevoid (~corevoid@ip68-5-125-61.oc.oc.cox.net) Quit (Ping timeout: 480 seconds)
[9:50] * gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) has joined #ceph
[9:53] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[9:54] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[9:55] * kefu (~kefu@114.92.101.38) Quit (Max SendQ exceeded)
[9:56] * kefu (~kefu@114.92.101.38) has joined #ceph
[9:59] * rendar (~I@host212-182-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[10:00] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[10:01] * ivve (~zed@m176-68-133-212.cust.tele2.se) has joined #ceph
[10:04] * Zeis (~utugi____@108.61.122.223) has joined #ceph
[10:04] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:fdc8:c4d5:24bd:956f) has joined #ceph
[10:04] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[10:04] * madkiss (~madkiss@178.165.131.90.wireless.dyn.drei.com) has joined #ceph
[10:07] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[10:10] * bara (~bara@213.175.37.12) has joined #ceph
[10:15] * derjohn_mobi (~aj@x4db292ca.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[10:22] * i_m (~ivan.miro@83.149.37.190) Quit (Ping timeout: 480 seconds)
[10:23] * Mikko (~Mikko@dfs61tyh8zmsdsb8zvwwt-3.rev.dnainternet.fi) has joined #ceph
[10:26] * i_m (~ivan.miro@31.173.101.240) has joined #ceph
[10:29] * techospark (~unknown@nat-23-0.nsk.sibset.net) has joined #ceph
[10:34] * Zeis (~utugi____@5AEAAA8CT.tor-irc.dnsbl.oftc.net) Quit ()
[10:44] * kefu (~kefu@114.92.101.38) Quit (Max SendQ exceeded)
[10:45] * kefu (~kefu@114.92.101.38) has joined #ceph
[10:45] * IvanJobs_ (~ivanjobs@103.50.11.146) has joined #ceph
[10:45] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Read error: Connection reset by peer)
[10:51] <Hatsjoe> Hi all, got a question, what is the best and least impactful way of importing a raw KVM image into ceph? So with the least downtime for the VM? Currently, I destroy (shut off) the VPS, import the raw image using rbd import, and then start the VPS again running on Ceph, but depending on the size of the raw image, this can take a long time. Is there a better way?
[10:52] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:53] * Mikko (~Mikko@dfs61tyh8zmsdsb8zvwwt-3.rev.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[10:54] <IcePic> seems like many of the limiting factors here are far outside of ceph, but rather "can you or can you not have it running in the old place while importing" and so on
[10:58] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[11:04] * derjohn_mobi (~aj@2001:6f8:1337:0:cd1a:84dc:9a47:b830) has joined #ceph
[11:05] * Hemanth (~hkumar_@103.228.221.141) Quit (Ping timeout: 480 seconds)
[11:07] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) has joined #ceph
[11:08] * pam (~pam@193.106.183.1) Quit (Quit: Textual IRC Client: www.textualapp.com)
[11:08] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[11:11] * IvanJobs_ (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[11:17] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[11:18] * zhen (~Thunderbi@130.57.30.250) has joined #ceph
[11:19] * zhen (~Thunderbi@130.57.30.250) Quit ()
[11:21] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[11:28] * F|1nt (~F|1nt@host37-211.lan-isdn.imaginet.fr) has joined #ceph
[11:29] * ivve (~zed@m176-68-133-212.cust.tele2.se) Quit (Ping timeout: 480 seconds)
[11:31] * b0e (~aledermue@213.95.25.82) has joined #ceph
[11:32] <Hatsjoe> IcePic, yes, that's possible, and it would be great to have a way to import the incremental changes to the source into ceph, so that when you actually switch the VPS to ceph (by editing the XML and rebooting the VPS), the final sync takes very little time
[11:33] <Hatsjoe> But I couldn't find a way to do this
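For the one-shot copy itself, qemu-img can write straight into RBD (assuming qemu was built with the rbd driver); the source path, pool, and image names here are hypothetical:

```shell
# Convert a local raw image directly into an RBD image:
qemu-img convert -p -f raw -O raw /var/lib/libvirt/images/vps.img rbd:rbd/vps-disk
```

This is still a full copy, though; copying the bulk while the VM runs and syncing only the final delta at cutover would need hypervisor-level block mirroring rather than anything in the rbd tooling.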
[11:33] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[11:36] * lmb (~Lars@ip5b41f0a4.dynamic.kabel-deutschland.de) Quit (Quit: Leaving)
[11:46] * thomnico (~thomnico@2a01:e35:8b41:120:58dd:2ae8:e949:4fdb) has joined #ceph
[11:48] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[11:49] * techospark (~unknown@nat-23-0.nsk.sibset.net) Quit (Quit: Leaving)
[11:49] * krypto (~krypto@G68-90-105-197.sbcis.sbc.com) has joined #ceph
[11:54] * wjw-freebsd3 (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[11:56] * madkiss (~madkiss@178.165.131.90.wireless.dyn.drei.com) Quit (Quit: Leaving.)
[11:56] * thomnico (~thomnico@2a01:e35:8b41:120:58dd:2ae8:e949:4fdb) Quit (Quit: Ex-Chat)
[11:59] * SurfMaths1 (~sardonyx@37.203.209.18) has joined #ceph
[12:00] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[12:07] * wjw-freebsd3 (~wjw@smtp.medusa.nl) has joined #ceph
[12:11] * huats (~quassel@stuart.objectif-libre.com) has joined #ceph
[12:12] <Be-El> does 'ceph tell osd.X bench' use the same code path as all other osd commands (queue, execution etc.), or does it only measure the disk setup itself?
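For reference, the command takes optional total-size and block-size arguments; as far as I understand it writes through the OSD's object store (journal included) but does not go through the client/messenger/PG path, so it is closer to a disk-setup benchmark than a full client-op benchmark:

```shell
# Write 1 GiB in 4 MiB writes through osd.0's object store
# (these are also the defaults if the arguments are omitted):
ceph tell osd.0 bench 1073741824 4194304
```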
[12:26] * dan__ (~Daniel@2a00:1ee0:3:1337:4d98:7b7a:4320:b1a2) has joined #ceph
[12:26] * DanFoster (~Daniel@office.34sp.com) Quit (Read error: Connection reset by peer)
[12:29] * SurfMaths1 (~sardonyx@37.203.209.18) Quit ()
[12:31] * owasserm_ (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Quit: Ex-Chat)
[12:31] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Quit: Ex-Chat)
[12:31] * huangjun|2 (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[12:32] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[12:34] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[12:40] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[12:44] * Hemanth (~hkumar_@103.228.221.141) has joined #ceph
[12:46] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[12:47] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) has joined #ceph
[12:48] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[12:48] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[12:49] <IcePic> Hatsjoe: to me, it sounds like something you would tell the hypervisor, not ceph per se, as it would have to work identically if the new storage was iscsi,nfs or other
[12:54] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[12:54] * wjw-freebsd3 (~wjw@smtp.medusa.nl) Quit (Ping timeout: 480 seconds)
[13:04] <Hatsjoe> IcePic, afaik you can only clone within the same pool when doing it on the hypervisor, besides, the only configuration the hypervisor contains is the monitor IPs and rbd image in the domain XML, and the auth secret in libvirt
[13:04] * Gugge_47527 (gugge@92.246.2.105) has joined #ceph
[13:04] <Hatsjoe> So correct me if I'm wrong, but the importing/incremental import is really something that has to be done on the ceph cluster itself, right?
[13:04] * Gugge-47527 (gugge@92.246.2.105) Quit (Read error: Connection reset by peer)
[13:04] * Gugge_47527 is now known as Gugge-47527
[13:04] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[13:09] * kuku (~kuku@112.203.30.2) has joined #ceph
[13:13] * Kurt (~Adium@2001:628:1:5:70fb:f890:2151:9338) Quit (Quit: Leaving.)
[13:18] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[13:18] * salwasser (~Adium@c-76-118-229-231.hsd1.ma.comcast.net) has joined #ceph
[13:18] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit ()
[13:19] * Zyn (~JamesHarr@46.166.190.214) has joined #ceph
[13:22] * thoht (~oftc-webi@per34-2-78-243-229-39.fbx.proxad.net) has joined #ceph
[13:23] <thoht> Hello. i'm running ceph v 9.2.1 on 3 nodes (with replica 3) and want to update to the latest version, jewel 10.2.2. what is the best way? for now, i'm running ubuntu 14 LTS and using repo deb https://download.ceph.com/debian-infernalis/ trusty main
[13:24] <thoht> is it safe to modify the apt repo and link it to jewel instead of infernalis, then run apt-get upgrade ?
[13:24] * salwasser (~Adium@c-76-118-229-231.hsd1.ma.comcast.net) Quit (Quit: Leaving.)
[13:27] * _mrp (~mrp@82.117.199.26) has joined #ceph
[13:27] * walcubi_ (~walcubi@p5795A981.dip0.t-ipconnect.de) has left #ceph
[13:28] <thoht> or should i use ceph-deploy install --release jewel mon1 mon2 mon3 ?
[13:30] <Hatsjoe> thoht: http://docs.ceph.com/docs/master/install/upgrading-ceph/
[13:31] <Hatsjoe> Even though on that doc page they talk about doing all of the nodes at once using ceph-deploy, I would advise against that and do them one by one by hand, just in case something does go wrong
[13:32] <thoht> Hatsjoe: i was on same page
[13:32] * kefu is now known as kefu|afk
[13:32] <thoht> so i guess i can run ceph-deploy install --release jewel mon1 mon2 mon3
[13:33] <Hatsjoe> Yes
[13:33] <thoht> it is a ceph cluster in production
[13:33] <thoht> so that s why i want to double check
[13:33] * kefu|afk (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:33] <Hatsjoe> :)
[13:33] <thoht> i see that i have to restart the ceph daemons after that
[13:34] <thoht> that means during a short time, mon1 will be jewel and the 2 other nodes will still be infernalis
[13:34] <thoht> i guess the cluster will still be healthy in this state ?
[13:35] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[13:37] * Hemanth (~hkumar_@103.228.221.141) Quit (Quit: Leaving)
[13:39] <Hatsjoe> In theory, the cluster should continue to work, but it is never recommended to run different versions on the same type of daemons
[13:39] <Hatsjoe> So you have to restart them one by one after the upgrade, never all at once
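A rolling-upgrade sketch for one trusty node, assuming the repo file lives at /etc/apt/sources.list.d/ceph.list and that upstart (standard on 14.04) manages the daemons; the monitor and osd ids are placeholders:

```shell
# Switch the repo from infernalis to jewel and upgrade the packages:
sed -i 's/debian-infernalis/debian-jewel/' /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get install -y ceph

# Restart daemons one at a time: monitors first, then OSDs,
# waiting for HEALTH_OK between each restart.
restart ceph-mon id=mon1   # hypothetical monitor id
restart ceph-osd id=0      # repeat for each OSD on the node
```

One jewel-specific caveat from the release notes: jewel daemons run as user "ceph" instead of root, so the upgrade also involves either chowning /var/lib/ceph to ceph:ceph or overriding the setuser behaviour.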
[13:42] * dennis_ (~dennis@2a00:801:7:1:1a03:73ff:fed6:ffec) Quit (Quit: Lämnar (Leaving))
[13:43] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[13:45] * kuku (~kuku@112.203.30.2) Quit (Remote host closed the connection)
[13:47] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[13:48] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[13:49] * Zyn (~JamesHarr@5AEAAA8HT.tor-irc.dnsbl.oftc.net) Quit ()
[13:49] * Mikko (~Mikko@dfs61tybqfzh7zj38wd6y-3.rev.dnainternet.fi) has joined #ceph
[13:49] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[13:50] * mhackett (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[13:53] <thoht> Hatsjoe: ok then maybe i should do a backup at first of all my VMs inside ceph :D
[13:54] <Hatsjoe> If that makes you feel better, sure, but I don't think that's necessary
[13:54] <Hatsjoe> You really have to do weird stuff to lose data
[13:55] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:01] <sep> what can prevent udev from mounting and starting osd on boot ? partition is listed in /dev/disk/by-partuuid/ ; single disk osd's are mounted and started but raid5 md devices are not. ; debian using hammer
[14:01] * Mikko (~Mikko@dfs61tybqfzh7zj38wd6y-3.rev.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[14:02] <sep> google searching has led me to try partprobe on the device and udev trigger. but no reaction
[14:02] <sep> i can mount and start manually. but i would like the automatic way to work
[14:03] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[14:03] <Hatsjoe> Do you have OSDs running on top of RAID5?
[14:04] <Be-El> sep: do you use gpt partitions with the correct partition type uuid?
[14:04] <sep> Be-El, yes. they are made by ceph-disk
[14:05] <Be-El> sep: are the partitions made by ceph-disk, too?
[14:06] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[14:08] <sep> Be-El, yes ; sgdisk -i 1 /dev/md/osd-1 ; gives me Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
[14:08] <Be-El> ah, /dev/md....i'm not sure whether the udev way works with md at all
[14:10] * Kioob1 (~Kioob@LMontsouris-656-1-1-206.w80-12.abo.wanadoo.fr) has joined #ceph
[14:13] * Racpatel (~Racpatel@c-69-248-7-12.hsd1.nj.comcast.net) has joined #ceph
[14:15] * F|1nt (~F|1nt@host37-211.lan-isdn.imaginet.fr) Quit (Ping timeout: 480 seconds)
[14:15] <sep> i was 99% sure i rebooted these osd nodes with no issues on debian and ceph 0.94.4, but on 0.94.7 i have to mount manually.
[14:15] * F|1nt (~F|1nt@host37-211.lan-isdn.imaginet.fr) has joined #ceph
[14:16] * madkiss (~madkiss@178.165.131.90.wireless.dyn.drei.com) has joined #ceph
[14:20] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:26] * _mrp_ (~mrp@82.117.199.26) has joined #ceph
[14:27] <sep> running ceph-disk activate-all does not try the md devices at all
[14:28] <sep> naturally. since it's not listed in /dev/disk/by-parttypeuuid/
[14:30] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[14:30] <sep> and it's perhaps not listed since 60-ceph-partuuid-workaround.rules skip md* devices
[14:32] <sep> Hatsjoe, yes i use 6 x 5-disk raid5 software osd's since this old hardware does not have enough ram to run as many osd's
[14:32] <sep> i can reliably run 16-18 osd's but the machine have slot for 36 drives.
[14:32] * _mrp (~mrp@82.117.199.26) Quit (Ping timeout: 480 seconds)
[14:33] <sep> so with 6 x 5-disk osd's i can run 6 easily without having the OOM killer ruin everything
[14:33] <sep> the remaining slots are for journal SSD's and a few spare disks
[14:35] * kuku (~kuku@112.203.30.2) has joined #ceph
[14:36] <Hatsjoe> sep, how much ram does the OSD node have?
[14:36] <sep> ceph-disk activate /dev/md126p1 does mount and start osd with no issue.
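Since manual activation works, one hedged boot-time workaround is to loop over the md partitions from a local startup hook (e.g. rc.local or an equivalent init script), given that the stock udev rules skip md* devices; the /dev/md*p1 glob is an assumption about this system's partition naming:

```shell
# Activate md-backed OSDs that the ceph udev rules skipped at boot:
for part in /dev/md*p1; do
    ceph-disk activate "$part" || true   # tolerate non-OSD partitions
done
```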
[14:37] <sep> 32gb and it's the max for this motherboard :(
[14:37] * kuku (~kuku@112.203.30.2) Quit (Read error: Connection reset by peer)
[14:37] <Hatsjoe> 32GB is enough for 36 OSD daemons
[14:37] * kuku (~kuku@112.203.30.2) has joined #ceph
[14:37] <sep> not when the disks are 3 TB
[14:37] <sep> should ideally have 1GB per TB osd
[14:38] <Hatsjoe> During recovery, yes, but having less only slows down recovery, stuff will still work
[14:38] <sep> when i came to 20 osds i ran into OOM killer with high io
[14:38] <Hatsjoe> Ah okay
[14:39] <sep> with 18 OOM killer was killing osd's when i was in a recovery situation
[14:39] <sep> that does make the recovery even worse....
[14:39] <Hatsjoe> True
[14:39] <sep> osd's was dropping on every node
[14:39] <darkfaded> ah, then you had too little in all nodes :/
[14:39] <sep> 6 identical nodes
[14:40] <darkfaded> like people with zfs who can't zpool import after adding disks :)
[14:40] <Hatsjoe> But then you must have a very old board? Since all server grade boards support way more than 32GB since a long time now
[14:40] <darkfaded> Hatsjoe: e3 xeon only since ~2 years
[14:40] <sep> exactlyu
[14:42] <Hatsjoe> Ah didn't know that
[14:42] <sep> i am using 6 old machines to lab ceph to see if it's something we want to go for. using 32gb ram and 3TB failacuda drives has been a challenge, but ceph has not lost data for me yet.
[14:44] <sep> http://imgur.com/a/jYnNO
[14:45] <T1w> 1GB ram per 1TB data is legacy by now
[14:45] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[14:45] <T1w> during recovery it can go as high as 2GB per 1TB
[14:45] <sep> top image is one without raid5 sets have about 17 osd's
[14:45] <T1w> especially with erasure coding
[14:45] <sep> bottom one is 6 raid5 sets
[14:46] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[14:46] <sep> T1w, does the OOM killer trigger if it has less ? or does the recovery slow down ?
[14:47] <sep> there is a huuuge difference :)
[14:47] <T1w> sep: afaik it slows down, but I can easily imagine OOM errors in addition
[14:48] <sep> T1w, so upgrading to jewel will be a bad idea ??
[14:49] <T1w> it depends on how bad a recovery scenario you get into
[14:49] <T1w> ram is cheap
[14:49] <T1w> just add as much as you can
[14:50] <T1w> whats not used during recovery is used as IO cache locally on the node
[14:50] <T1w> (by the OS)
[14:51] <sep> mainboards are maxed out :)
[14:51] <T1w> with 10+ OSDs it can probably become an issue yes..
[14:52] <T1w> note that others have easily run 30+ OSDs in a single node
[14:52] <T1w> with enough cores and enough ram
[14:52] <T1w> it's entirely possible
[14:52] <T1w> but network IO might become a bottleneck during recovery
[14:53] <T1w> (it doesn't take that many faulty OSDs recovering to saturate a 10g interface)
[14:54] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[14:54] * kuku (~kuku@112.203.30.2) Quit (Read error: Connection reset by peer)
[14:55] * kuku (~kuku@112.203.30.2) has joined #ceph
[14:55] <T1w> afk
[14:57] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[15:00] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[15:02] * Mikko (~Mikko@dfs61tyczbjqg9t13zfjy-3.rev.dnainternet.fi) has joined #ceph
[15:10] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[15:10] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:12] * Mikko (~Mikko@dfs61tyczbjqg9t13zfjy-3.rev.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[15:21] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:21] * i_m (~ivan.miro@31.173.101.240) Quit (Read error: Connection reset by peer)
[15:22] <sep> T1w, of course if ceph is something i get the PHBs to go for, we would invest in more suitable hardware following the recommended hardware requirements
[15:23] * i_m (~ivan.miro@31.173.101.242) has joined #ceph
[15:25] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[15:27] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[15:29] * bara (~bara@213.175.37.12) has joined #ceph
[15:32] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:32] * raphaelsc (~raphaelsc@177.19.29.72) has joined #ceph
[15:32] <sep> completely different question... there is some leftover space on the ssd journals. ; do you leave it empty to improve the drive's wear leveling, or is it more sane to use the space for an ssd cache tier ?
[15:33] <T1w> leave it
[15:33] <T1w> a journal device should not have the additional IO load of a cache tier
[15:33] <sep> thanks that's what i assumed
[15:34] <T1w> otherwise you penalize all OSDs that the journal device journals for
[15:34] <T1w> I've got 1 SSD in a node - most is used for OS
[15:35] <T1w> and then I've got 2 journal partitions for 2 OSDs in those nodes
[15:35] <T1w> more OSDs in a node would require separate OS and journal devices
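For context on why a journal partition leaves so much of an SSD unused, the Ceph docs give a sizing rule of thumb: journal size >= 2 * (expected disk throughput * filestore max sync interval). A tiny sketch (the throughput value is an assumed example):

```python
def journal_size_mb(disk_mb_s, filestore_max_sync_interval_s=5):
    """Ceph docs' rule of thumb for FileStore journal sizing:
    journal >= 2 * expected throughput * filestore max sync interval.

    disk_mb_s is the sustained write rate of the backing OSD disk
    (an assumed figure here, e.g. ~120 MB/s for a spinner)."""
    return 2 * disk_mb_s * filestore_max_sync_interval_s
```

So a ~120 MB/s spinner with the default 5 s sync interval only needs on the order of a gigabyte of journal, which is why most of the SSD is left over.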
[15:35] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[15:37] <sep> is there a huge advantage to put the os on a ssd ?
[15:37] <T1w> no, but otherwise I had 2 SSDs that had a total of 20GB usage.. ;)
[15:38] <T1w> it's a proof-of-concept cluster based on 3 1U nodes with room for 4 3,5" drives in each
[15:39] <sep> similar to my proof of concept
[15:39] <T1w> .. and since a lost journal device equals a lost OSD I've got 2 mirrored (software based mirroring with md) SSDs for OS and journal
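The mirrored OS-plus-journal layout T1w describes can be sketched with md like this (device and partition names are hypothetical; whether the extra SSD write amplification of mirroring journals is worth it is a judgment call):

```shell
# Hypothetical layout: sda2/sdb2 are matching journal-sized partitions on
# the two SSDs.  RAID1 them so a single SSD failure does not lose the
# journal -- and with it every OSD that journal serves.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Then carve one partition per OSD journal out of /dev/md1 and point each
# OSD's journal symlink (/var/lib/ceph/osd/ceph-$id/journal) at its partition.
```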
[15:39] <sep> i took 6 old servers that were taken out of production. 36 3tb drives in each
[15:39] * yanzheng (~zhyan@125.70.21.51) Quit (Quit: This computer has gone to sleep)
[15:39] <T1w> .. it's been running production for the last 8+ months without incident
[15:41] <sep> those machines had the failacuda seagate 3tb drives that have 45% failure rate. so i have probably lost 100 drives since i started it.
[15:48] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:49] * salwasser (~Adium@72.246.3.14) has joined #ceph
[15:54] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[15:54] * kuku (~kuku@112.203.30.2) Quit (Read error: Connection reset by peer)
[15:54] * kuku (~kuku@112.203.30.2) has joined #ceph
[15:55] * neurodrone (~neurodron@158.106.193.162) has joined #ceph
[15:57] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) has joined #ceph
[15:58] * thoht (~oftc-webi@per34-2-78-243-229-39.fbx.proxad.net) Quit (Quit: Page closed)
[15:59] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[16:00] * rdias (~rdias@2001:8a0:749a:d01:e50b:a58c:352a:bce9) Quit (Ping timeout: 480 seconds)
[16:00] * rdias (~rdias@2001:8a0:749a:d01:5d38:f3c:2bb:24d8) has joined #ceph
[16:01] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[16:02] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[16:06] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[16:07] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) has joined #ceph
[16:07] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) Quit ()
[16:08] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[16:09] * srk (~Siva@32.97.110.52) has joined #ceph
[16:10] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:11] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) has joined #ceph
[16:12] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[16:14] * kefu (~kefu@114.92.101.38) has joined #ceph
[16:14] <wes_dillingham> I noticed in some testing of rebalancing that my guest vms (100% rbd workload) saw quite a large spike in read/write wait times immediately after a rebalance finished. I'm curious as to what this might be. Is there a rush to do various maintenance tasks on OSDs immediately after a rebalance completes that may be causing this? The vms did absolutely swimmingly throughout the actual time the cluster was rebalancing, however.
[16:17] <Be-El> wes_dillingham: just a wild guess......maybe some primary osd association for some of the involved placement groups changed after successful rebalancing, forcing the clients (librbd) to establish new tcp connections
[16:17] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[16:18] <wes_dillingham> Thats a good theory Be-El
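Be-El's theory is checkable: capture the acting primary of each PG before and after the rebalance (e.g. from `ceph pg dump pgs_brief`) and diff the two maps. A minimal sketch, assuming the maps have already been parsed into dicts:

```python
def changed_primaries(before, after):
    """Given {pgid: primary_osd} maps captured before and after a
    rebalance, return the PGs whose primary moved.  Clients (librbd)
    must open new sessions to those OSDs' new primaries, which could
    account for a latency spike right after recovery finishes."""
    return {pg for pg, osd in before.items() if after.get(pg) != osd}
```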
[16:18] * dneary (~dneary@207.236.147.202) Quit (Ping timeout: 480 seconds)
[16:21] * Mikko (~Mikko@109-108-30-118.bb.dnainternet.fi) Quit (Quit: This computer has gone to sleep)
[16:24] <T1w> sep: eeeeeek
[16:24] <T1w> We're running 4tb samsung spinpoint drives - so far so good
[16:25] <T1w> (cheapest available some months ago)
[16:27] * salwasser (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[16:35] * salwasser (~Adium@72.246.3.14) has joined #ceph
[16:40] * rdias (~rdias@2001:8a0:749a:d01:5d38:f3c:2bb:24d8) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * i_m (~ivan.miro@31.173.101.242) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * madkiss (~madkiss@178.165.131.90.wireless.dyn.drei.com) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * F|1nt (~F|1nt@host37-211.lan-isdn.imaginet.fr) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * Kioob1 (~Kioob@LMontsouris-656-1-1-206.w80-12.abo.wanadoo.fr) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * derjohn_mobi (~aj@2001:6f8:1337:0:cd1a:84dc:9a47:b830) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * T1w (~jens@node3.survey-it.dk) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:fdc8:c4d5:24bd:956f) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * dlan (~dennis@116.228.88.131) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * bviktor (~bviktor@213.16.80.50) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * ggarg_ (~Gaurav@x2f2bb39.dyn.telefonica.de) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * oms101 (~oms101@p20030057EA4F7700C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * baojg (~baojg@61.135.155.34) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * bjozet (~bjozet@82-183-17-144.customers.ownit.se) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * wgao (~wgao@106.120.101.38) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * GeoTracer (~Geoffrey@41.77.153.99) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * T1 (~the_one@5.186.54.143) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * dosaboy (~dosaboy@33.93.189.91.lcy-02.canonistack.canonical.com) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * jprins (~jprins@bbnat.betterbe.com) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * bla_ (~b.laessig@chimeria.ext.pengutronix.de) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * sw3 (sweaung@2400:6180:0:d0::66:100f) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * coreping (~Michael_G@n1.coreping.org) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * Nebraskka (~Nebraskka@178.62.130.190) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * jpierre03 (~jpierre03@voyage.prunetwork.fr) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * wogri (~wolf@nix.wogri.at) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * kwork (~quassel@bnc.ee) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * seosepa (~sepa@aperture.GLaDOS.info) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * fred`` (fred@earthli.ng) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * alexxy (~alexxy@biod.pnpi.spb.ru) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * Heebie (~thebert@dub-bdtn-office-r1.net.digiweb.ie) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * [arx] (~arx@six.happyforever.com) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * sileht (~sileht@gizmo.sileht.net) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * garphy`aw (~garphy@frank.zone84.net) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * tobiash (~quassel@212.118.206.70) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * hinrikus (~Rikus@db1jc.net) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * sto (~sto@121.red-2-139-229.staticip.rima-tde.net) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * mivaho (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * kiranos (~quassel@109.74.11.233) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * Infected (~Infected@peon.lantrek.fi) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * robbat2 (~robbat2@178.63.9.89) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * Bosse (~bosse@erebus.klykken.com) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * liiwi (liiwi@idle.fi) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * towo (~towo@towo.netrep.oftc.net) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * remix_tj (~remix_tj@bonatti.remixtj.net) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * aiicore (~aiicore@s30.linuxpl.com) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * etienneme (~arch@69.ip-167-114-227.eu) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * CustosLimen (~CustosLim@2001:41d0:1:ff97::1) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * HauM1 (~HauM1@login.univie.ac.at) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * DrWhax (~DrWhax_@000199fa.user.oftc.net) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * verdurin (~verdurin@2001:8b0:281:78ec:e2cb:4eff:fe01:f767) Quit (magnet.oftc.net helix.oftc.net)
[16:40] * foxxx0 (~fox@mail.nano-srv.net) Quit (magnet.oftc.net helix.oftc.net)
[16:41] * swami1 (~swami@49.38.3.154) Quit (Quit: Leaving.)
[16:44] * ggarg_ (~Gaurav@x2f2bb39.dyn.telefonica.de) has joined #ceph
[16:44] * rdias (~rdias@2001:8a0:749a:d01:5d38:f3c:2bb:24d8) has joined #ceph
[16:44] * i_m (~ivan.miro@31.173.101.242) has joined #ceph
[16:44] * madkiss (~madkiss@178.165.131.90.wireless.dyn.drei.com) has joined #ceph
[16:44] * F|1nt (~F|1nt@host37-211.lan-isdn.imaginet.fr) has joined #ceph
[16:44] * Kioob1 (~Kioob@LMontsouris-656-1-1-206.w80-12.abo.wanadoo.fr) has joined #ceph
[16:44] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[16:44] * derjohn_mobi (~aj@2001:6f8:1337:0:cd1a:84dc:9a47:b830) has joined #ceph
[16:44] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[16:44] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:fdc8:c4d5:24bd:956f) has joined #ceph
[16:44] * dlan (~dennis@116.228.88.131) has joined #ceph
[16:44] * bviktor (~bviktor@213.16.80.50) has joined #ceph
[16:44] * oms101 (~oms101@p20030057EA4F7700C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[16:44] * baojg (~baojg@61.135.155.34) has joined #ceph
[16:44] * bjozet (~bjozet@82-183-17-144.customers.ownit.se) has joined #ceph
[16:44] * wgao (~wgao@106.120.101.38) has joined #ceph
[16:44] * GeoTracer (~Geoffrey@41.77.153.99) has joined #ceph
[16:44] * T1 (~the_one@5.186.54.143) has joined #ceph
[16:44] * dosaboy (~dosaboy@33.93.189.91.lcy-02.canonistack.canonical.com) has joined #ceph
[16:44] * jprins (~jprins@bbnat.betterbe.com) has joined #ceph
[16:44] * bla_ (~b.laessig@chimeria.ext.pengutronix.de) has joined #ceph
[16:44] * towo (~towo@towo.netrep.oftc.net) has joined #ceph
[16:44] * sw3 (sweaung@2400:6180:0:d0::66:100f) has joined #ceph
[16:44] * coreping (~Michael_G@n1.coreping.org) has joined #ceph
[16:44] * Nebraskka (~Nebraskka@178.62.130.190) has joined #ceph
[16:44] * jpierre03 (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[16:44] * wogri (~wolf@nix.wogri.at) has joined #ceph
[16:44] * kwork (~quassel@bnc.ee) has joined #ceph
[16:44] * seosepa (~sepa@aperture.GLaDOS.info) has joined #ceph
[16:44] * fred`` (fred@earthli.ng) has joined #ceph
[16:44] * alexxy (~alexxy@biod.pnpi.spb.ru) has joined #ceph
[16:44] * Heebie (~thebert@dub-bdtn-office-r1.net.digiweb.ie) has joined #ceph
[16:44] * [arx] (~arx@six.happyforever.com) has joined #ceph
[16:44] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[16:44] * DrWhax (~DrWhax_@000199fa.user.oftc.net) has joined #ceph
[16:44] * garphy`aw (~garphy@frank.zone84.net) has joined #ceph
[16:44] * tobiash (~quassel@212.118.206.70) has joined #ceph
[16:44] * hinrikus (~Rikus@db1jc.net) has joined #ceph
[16:44] * sto (~sto@121.red-2-139-229.staticip.rima-tde.net) has joined #ceph
[16:44] * mivaho (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) has joined #ceph
[16:44] * kiranos (~quassel@109.74.11.233) has joined #ceph
[16:44] * Infected (~Infected@peon.lantrek.fi) has joined #ceph
[16:44] * robbat2 (~robbat2@178.63.9.89) has joined #ceph
[16:44] * Bosse (~bosse@erebus.klykken.com) has joined #ceph
[16:44] * liiwi (liiwi@idle.fi) has joined #ceph
[16:44] * remix_tj (~remix_tj@bonatti.remixtj.net) has joined #ceph
[16:44] * aiicore (~aiicore@s30.linuxpl.com) has joined #ceph
[16:44] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[16:44] * etienneme (~arch@69.ip-167-114-227.eu) has joined #ceph
[16:44] * CustosLimen (~CustosLim@2001:41d0:1:ff97::1) has joined #ceph
[16:44] * HauM1 (~HauM1@login.univie.ac.at) has joined #ceph
[16:44] * verdurin (~verdurin@2001:8b0:281:78ec:e2cb:4eff:fe01:f767) has joined #ceph
[16:44] * foxxx0 (~fox@mail.nano-srv.net) has joined #ceph
[16:48] * kefu_ (~kefu@114.92.101.38) has joined #ceph
[16:52] * mkfort (~mkfort@mkfort.com) has joined #ceph
[16:54] * kefu (~kefu@114.92.101.38) Quit (Ping timeout: 480 seconds)
[16:56] * thomnico (~thomnico@2a01:e35:8b41:120:58dd:2ae8:e949:4fdb) has joined #ceph
[16:56] * xarses (~xarses@64.124.158.32) has joined #ceph
[17:03] * sudocat1 (~dibarra@192.185.1.20) has joined #ceph
[17:03] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:07] * kefu_ (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:07] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[17:09] * karnan (~karnan@121.244.87.117) has joined #ceph
[17:10] <snelly> Hi. I'm having trouble bringing up a MDS in my new cluster. Running 'ceph-deploy mds create my-mds-node', it seems to be hanging here:
[17:10] * T1w (~jens@node3.survey-it.dk) Quit (Remote host closed the connection)
[17:10] <snelly> [ceph-mds-01][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph-mds-01 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph-mds-01/keyring
[17:10] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[17:11] <snelly> I did notice this in the logs on the mds server:
[17:11] <snelly> 2016-08-25 15:06:15.489830 7f4eb4af9200 -1 auth: unable to find a keyring on /var/lib/ceph/mds/ceph-ceph-mds-01/keyring: (2) No such file or directory
[17:11] <snelly> 2016-08-25 15:06:15.489856 7f4eb4af9200 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[17:12] * krypto (~krypto@G68-90-105-197.sbcis.sbc.com) Quit (Read error: Connection reset by peer)
[17:13] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[17:13] * krypto (~krypto@G68-90-105-197.sbcis.sbc.com) has joined #ceph
[17:14] <snelly> cluster is otherwise healthy.
[17:15] * ade (~abradshaw@p4FF7AA05.dip0.t-ipconnect.de) Quit (Quit: Too sexy for his shirt)
[17:15] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:18] <snelly> ehhhh firewall problem, i think
[17:20] * kefu (~kefu@114.92.101.38) has joined #ceph
[17:20] * krypto (~krypto@G68-90-105-197.sbcis.sbc.com) Quit (Read error: Connection reset by peer)
[17:20] * salwasser (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[17:21] * krypto (~krypto@106.51.26.124) has joined #ceph
[17:21] <icey> what cephx permissions does a ceph mds server require?
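icey: the caps an MDS needs are visible in the ceph-deploy INFO line snelly pasted above. A sketch of creating the key by hand, using the same caps and paths from that paste (hostname is from snelly's cluster, adjust to yours):

```shell
# Create (or fetch) the MDS key with the caps ceph-deploy requests:
# rwx on OSDs, full mds caps, and the mds monitor profile.
ceph auth get-or-create mds.ceph-mds-01 \
    osd 'allow rwx' \
    mds 'allow' \
    mon 'allow profile mds' \
    -o /var/lib/ceph/mds/ceph-ceph-mds-01/keyring
```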
[17:25] * mattch (~mattch@w5430.see.ed.ac.uk) Quit (Quit: Leaving.)
[17:25] <snelly> aww yeah, it was an iptables issue
[17:29] * mattch (~mattch@w5430.see.ed.ac.uk) has joined #ceph
[17:31] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[17:31] * i_m (~ivan.miro@31.173.101.242) Quit (Ping timeout: 480 seconds)
[17:42] * ggarg_ is now known as ggarg
[17:48] * rotbeard (~redbeard@2a02:908:df13:bb00:5877:6722:3a78:b5f7) has joined #ceph
[17:54] * rotbeard (~redbeard@2a02:908:df13:bb00:5877:6722:3a78:b5f7) Quit (Quit: Leaving)
[17:54] * rotbeard (~redbeard@2a02:908:df13:bb00:5877:6722:3a78:b5f7) has joined #ceph
[17:58] * F|1nt (~F|1nt@host37-211.lan-isdn.imaginet.fr) Quit (Quit: Be back later ...)
[17:59] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:01] * sudocat1 (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[18:01] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:07] * haplo37 (~haplo37@107.190.37.90) has joined #ceph
[18:10] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[18:12] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[18:20] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[18:20] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:26] * sudocat (~dibarra@192.185.1.20) Quit (Read error: Connection reset by peer)
[18:26] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:29] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) has joined #ceph
[18:31] * Unai (~Adium@50-115-70-150.static-ip.telepacific.net) has joined #ceph
[18:32] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[18:36] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:36] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Ping timeout: 480 seconds)
[18:41] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[18:45] * sudocat (~dibarra@192.185.1.20) Quit (Remote host closed the connection)
[18:46] * mattch (~mattch@w5430.see.ed.ac.uk) Quit (Ping timeout: 480 seconds)
[18:47] * mhack (~mhack@nat-pool-bos-u.redhat.com) has joined #ceph
[18:47] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[18:47] * karnan (~karnan@2405:204:5104:9374:3602:86ff:fe56:55ae) has joined #ceph
[18:49] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:54] * mhackett (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[18:56] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:57] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[19:00] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) has joined #ceph
[19:01] * mhack (~mhack@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[19:04] * dan__ (~Daniel@2a00:1ee0:3:1337:4d98:7b7a:4320:b1a2) Quit (Quit: Leaving)
[19:06] * kuku (~kuku@112.203.30.2) Quit (Remote host closed the connection)
[19:06] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:07] * rotbeard (~redbeard@2a02:908:df13:bb00:5877:6722:3a78:b5f7) Quit (Quit: Leaving)
[19:09] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has left #ceph
[19:11] * thomnico (~thomnico@2a01:e35:8b41:120:58dd:2ae8:e949:4fdb) Quit (Quit: Ex-Chat)
[19:16] * i_m (~ivan.miro@88.206.123.152) has joined #ceph
[19:18] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph
[19:25] * salwasser (~Adium@2601:197:101:5cc1:d1c9:5739:fe90:f1b8) has joined #ceph
[19:27] * oliveiradan (~doliveira@137.65.133.10) Quit (Quit: Leaving)
[19:28] * mykola (~Mikolaj@91.245.79.118) has joined #ceph
[19:29] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph
[19:34] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[19:35] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[19:37] * Kioob1 (~Kioob@LMontsouris-656-1-1-206.w80-12.abo.wanadoo.fr) Quit (Quit: Leaving.)
[19:40] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:fdc8:c4d5:24bd:956f) Quit (Ping timeout: 480 seconds)
[19:40] * Unai1 (~Adium@2604:5500:1b:5e2:9c5:49e1:aca7:60c3) has joined #ceph
[19:41] * Unai (~Adium@50-115-70-150.static-ip.telepacific.net) Quit (Ping timeout: 480 seconds)
[19:46] * Jeffrey4l__ (~Jeffrey@110.252.55.17) Quit (Ping timeout: 480 seconds)
[19:49] * doppelgrau_ (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[19:49] * karnan (~karnan@2405:204:5104:9374:3602:86ff:fe56:55ae) Quit (Ping timeout: 480 seconds)
[19:49] * efirs (~firs@98.207.153.155) Quit (Ping timeout: 480 seconds)
[19:51] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[19:51] * doppelgrau_ is now known as doppelgrau
[19:59] * Unai (~Adium@208.80.71.24) has joined #ceph
[20:02] * _mrp_ (~mrp@82.117.199.26) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:05] * Unai1 (~Adium@2604:5500:1b:5e2:9c5:49e1:aca7:60c3) Quit (Ping timeout: 480 seconds)
[20:06] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:08] * derjohn_mobi (~aj@2001:6f8:1337:0:cd1a:84dc:9a47:b830) Quit (Ping timeout: 480 seconds)
[20:10] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:20] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[20:23] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[20:25] * bviktor (~bviktor@213.16.80.50) Quit (Ping timeout: 480 seconds)
[20:29] * georgem (~Adium@206.108.127.16) has joined #ceph
[20:30] * salwasser (~Adium@2601:197:101:5cc1:d1c9:5739:fe90:f1b8) Quit (Quit: Leaving.)
[20:32] * kefu (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:32] * masteroman (~ivan@93-139-205-155.adsl.net.t-com.hr) has joined #ceph
[20:36] * wjw-freebsd3 (~wjw@smtp.digiware.nl) has joined #ceph
[20:38] * masterom1 (~ivan@93-139-159-137.adsl.net.t-com.hr) has joined #ceph
[20:38] * masterom1 (~ivan@93-139-159-137.adsl.net.t-com.hr) Quit ()
[20:44] * masteroman (~ivan@93-139-205-155.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[20:45] * krypto (~krypto@106.51.26.124) Quit (Quit: Leaving)
[20:46] * Nexus (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) has joined #ceph
[20:48] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[20:52] * Nexus is now known as drnexus
[20:53] * drnexus (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) Quit (Quit: Leaving)
[20:53] * masteroman (~ivan@93-139-159-137.adsl.net.t-com.hr) has joined #ceph
[20:53] * Nexus (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) has joined #ceph
[20:54] * thomnico (~thomnico@2a01:e35:8b41:120:58dd:2ae8:e949:4fdb) has joined #ceph
[20:57] * thomnico (~thomnico@2a01:e35:8b41:120:58dd:2ae8:e949:4fdb) Quit ()
[20:58] * masteroman (~ivan@93-139-159-137.adsl.net.t-com.hr) Quit (Quit: WeeChat 1.5)
[20:59] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[21:03] * bjozet (~bjozet@82-183-17-144.customers.ownit.se) Quit (Remote host closed the connection)
[21:04] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[21:07] * derjohn_mobi (~aj@x4db292ca.dyn.telefonica.de) has joined #ceph
[21:10] * Neon (~Yopi@108.61.122.225) has joined #ceph
[21:18] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[21:20] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[21:23] * mykola (~Mikolaj@91.245.79.118) Quit (Quit: away)
[21:25] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[21:26] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[21:27] * Unai (~Adium@208.80.71.24) has joined #ceph
[21:28] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[21:30] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[21:30] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[21:31] * Unai (~Adium@208.80.71.24) has joined #ceph
[21:31] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[21:32] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[21:35] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[21:35] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[21:40] * Neon (~Yopi@5AEAAA83V.tor-irc.dnsbl.oftc.net) Quit ()
[21:42] * Unai (~Adium@208.80.71.24) has joined #ceph
[21:42] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[21:44] * Aeso (~aesospade@aesospadez.com) Quit (Quit: Leaving)
[21:46] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[21:49] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[21:52] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[21:52] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[21:53] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[21:54] * Nexus (~cmorandin@boc06-4-78-216-15-170.fbx.proxad.net) Quit (Quit: Leaving)
[21:55] * singler (~singler@zeta.kirneh.eu) has joined #ceph
[21:56] * Aeso (~aesospade@aesospadez.com) has joined #ceph
[21:57] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[21:57] <singler> hey guys, can someone check if they are having problems uploading objects containing "@" in the name to rgw via s3 interface on 10.2.2? It fails for me with awscli and s3cmd with SignatureDoesNotMatch
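One plausible cause of singler's SignatureDoesNotMatch (an assumption, not a confirmed diagnosis): AWS-style signing requires client and server to build byte-identical canonical requests, and `@` is exactly the kind of character clients disagree about percent-encoding. The two encodings that can end up in the string-to-sign:

```python
from urllib.parse import quote

# Strict RFC 3986 encoding: '@' becomes %40.
encoded = quote("backup@2016-08-25.tar", safe="")
# Relaxed encoding: '@' passed through literally.
relaxed = quote("backup@2016-08-25.tar", safe="@")
```

If the client signs one form and RGW canonicalizes the other, the signatures diverge even though the request is otherwise fine; comparing both sides' canonical strings in debug logs would confirm or rule this out.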
[21:57] * Unai (~Adium@208.80.71.24) has joined #ceph
[22:01] * rendar (~I@host212-182-dynamic.1-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[22:01] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[22:05] * Aeso (~aesospade@aesospadez.com) Quit (Quit: Leaving)
[22:05] * Aeso (~aesospade@aesospadez.com) has joined #ceph
[22:07] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) has joined #ceph
[22:16] * Maza (~ZombieTre@46.166.188.230) has joined #ceph
[22:16] * _mrp (~mrp@178.254.148.42) has joined #ceph
[22:17] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[22:23] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[22:27] * rendar (~I@host212-182-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[22:28] * srk (~Siva@32.97.110.52) Quit (Ping timeout: 480 seconds)
[22:29] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) has joined #ceph
[22:35] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Quit: wes_dillingham)
[22:37] * srk (~Siva@32.97.110.52) has joined #ceph
[22:39] * sudocat1 (~dibarra@192.185.1.20) has joined #ceph
[22:40] * sudocat2 (~dibarra@192.185.1.20) has joined #ceph
[22:41] * i_m (~ivan.miro@88.206.123.152) Quit (Ping timeout: 480 seconds)
[22:42] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[22:43] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[22:46] * Maza (~ZombieTre@9J5AAABCY.tor-irc.dnsbl.oftc.net) Quit ()
[22:47] * sudocat1 (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[22:55] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[22:55] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[22:56] * davidzlap (~Adium@2605:e000:1313:8003:c8b0:dfac:7df5:6bde) has joined #ceph
[22:59] * Unai (~Adium@208.80.71.24) has joined #ceph
[22:59] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:02] <[arx]> singler: http://sprunge.us/fUVX
[23:04] * Racpatel (~Racpatel@c-69-248-7-12.hsd1.nj.comcast.net) Quit (Ping timeout: 480 seconds)
[23:04] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[23:05] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:08] * Unai (~Adium@208.80.71.24) has joined #ceph
[23:08] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:10] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:10] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:10] * Unai (~Adium@208.80.71.24) has joined #ceph
[23:11] * haplo37 (~haplo37@107.190.37.90) Quit (Remote host closed the connection)
[23:12] * mhackett (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[23:14] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[23:14] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:17] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Quit: ...)
[23:19] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[23:24] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[23:25] * salwasser (~Adium@c-76-118-229-231.hsd1.ma.comcast.net) has joined #ceph
[23:26] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) has joined #ceph
[23:31] * Unai (~Adium@208.80.71.24) has joined #ceph
[23:31] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:33] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[23:33] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:35] * Unai (~Adium@208.80.71.24) has joined #ceph
[23:35] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:37] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[23:37] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:40] * Unai (~Adium@208.80.71.24) has joined #ceph
[23:40] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:41] * KindOne (kindone@h61.130.30.71.dynamic.ip.windstream.net) has joined #ceph
[23:42] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:42] * Unai (~Adium@208.80.71.24) has joined #ceph
[23:44] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[23:44] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:45] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[23:45] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[23:45] * ChanServ sets mode +o nhm
[23:47] * Unai (~Adium@208.80.71.24) has joined #ceph
[23:47] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:49] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[23:49] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:51] * Unai (~Adium@208.80.71.24) has joined #ceph
[23:51] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:53] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[23:53] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:55] * Unai (~Adium@208.80.71.24) has joined #ceph
[23:55] * Unai1 (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:58] * Unai1 (~Adium@208.80.71.24) has joined #ceph
[23:58] * Unai (~Adium@208.80.71.24) Quit (Read error: Connection reset by peer)
[23:59] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.