#ceph IRC Log

IRC Log for 2016-07-14

Timestamps are in GMT/BST.

[0:09] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[0:09] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[0:10] * _28_ria (~kvirc@opfr028.ru) Quit (Ping timeout: 480 seconds)
[0:11] * untoreh (~fra@151.45.246.178) Quit (Remote host closed the connection)
[0:16] * ZombieL (~Roy@61TAAAI6U.tor-irc.dnsbl.oftc.net) Quit ()
[0:24] * debian112 (~bcolbert@64.235.157.198) has joined #ceph
[0:24] * fsimonce (~simon@host238-75-dynamic.11-87-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:28] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:354c:9c84:7417:af25) has joined #ceph
[0:37] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:354c:9c84:7417:af25) Quit (Ping timeout: 480 seconds)
[0:42] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:354c:9c84:7417:af25) has joined #ceph
[0:44] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[0:45] * praveen (~praveen@122.172.49.124) has joined #ceph
[0:46] * allenmelon (~Catsceo@128.153.145.125) has joined #ceph
[0:46] * djldn (~oftc-webi@195.212.13.101) Quit (Quit: Page closed)
[0:50] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[0:53] * kuku (~kuku@119.93.91.136) has joined #ceph
[0:54] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:354c:9c84:7417:af25) Quit (Ping timeout: 480 seconds)
[1:01] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[1:06] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) has joined #ceph
[1:08] * blizzow (~jburns@50.243.148.102) Quit (Ping timeout: 480 seconds)
[1:09] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:12] * ronrib (~boswortr@45.32.242.135) has joined #ceph
[1:14] * joao (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[1:14] * ChanServ sets mode +o joao
[1:15] * noahw (~noahw@eduroam-169-233-233-213.ucsc.edu) Quit (Ping timeout: 480 seconds)
[1:16] * allenmelon (~Catsceo@9YSAAAKO2.tor-irc.dnsbl.oftc.net) Quit ()
[1:17] * oms101 (~oms101@p20030057EA083F00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:19] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[1:19] * jluis (~joao@8.184.114.89.rev.vodafone.pt) Quit (Ping timeout: 480 seconds)
[1:23] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[1:26] * oms101 (~oms101@p20030057EA2D3E00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:28] * krypto (~krypto@G68-121-13-109.sbcis.sbc.com) Quit (Quit: Leaving)
[1:32] * rendar (~I@host136-178-dynamic.19-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:35] * borei (~dan@216.13.217.230) Quit (Ping timeout: 480 seconds)
[1:37] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:42] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[1:46] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:46] * Silentkillzr (~Maza@91.109.29.120) has joined #ceph
[1:47] * kuku (~kuku@119.93.91.136) has joined #ceph
[1:47] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:354c:9c84:7417:af25) has joined #ceph
[1:50] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[1:51] * BrianA (~BrianA@fw-rw.shutterfly.com) Quit (Read error: Connection reset by peer)
[1:58] * squizzi_ (~squizzi@nat-pool-rdu-t.redhat.com) Quit (Ping timeout: 480 seconds)
[2:02] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[2:04] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[2:05] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[2:07] * KindOne (kindone@198.14.192.107) has joined #ceph
[2:16] * Silentkillzr (~Maza@7EXAAAGSP.tor-irc.dnsbl.oftc.net) Quit ()
[2:16] * JWilbur (~Rosenblut@tor1e1.privacyfoundation.ch) has joined #ceph
[2:20] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[2:30] * praveen (~praveen@122.172.49.124) Quit (Read error: Connection reset by peer)
[2:37] * yanzheng (~zhyan@125.70.22.67) has joined #ceph
[2:38] * reed (~reed@216.38.134.18) Quit (Ping timeout: 480 seconds)
[2:39] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[2:41] * praveen (~praveen@122.172.49.124) has joined #ceph
[2:46] * JWilbur (~Rosenblut@61TAAAJCG.tor-irc.dnsbl.oftc.net) Quit ()
[2:52] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[2:56] * debian112 (~bcolbert@64.235.157.198) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * david_ (~david@207.107.71.71) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * danieagle (~Daniel@177.94.139.84) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * skorgu (skorgu@pylon.skorgu.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * rkeene (1011@oc9.org) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * IvanJobs_ (~ivanjobs@103.50.11.146) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * carter (~carter@li98-136.members.linode.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * `10` (~10@69.169.91.14) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * verleihnix (~verleihni@195-202-198-60.dynamic.hispeed.ch) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * thadood (~thadood@slappy.thunderbutt.org) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * lurbs_ (user@uber.geek.nz) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * dis_ (~dis@nat-pool-brq-t.redhat.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Nats_ (~natscogs@114.31.195.238) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * LiftedKilt (LiftedKilt@is.in.the.madhacker.biz) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * joshd (~jdurgin@206.169.83.146) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * yehudasa_ (~yehudasa@206.169.83.146) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * ronrib (~boswortr@45.32.242.135) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * snelly (~cjs@sable.island.nu) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * EthanL (~lamberet@cce02cs4035-fa12-z.ams.hpecore.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * ntpttr (~ntpttr@192.55.54.42) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * davidzlap (~Adium@2605:e000:1313:8003:b514:3b88:b50c:701c) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * ron-slc (~Ron@173-165-129-118-utah.hfc.comcastbusiness.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * rburkholder (~overonthe@199.68.193.54) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * mjevans (~mjevans@li984-246.members.linode.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * penguinRaider (~KiKo@b1.07.01a8.ip4.static.sl-reverse.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Tene (~tene@173.13.139.236) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * dcwangmit01 (~dcwangmit@162-245.23-239.PUBLIC.monkeybrains.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * icey (~Chris@0001bbad.user.oftc.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * mlovell (~mlovell@69-195-66-94.unifiedlayer.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Kingrat (~shiny@2605:a000:161a:c0f6:7929:cd91:fc52:9020) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * diegows (~diegows@main.woitasen.com.ar) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * vbellur (~vijay@71.234.224.255) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Wahmed (~wahmed@206.174.203.195) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jamespd (~mucky@mucky.socket7.org) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Pintomatic (sid25118@id-25118.ealing.irccloud.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * getzburg (sid24913@id-24913.ealing.irccloud.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * aarontc (~aarontc@2001:470:e893::1:1) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * ktdreyer (~kdreyer@polyp.adiemus.org) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * iggy (~iggy@mail.vten.us) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * arbrandes (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jidar (~jidar@r2d2.fap.me) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * ceph-ircslackbot1 (~ceph-ircs@ds9536.dreamservers.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * gmmaha (~gmmaha@00021e7e.user.oftc.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * gregsfortytwo1 (~gregsfort@transit-86-181-132-209.redhat.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * JoeJulian (~JoeJulian@108.166.123.190) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * beardo (~beardo__@beardo.cc.lehigh.edu) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * sage (~quassel@2607:f298:5:101d:f816:3eff:fe21:1966) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * DeMiNe0 (~DeMiNe0@104.131.119.74) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * toastyde1th (~toast@pool-71-255-253-39.washdc.fios.verizon.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * elder_ (sid70526@id-70526.charlton.irccloud.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * scubacuda (sid109325@0001fbab.user.oftc.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * essjayhch (sid79416@id-79416.highgate.irccloud.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * devicenull (sid4013@id-4013.ealing.irccloud.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jnq (sid150909@id-150909.highgate.irccloud.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * scheuk (~scheuk@204.246.67.78) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * wgao (~wgao@106.120.101.38) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * SamYaple (~SamYaple@162.209.126.134) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * trociny (~mgolub@93.183.239.2) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * mdxi (~mdxi@li925-141.members.linode.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * skarn (skarn@0001f985.user.oftc.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * pasties (~pasties@00021c52.user.oftc.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * bassam (sid154933@id-154933.brockwell.irccloud.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * braderhart (sid124863@braderhart.user.oftc.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * scalability-junk (sid6422@2604:8300:100:200b:6667:2:0:1916) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * gtrott (sid78444@2604:8300:100:200b:6667:4:1:326c) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * ElNounch (sid150478@2604:8300:100:200b:6667:2:2:4bce) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Bosse (~bosse@2a03:b0c0:2:d0::e9:a001) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * skullone (~skullone@107.170.239.224) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * dmanchad (~dmanchad@66.187.233.206) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * espeer (~quassel@41.78.129.253) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Georgyo (~georgyo@2600:3c03::f03c:91ff:feae:505c) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * harbie (~notroot@2a01:4f8:211:2344:0:dead:beef:1) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * yebyen (~yebyen@129.21.49.95) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * motk (~motk@2600:3c00::f03c:91ff:fe98:51ee) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * lkoranda (~lkoranda@213.175.37.10) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Animazing (~Wut@94.242.217.235) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * benner (~benner@188.166.111.206) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Larsen (~andreas@2001:67c:578:2::15) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * med (~medberry@71.74.177.250) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * brians_ (~brianoftc@brian.by) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * react (~react@retard.io) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Gugge-47527 (gugge@92.246.2.105) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * marcan (marcan@marcansoft.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * mfa298 (~mfa298@krikkit.yapd.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * nathani (~nathani@2607:f2f8:ac88::) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Pies (~Pies@srv229.opcja.pl) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * andrewschoen (~andrewsch@192.237.167.184) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Hazelesque (~hazel@phobos.hazelesque.uk) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Randleman (~jesse@89.105.204.182) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * kingcu (~kingcu@kona.ridewithgps.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * s3an2 (~root@korn.s3an.me.uk) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jiffe (~jiffe@nsab.us) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * arthurh (~arthurh@38.101.34.128) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * qman__ (~rohroh@2600:3c00::f03c:91ff:fe69:92af) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * remix_tj (~remix_tj@bonatti.remixtj.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * GooseYArd (~GooseYArd@ec2-52-5-245-183.compute-1.amazonaws.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * sw3 (sweaung@2400:6180:0:d0::66:100f) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * ndru_ (~jawsome@104.236.94.35) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * _are__ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * kiranos_ (~quassel@109.74.11.233) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * zirpu (~zirpu@00013c46.user.oftc.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * rossdylan (~rossdylan@2605:6400:1:fed5:22:68c4:af80:cb6e) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * chutz (~chutz@rygel.linuxfreak.ca) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jklare (~jklare@185.27.181.36) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * JohnPreston78 (sid31393@ealing.irccloud.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Sketch (~Sketch@2604:180:2::a506:5c0d) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * mnaser (~mnaser@162.253.53.193) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * \ask (~ask@oz.develooper.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * masterpe (~masterpe@2a01:670:400::43) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * logan (~logan@63.143.60.136) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * destrudo (~destrudo@tomba.sonic.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * hughsaunders (~hughsaund@2001:4800:7817:101:1843:3f8a:80de:df65) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * cronburg_ (~cronburg@nat-pool-bos-t.redhat.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * goberle_ (~goberle@mid.ygg.tf) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * abhishekvrshny (~abhishekv@180.179.116.54) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * aiicore (~aiicore@s30.linuxpl.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Superdawg (~Superdawg@ec2-54-243-59-20.compute-1.amazonaws.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * folivora (~out@devnull.drwxr-xr-x.eu) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * dustinm` (~dustinm`@68.ip-149-56-14.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * zerick_ (~zerick@104.131.101.65) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Walex (~Walex@72.249.182.114) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * KindOne (kindone@0001a7db.user.oftc.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * acctor (~acctor@208.46.223.218) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * JPGainsborough (~jpg@71.122.174.182) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * shaunm (~shaunm@74.83.215.100) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * neurodrone (~neurodron@158.106.193.162) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * dynamicudpate (~overonthe@199.68.193.62) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Racpatel (~Racpatel@2601:87:0:24af::4c8f) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * niknakpaddywak (~xander.ni@outbound.lax.demandmedia.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jmn (~jmn@nat-pool-bos-t.redhat.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jlayton (~jlayton@cpe-2606-A000-1125-405B-C5-7FF-FE41-3227.dyn6.twc.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * shaon (~shaon@shaon.me) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * rektide (~rektide@eldergods.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Kruge (~Anus@198.211.99.93) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * evilrob (~evilrob@2600:3c00::f03c:91ff:fedf:1d3d) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * markl (~mark@knm.org) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Guest2759 (~herrsergi@ec2-107-21-210-136.compute-1.amazonaws.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * oliveiradan (~doliveira@137.65.133.10) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * badone (~badone@66.187.239.16) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * TiCPU (~owrt@c216.218.54-96.clta.globetrotter.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * marco208 (~root@159.253.7.204) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Psi-Jack (~psi-jack@mx.linux-help.org) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * DrewBeer (~DrewBeer@216.152.240.203) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * Aeso (~aesospade@aesospadez.com) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * shubjero (~shubjero@107.155.107.246) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * spgriffinjr (~spgriffin@66.46.246.206) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * jackhill (~jackhill@bog.hcoop.net) Quit (magnet.oftc.net charon.oftc.net)
[2:56] * mischief (~mischief@iota.offblast.org) Quit (magnet.oftc.net charon.oftc.net)
[2:58] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[2:58] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[2:58] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[2:58] * ronrib (~boswortr@45.32.242.135) has joined #ceph
[2:58] * snelly (~cjs@sable.island.nu) has joined #ceph
[2:58] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[2:58] * yehudasa_ (~yehudasa@206.169.83.146) has joined #ceph
[2:58] * EthanL (~lamberet@cce02cs4035-fa12-z.ams.hpecore.net) has joined #ceph
[2:58] * david_ (~david@207.107.71.71) has joined #ceph
[2:58] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[2:58] * ntpttr (~ntpttr@192.55.54.42) has joined #ceph
[2:58] * davidzlap (~Adium@2605:e000:1313:8003:b514:3b88:b50c:701c) has joined #ceph
[2:58] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[2:58] * acctor (~acctor@208.46.223.218) has joined #ceph
[2:58] * JPGainsborough (~jpg@71.122.174.182) has joined #ceph
[2:58] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[2:58] * neurodrone (~neurodron@158.106.193.162) has joined #ceph
[2:58] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) has joined #ceph
[2:58] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[2:58] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[2:58] * ron-slc (~Ron@173-165-129-118-utah.hfc.comcastbusiness.net) has joined #ceph
[2:58] * danieagle (~Daniel@177.94.139.84) has joined #ceph
[2:58] * dynamicudpate (~overonthe@199.68.193.62) has joined #ceph
[2:58] * rburkholder (~overonthe@199.68.193.54) has joined #ceph
[2:58] * Racpatel (~Racpatel@2601:87:0:24af::4c8f) has joined #ceph
[2:58] * niknakpaddywak (~xander.ni@outbound.lax.demandmedia.com) has joined #ceph
[2:58] * skorgu (skorgu@pylon.skorgu.net) has joined #ceph
[2:58] * rkeene (1011@oc9.org) has joined #ceph
[2:58] * mjevans (~mjevans@li984-246.members.linode.com) has joined #ceph
[2:58] * penguinRaider (~KiKo@b1.07.01a8.ip4.static.sl-reverse.com) has joined #ceph
[2:58] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[2:58] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) has joined #ceph
[2:58] * Tene (~tene@173.13.139.236) has joined #ceph
[2:58] * dcwangmit01 (~dcwangmit@162-245.23-239.PUBLIC.monkeybrains.net) has joined #ceph
[2:58] * jmn (~jmn@nat-pool-bos-t.redhat.com) has joined #ceph
[2:58] * IvanJobs_ (~ivanjobs@103.50.11.146) has joined #ceph
[2:58] * icey (~Chris@0001bbad.user.oftc.net) has joined #ceph
[2:58] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[2:58] * mlovell (~mlovell@69-195-66-94.unifiedlayer.com) has joined #ceph
[2:58] * Kingrat (~shiny@2605:a000:161a:c0f6:7929:cd91:fc52:9020) has joined #ceph
[2:58] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[2:58] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[2:58] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[2:58] * diegows (~diegows@main.woitasen.com.ar) has joined #ceph
[2:58] * jlayton (~jlayton@cpe-2606-A000-1125-405B-C5-7FF-FE41-3227.dyn6.twc.com) has joined #ceph
[2:58] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[2:58] * shaon (~shaon@shaon.me) has joined #ceph
[2:58] * rektide (~rektide@eldergods.com) has joined #ceph
[2:58] * Kruge (~Anus@198.211.99.93) has joined #ceph
[2:58] * vbellur (~vijay@71.234.224.255) has joined #ceph
[2:58] * `10` (~10@69.169.91.14) has joined #ceph
[2:58] * evilrob (~evilrob@2600:3c00::f03c:91ff:fedf:1d3d) has joined #ceph
[2:58] * Wahmed (~wahmed@206.174.203.195) has joined #ceph
[2:58] * markl (~mark@knm.org) has joined #ceph
[2:58] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[2:58] * verleihnix (~verleihni@195-202-198-60.dynamic.hispeed.ch) has joined #ceph
[2:58] * jamespd (~mucky@mucky.socket7.org) has joined #ceph
[2:58] * Guest2759 (~herrsergi@ec2-107-21-210-136.compute-1.amazonaws.com) has joined #ceph
[2:58] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[2:58] * Pintomatic (sid25118@id-25118.ealing.irccloud.com) has joined #ceph
[2:58] * getzburg (sid24913@id-24913.ealing.irccloud.com) has joined #ceph
[2:58] * aarontc (~aarontc@2001:470:e893::1:1) has joined #ceph
[2:58] * ktdreyer (~kdreyer@polyp.adiemus.org) has joined #ceph
[2:58] * iggy (~iggy@mail.vten.us) has joined #ceph
[2:58] * arbrandes (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) has joined #ceph
[2:58] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph
[2:58] * badone (~badone@66.187.239.16) has joined #ceph
[2:58] * jidar (~jidar@r2d2.fap.me) has joined #ceph
[2:58] * TiCPU (~owrt@c216.218.54-96.clta.globetrotter.net) has joined #ceph
[2:58] * thadood (~thadood@slappy.thunderbutt.org) has joined #ceph
[2:58] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[2:58] * scheuk (~scheuk@204.246.67.78) has joined #ceph
[2:58] * wgao (~wgao@106.120.101.38) has joined #ceph
[2:58] * lurbs_ (user@uber.geek.nz) has joined #ceph
[2:58] * dis_ (~dis@nat-pool-brq-t.redhat.com) has joined #ceph
[2:58] * Nats_ (~natscogs@114.31.195.238) has joined #ceph
[2:58] * ceph-ircslackbot1 (~ceph-ircs@ds9536.dreamservers.com) has joined #ceph
[2:58] * gmmaha (~gmmaha@00021e7e.user.oftc.net) has joined #ceph
[2:58] * LiftedKilt (LiftedKilt@is.in.the.madhacker.biz) has joined #ceph
[2:58] * gregsfortytwo1 (~gregsfort@transit-86-181-132-209.redhat.com) has joined #ceph
[2:58] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[2:58] * marco208 (~root@159.253.7.204) has joined #ceph
[2:58] * JoeJulian (~JoeJulian@108.166.123.190) has joined #ceph
[2:58] * beardo (~beardo__@beardo.cc.lehigh.edu) has joined #ceph
[2:58] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) has joined #ceph
[2:58] * Psi-Jack (~psi-jack@mx.linux-help.org) has joined #ceph
[2:58] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) has joined #ceph
[2:58] * DrewBeer (~DrewBeer@216.152.240.203) has joined #ceph
[2:58] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) has joined #ceph
[2:58] * sage (~quassel@2607:f298:5:101d:f816:3eff:fe21:1966) has joined #ceph
[2:58] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[2:58] * DeMiNe0 (~DeMiNe0@104.131.119.74) has joined #ceph
[2:58] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[2:58] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[2:58] * toastyde1th (~toast@pool-71-255-253-39.washdc.fios.verizon.net) has joined #ceph
[2:58] * Aeso (~aesospade@aesospadez.com) has joined #ceph
[2:58] * elder_ (sid70526@id-70526.charlton.irccloud.com) has joined #ceph
[2:58] * scubacuda (sid109325@0001fbab.user.oftc.net) has joined #ceph
[2:58] * essjayhch (sid79416@id-79416.highgate.irccloud.com) has joined #ceph
[2:58] * devicenull (sid4013@id-4013.ealing.irccloud.com) has joined #ceph
[2:58] * jnq (sid150909@id-150909.highgate.irccloud.com) has joined #ceph
[2:58] * shubjero (~shubjero@107.155.107.246) has joined #ceph
[2:58] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[2:58] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[2:58] * spgriffinjr (~spgriffin@66.46.246.206) has joined #ceph
[2:58] * trociny (~mgolub@93.183.239.2) has joined #ceph
[2:58] * mdxi (~mdxi@li925-141.members.linode.com) has joined #ceph
[2:58] * jackhill (~jackhill@bog.hcoop.net) has joined #ceph
[2:58] * mischief (~mischief@iota.offblast.org) has joined #ceph
[2:58] * skarn (skarn@0001f985.user.oftc.net) has joined #ceph
[2:58] * pasties (~pasties@00021c52.user.oftc.net) has joined #ceph
[2:58] * braderhart (sid124863@braderhart.user.oftc.net) has joined #ceph
[2:58] * bassam (sid154933@id-154933.brockwell.irccloud.com) has joined #ceph
[2:58] * scalability-junk (sid6422@2604:8300:100:200b:6667:2:0:1916) has joined #ceph
[2:58] * ElNounch (sid150478@2604:8300:100:200b:6667:2:2:4bce) has joined #ceph
[2:58] * gtrott (sid78444@2604:8300:100:200b:6667:4:1:326c) has joined #ceph
[2:58] * harbie (~notroot@2a01:4f8:211:2344:0:dead:beef:1) has joined #ceph
[2:58] * Bosse (~bosse@2a03:b0c0:2:d0::e9:a001) has joined #ceph
[2:58] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) has joined #ceph
[2:58] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) has joined #ceph
[2:58] * skullone (~skullone@107.170.239.224) has joined #ceph
[2:58] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) has joined #ceph
[2:58] * espeer (~quassel@41.78.129.253) has joined #ceph
[2:58] * dmanchad (~dmanchad@66.187.233.206) has joined #ceph
[2:58] * Georgyo (~georgyo@2600:3c03::f03c:91ff:feae:505c) has joined #ceph
[2:58] * Larsen (~andreas@2001:67c:578:2::15) has joined #ceph
[2:58] * yebyen (~yebyen@129.21.49.95) has joined #ceph
[2:58] * lkoranda (~lkoranda@213.175.37.10) has joined #ceph
[2:58] * motk (~motk@2600:3c00::f03c:91ff:fe98:51ee) has joined #ceph
[2:58] * Animazing (~Wut@94.242.217.235) has joined #ceph
[2:58] * benner (~benner@188.166.111.206) has joined #ceph
[2:58] * med (~medberry@71.74.177.250) has joined #ceph
[2:58] * brians_ (~brianoftc@brian.by) has joined #ceph
[2:58] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[2:58] * react (~react@retard.io) has joined #ceph
[2:58] * Gugge-47527 (gugge@92.246.2.105) has joined #ceph
[2:58] * marcan (marcan@marcansoft.com) has joined #ceph
[2:58] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[2:58] * mfa298 (~mfa298@krikkit.yapd.net) has joined #ceph
[2:58] * nathani (~nathani@2607:f2f8:ac88::) has joined #ceph
[2:58] * Pies (~Pies@srv229.opcja.pl) has joined #ceph
[2:58] * andrewschoen (~andrewsch@192.237.167.184) has joined #ceph
[2:58] * Hazelesque (~hazel@phobos.hazelesque.uk) has joined #ceph
[2:58] * Randleman (~jesse@89.105.204.182) has joined #ceph
[2:58] * kingcu (~kingcu@kona.ridewithgps.com) has joined #ceph
[2:58] * Superdawg (~Superdawg@ec2-54-243-59-20.compute-1.amazonaws.com) has joined #ceph
[2:58] * Walex (~Walex@72.249.182.114) has joined #ceph
[2:58] * s3an2 (~root@korn.s3an.me.uk) has joined #ceph
[2:58] * jiffe (~jiffe@nsab.us) has joined #ceph
[2:58] * mnaser (~mnaser@162.253.53.193) has joined #ceph
[2:58] * arthurh (~arthurh@38.101.34.128) has joined #ceph
[2:58] * qman__ (~rohroh@2600:3c00::f03c:91ff:fe69:92af) has joined #ceph
[2:58] * remix_tj (~remix_tj@bonatti.remixtj.net) has joined #ceph
[2:58] * GooseYArd (~GooseYArd@ec2-52-5-245-183.compute-1.amazonaws.com) has joined #ceph
[2:58] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[2:58] * folivora (~out@devnull.drwxr-xr-x.eu) has joined #ceph
[2:58] * hughsaunders (~hughsaund@2001:4800:7817:101:1843:3f8a:80de:df65) has joined #ceph
[2:58] * logan (~logan@63.143.60.136) has joined #ceph
[2:58] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) has joined #ceph
[2:58] * sw3 (sweaung@2400:6180:0:d0::66:100f) has joined #ceph
[2:58] * ndru_ (~jawsome@104.236.94.35) has joined #ceph
[2:58] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[2:58] * cronburg_ (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[2:58] * _are__ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[2:58] * kiranos_ (~quassel@109.74.11.233) has joined #ceph
[2:58] * goberle_ (~goberle@mid.ygg.tf) has joined #ceph
[2:58] * abhishekvrshny (~abhishekv@180.179.116.54) has joined #ceph
[2:58] * zirpu (~zirpu@00013c46.user.oftc.net) has joined #ceph
[2:58] * zerick_ (~zerick@104.131.101.65) has joined #ceph
[2:58] * dustinm` (~dustinm`@68.ip-149-56-14.net) has joined #ceph
[2:58] * aiicore (~aiicore@s30.linuxpl.com) has joined #ceph
[2:58] * rossdylan (~rossdylan@2605:6400:1:fed5:22:68c4:af80:cb6e) has joined #ceph
[2:58] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[2:58] * destrudo (~destrudo@tomba.sonic.net) has joined #ceph
[2:58] * jklare (~jklare@185.27.181.36) has joined #ceph
[2:58] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[2:58] * JohnPreston78 (sid31393@ealing.irccloud.com) has joined #ceph
[2:58] * Sketch (~Sketch@2604:180:2::a506:5c0d) has joined #ceph
[2:58] * \ask (~ask@oz.develooper.com) has joined #ceph
[2:58] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[2:59] * ChanServ sets mode +v joao
[3:00] * _28_ria (~kvirc@opfr028.ru) Quit (Ping timeout: 480 seconds)
[3:01] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:354c:9c84:7417:af25) Quit (Ping timeout: 480 seconds)
[3:05] * squizzi_ (~squizzi@107.13.31.195) has joined #ceph
[3:08] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[3:09] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[3:16] * MKoR (~nastidon@27.50.94.251) has joined #ceph
[3:22] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[3:26] * EinstCrazy (~EinstCraz@203.79.187.188) has joined #ceph
[3:31] * linuxkidd (~linuxkidd@mobile-166-171-123-101.mycingular.net) has joined #ceph
[3:32] * linuxkidd (~linuxkidd@mobile-166-171-123-101.mycingular.net) Quit ()
[3:45] * derjohn_mobi (~aj@x590c247c.dyn.telefonica.de) has joined #ceph
[3:46] * MKoR (~nastidon@61TAAAJD9.tor-irc.dnsbl.oftc.net) Quit ()
[3:46] * DJComet (~Kurimus@chulak.enn.lu) has joined #ceph
[3:49] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[3:52] * MentalRay (~MentalRay@LPRRPQ1401W-LP130-02-1242363207.dsl.bell.ca) has joined #ceph
[3:53] * derjohn_mob (~aj@x4db05127.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:57] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[3:59] * acctor (~acctor@208.46.223.218) Quit (Ping timeout: 480 seconds)
[4:00] <chengpeng_> someone here?
[4:02] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[4:04] * Kingrat (~shiny@2605:a000:161a:c0f6:7929:cd91:fc52:9020) Quit (Remote host closed the connection)
[4:05] <chengpeng_> filestore(/var/lib/ceph/osd/ceph-3) error (24) Too many open files not handled on operation 0x1089b318 (14312419.0.11, or op 11, counting from 0)
[4:06] <chengpeng_> and my system setting: [root@SJ-6-Cloud121 ~]# ulimit -n
[4:06] <chengpeng_> 65535
[4:06] <chengpeng_> someone know it ?
[4:06] * jermudgeon (~jhaustin@199.200.6.48) has joined #ceph
[4:08] * Kingrat (~shiny@2605:a000:161a:c0f6:8582:8694:fe4c:1c91) has joined #ceph
[4:09] * jermudgeon (~jhaustin@199.200.6.48) Quit ()
[4:16] * DJComet (~Kurimus@61TAAAJFH.tor-irc.dnsbl.oftc.net) Quit ()
[4:16] * spate (~Random@185.65.134.78) has joined #ceph
[4:19] <badone> chengpeng_: use "lsof -n|wc -l" to work out how many are being used
[4:20] <chengpeng_> ok
[4:20] * Vacuum_ (~Vacuum@88.130.218.182) has joined #ceph
[4:27] * Vacuum__ (~Vacuum@88.130.220.95) Quit (Ping timeout: 480 seconds)
[4:29] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[4:30] * sankarshan (~sankarsha@121.244.87.117) Quit ()
[4:33] <chengpeng_> badone: it shows "34128" when I use "lsof -p 74741 |wc -l" and the pid is the osd's pid
[4:34] <badone> chengpeng_: ok, what are your sysctl's set to?
[4:34] <chengpeng_> 65535
[4:35] <badone> chengpeng_: so fs.file-max is set to 65535 ?
[4:36] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:37] <chengpeng_> [root@SJ-6-Cloud100 ceph]# cat /proc/sys/fs/file-max
[4:37] <chengpeng_> 13100230
[4:37] <chengpeng_> * hard nofile 65535
[4:37] <chengpeng_> * soft nofile 65535 in /etc/security/limits.conf
[4:37] * acctor (~acctor@c-73-170-8-35.hsd1.ca.comcast.net) has joined #ceph
[4:39] <badone> chengpeng_: okay, I'd recommend doubling the "-n" limit and seeing if that stops the errors
[4:40] * kefu (~kefu@114.92.96.253) has joined #ceph
[4:42] <chengpeng_> file-max size ?
[4:42] <chengpeng_> doubling it ?
[4:42] <chengpeng_> I have 10 osd in one host
[4:42] * debian112 (~bcolbert@173-164-167-198-SFBA.hfc.comcastbusiness.net) has joined #ceph
[4:43] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[4:43] * kefu (~kefu@114.92.96.253) has joined #ceph
[4:43] <badone> chengpeng_: lift all the limits, see if that fixes it, then you can bring each one back down until you find which one is causing the issue
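A quick way to see the limit the daemon actually runs with, rather than what a login shell reports (the pid 74741 and osd.3 are taken from the messages above; 131072 is only an example value):

    # limits are per-process; check the running OSD, not the shell
    grep "open files" /proc/74741/limits
    # count the descriptors the daemon currently holds (cheaper than lsof)
    ls /proc/74741/fd | wc -l
    # on a systemd-managed jewel node, raise the cap with a unit override and restart, e.g.
    #   [Service]
    #   LimitNOFILE=131072
    systemctl edit ceph-osd@3
    systemctl restart ceph-osd@3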
[4:44] * squizzi_ (~squizzi@107.13.31.195) Quit (Quit: bye)
[4:44] * davidzlap (~Adium@2605:e000:1313:8003:b514:3b88:b50c:701c) Quit (Quit: Leaving.)
[4:46] <chengpeng_> too bad, The frequency of the issue is once a day
[4:46] * spate (~Random@61TAAAJGK.tor-irc.dnsbl.oftc.net) Quit ()
[4:46] * dontron (~x303@61TAAAJHN.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:46] <badone> chengpeng_: I'd say you are hitting some sort of peak. Does it correspond with any event in the cluster?
[4:47] <badone> or heightened client traffic?
[4:48] <chengpeng_> I try it
[4:50] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[4:55] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[4:56] <IvanJobs_> hi cephers, recently I've been studying the CRUSH algorithm. I found two core functions in crush: crush_choose_firstn and crush_choose_indep, and I didn't get them. Can anyone explain these two functions to me? thx in advance
[4:58] <IvanJobs_> the comment line before crush_choose_indep says it's a "breadth-first positionally stable mapping", so I guessed it would use some queue to do BFS, but I was wrong, it's a recursive func.
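Roughly, and from memory: crush_choose_firstn is the path replicated rules take; it keeps drawing candidates until it has n distinct usable devices, so when one device is rejected the later positions shift up. crush_choose_indep is the path erasure-coded rules take; each result position is computed independently so a failed device only disturbs its own slot (which ends up as CRUSH_ITEM_NONE), which is what "positionally stable" means; the recursion still descends the bucket tree one level at a time, which is what the "breadth-first" comment refers to. The two behaviours can be compared offline with crushtool; the flags below are from memory, so check crushtool --help:

    # compile a decompiled map and simulate a replicated (firstn) rule
    crushtool -c crushmap.txt -o crushmap.bin
    crushtool --test -i crushmap.bin --rule 0 --num-rep 3 --show-mappings
    # weight device 0 to zero and re-run to see which result positions move
    crushtool --test -i crushmap.bin --rule 0 --num-rep 3 --weight 0 0 --show-mappings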
[5:01] * kuku (~kuku@119.93.91.136) has joined #ceph
[5:12] * LiftedKilt (LiftedKilt@is.in.the.madhacker.biz) Quit (Quit: Kilted Southern)
[5:16] * dontron (~x303@61TAAAJHN.tor-irc.dnsbl.oftc.net) Quit ()
[5:19] * brians (~brian@80.111.114.175) has joined #ceph
[5:19] * LiftedKilt (LiftedKilt@is.in.the.madhacker.biz) has joined #ceph
[5:21] * zdzichu (zdzichu@pipebreaker.pl) Quit (Remote host closed the connection)
[5:21] * zdzichu (zdzichu@pipebreaker.pl) has joined #ceph
[5:25] * brians__ (~brian@80.111.114.175) Quit (Ping timeout: 480 seconds)
[5:30] * zdzichu (zdzichu@pipebreaker.pl) Quit (Remote host closed the connection)
[5:30] * zdzichu (zdzichu@pipebreaker.pl) has joined #ceph
[5:34] * adamcrume (~quassel@2601:647:cb01:f890:98c8:3af6:7a0b:cf5) has joined #ceph
[5:34] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:42] * vimal (~vikumar@114.143.164.99) has joined #ceph
[5:42] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) has joined #ceph
[5:45] * swami1 (~swami@27.7.165.149) has joined #ceph
[5:46] * osuka_ (~hgjhgjh@Relay-J.tor-exit.network) has joined #ceph
[5:48] <TheSov> wtb ceph for freenas
[5:49] <TheSov> I'm gonna keep saying until some dev decides he will take up that mantle
[5:50] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[5:52] <[arx]> why not take a swing yourself?
[5:53] * kuku (~kuku@119.93.91.136) has joined #ceph
[5:54] * Vacuum__ (~Vacuum@i59F79C3E.versanet.de) has joined #ceph
[5:56] <[arx]> seems active: https://github.com/wjwithagen/ceph/branches
[5:58] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) Quit (Read error: No route to host)
[5:58] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) has joined #ceph
[6:00] * swami1 (~swami@27.7.165.149) Quit (Quit: Leaving.)
[6:01] * Vacuum_ (~Vacuum@88.130.218.182) Quit (Ping timeout: 480 seconds)
[6:01] * walcubi__ (~walcubi@p5795A823.dip0.t-ipconnect.de) has joined #ceph
[6:04] * MentalRay (~MentalRay@LPRRPQ1401W-LP130-02-1242363207.dsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[6:06] * vimal (~vikumar@114.143.164.99) Quit (Quit: Leaving)
[6:07] * jermudgeon_ (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[6:08] * walcubi_ (~walcubi@p5797A19D.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:11] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) Quit (Ping timeout: 480 seconds)
[6:11] * jermudgeon_ is now known as jermudgeon
[6:16] * osuka_ (~hgjhgjh@7EXAAAG0G.tor-irc.dnsbl.oftc.net) Quit ()
[6:30] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:43] <TheSov> [arx], im terrible at that shit
[6:43] <TheSov> besides im sure the IX guys would be far better at integration. and IX could finally compete with large san vendors with TrueNAS
[6:43] * praveen (~praveen@122.172.49.124) Quit (Read error: Connection reset by peer)
[6:47] * praveen (~praveen@122.171.81.192) has joined #ceph
[6:58] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[6:59] * vimal (~vikumar@121.244.87.116) has joined #ceph
[7:00] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Quit: jermudgeon)
[7:01] * TMM (~hp@92.69.233.129) has joined #ceph
[7:14] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:15] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:16] * Grum (~homosaur@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[7:25] * TMM (~hp@92.69.233.129) Quit (Ping timeout: 480 seconds)
[7:46] * Grum (~homosaur@9YSAAAKZ3.tor-irc.dnsbl.oftc.net) Quit ()
[7:48] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[8:09] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[8:15] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) has joined #ceph
[8:15] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) Quit ()
[8:17] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:26] * moon (~moon@217-19-26-201.dsl.cambrium.nl) has joined #ceph
[8:29] * praveen (~praveen@122.171.81.192) Quit (Remote host closed the connection)
[8:29] * rwheeler (~rwheeler@bzq-82-81-161-50.red.bezeqint.net) has joined #ceph
[8:32] * acctor (~acctor@c-73-170-8-35.hsd1.ca.comcast.net) Quit (Quit: acctor)
[8:35] * danieagle (~Daniel@177.94.139.84) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[8:35] * Jeffrey4l (~Jeffrey@221.192.178.56) has joined #ceph
[8:37] * wr (~Mutter@ip-2-206-0-120.web.vodafone.de) has joined #ceph
[8:37] <wr> Hello @all
[8:39] <wr> Is anyone here at the moment?
[8:40] * acctor (~acctor@c-73-170-8-35.hsd1.ca.comcast.net) has joined #ceph
[8:41] * acctor (~acctor@c-73-170-8-35.hsd1.ca.comcast.net) Quit ()
[8:41] <wr> Hello
[8:46] * nih (~darks@198.100.155.54) has joined #ceph
[8:47] * wr (~Mutter@ip-2-206-0-120.web.vodafone.de) Quit (Remote host closed the connection)
[8:47] * ade (~abradshaw@212.77.58.61) has joined #ceph
[8:48] <badone> we live in the age of instant gratification I guess....
[8:50] * wr (~Mutter@ip-2-206-0-120.web.vodafone.de) has joined #ceph
[8:51] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:52] * moon (~moon@217-19-26-201.dsl.cambrium.nl) Quit (Ping timeout: 480 seconds)
[8:53] * Jeffrey4l (~Jeffrey@221.192.178.56) Quit (Ping timeout: 480 seconds)
[8:56] * wr (~Mutter@ip-2-206-0-120.web.vodafone.de) Quit (Remote host closed the connection)
[8:57] * Jeffrey4l (~Jeffrey@221.192.178.56) has joined #ceph
[9:01] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[9:03] * moon (~moon@217-19-26-201.dsl.cambrium.nl) has joined #ceph
[9:10] * kellyer (~Thunderbi@dub-bdtn-office-r1.net.digiweb.ie) Quit (Quit: kellyer)
[9:15] * swami1 (~swami@223.227.123.43) has joined #ceph
[9:16] * nih (~darks@7EXAAAG4H.tor-irc.dnsbl.oftc.net) Quit ()
[9:20] * swami2 (~swami@223.227.123.43) has joined #ceph
[9:20] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:354c:9c84:7417:af25) has joined #ceph
[9:23] * swami1 (~swami@223.227.123.43) Quit (Ping timeout: 480 seconds)
[9:29] * analbeard (~shw@support.memset.com) has joined #ceph
[9:32] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[9:40] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[9:42] * fsimonce (~simon@host99-64-dynamic.27-79-r.retail.telecomitalia.it) has joined #ceph
[9:43] * moon (~moon@217-19-26-201.dsl.cambrium.nl) Quit (Ping timeout: 480 seconds)
[9:44] * derjohn_mobi (~aj@x590c247c.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[9:44] * Jeffrey4l (~Jeffrey@221.192.178.56) Quit (Ping timeout: 480 seconds)
[9:45] * rendar (~I@host83-178-dynamic.251-95-r.retail.telecomitalia.it) has joined #ceph
[9:57] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:03] * willi (~willi@2a00:1050:4:0:59e5:2d7f:9b30:9c03) has joined #ceph
[10:03] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[10:03] <willi> hi
[10:04] <Hatsjoe> Hi
[10:06] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:06] <willi> i have a problem with my ceph cluster
[10:06] <willi> can you help?
[10:07] * derjohn_mobi (~aj@2001:6f8:1337:0:7589:a5d2:87d2:36e3) has joined #ceph
[10:08] <Hatsjoe> https://workaround.org/getting-help-on-irc/
[10:09] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) has joined #ceph
[10:10] * wr (~Mutter@2a00:1050:0:51:f0a8:2033:56db:c7f2) has joined #ceph
[10:10] <kefu> willi, please just shoot your q, no need to ask to ask.
[10:13] <willi> i have a ceph cluster, 18 data nodes, 3 mon nodes, ubuntu 16.04, ceph jewel. if i hard power down one host, ceph is inaccessible for about 900 seconds
[10:13] <willi> pg_stats timeout
[10:13] * wr (~Mutter@2a00:1050:0:51:f0a8:2033:56db:c7f2) Quit (Remote host closed the connection)
[10:13] <willi> i have set:
[10:13] <willi> ceph tell osd.* injectargs '--mon_osd_adjust_heartbeat_grace=false'
[10:13] <willi> ceph tell osd.* injectargs '--mon_osd_adjust_down_out_interval=false'
[10:13] <willi> because i don't want it to adjust automatically
[10:14] <willi> but now it seems that heartbeats dont work
[10:14] <willi> what can i do?
[10:14] <willi> and in jewel i think realtime injects are not working properly
[10:14] <willi> i get
[10:14] <willi> osd.0: mon_osd_adjust_heartbeat_grace = 'false' (unchangeable)
[10:14] <willi> osd.1: mon_osd_adjust_heartbeat_grace = 'false' (unchangeable)
[10:14] <willi> osd.2: mon_osd_adjust_heartbeat_grace = 'false' (unchangeable)
[10:14] <willi> osd.3: mon_osd_adjust_heartbeat_grace = 'false' (unchangeable)
[10:15] <willi> osd.4: mon_osd_adjust_heartbeat_grace = 'false' (unchangeable)
[10:15] <willi> osd.5: mon_osd_adjust_heartbeat_grace = 'false' (unchangeable)
[10:15] <willi> and so on
[10:15] <willi> if i set the variable in ceph.conf in global it seems to work
[10:15] <willi> after restart of all nodes
[10:15] <willi> the cluster is at the moment testing i can do whatever i want
[10:16] <willi> i have set replica count to 3
[10:16] <willi> i have 3 racks
[10:16] <willi> each rack 6 nodes and one mon per rack
[10:17] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:17] <willi> the goal is that i can down one rack completely
[10:17] <willi> without or with short disruption
[10:17] <willi> 20-30 seconds
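The "(unchangeable)" replies above mean those two options are only read at daemon start, so putting them under [global] in ceph.conf and restarting, as described above, is the supported route. Worth noting: the ~900 second outage matches the default mon osd report timeout (900 s), the monitor-side fallback used when no peer OSD ever reports the dead host down, so the OSD-to-OSD heartbeat/failure-report path may be the real problem. A sketch of the related [global] settings, with the stock defaults shown as example values:

    [global]
        mon osd adjust heartbeat grace = false
        mon osd adjust down out interval = false
        # peers report an unresponsive OSD after this many seconds (default 20)
        osd heartbeat grace = 20
        # a down OSD is marked out after this long (default 600)
        mon osd down out interval = 600
        # monitor-side fallback when no failure reports arrive (default 900)
        mon osd report timeout = 900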
[10:17] <willi> here is my crush map
[10:17] <willi> # begin crush map
[10:17] <willi> tunable choose_local_tries 0
[10:17] <willi> tunable choose_local_fallback_tries 0
[10:17] <willi> tunable choose_total_tries 50
[10:18] <willi> tunable chooseleaf_descend_once 1
[10:18] <willi> tunable chooseleaf_vary_r 1
[10:18] <willi> tunable chooseleaf_stable 1
[10:18] <willi> tunable straw_calc_version 1
[10:18] <willi> tunable allowed_bucket_algs 54
[10:18] <Hatsjoe> Oh no you didnt... please use pastebin...
[10:18] <willi> # devices
[10:18] <willi> device 0 osd.0
[10:18] <willi> device 1 osd.1
[10:18] <willi> device 2 osd.2
[10:18] <willi> device 3 osd.3
[10:18] <willi> device 4 osd.4
[10:18] <willi> device 5 osd.5
[10:18] <willi> device 6 osd.6
[10:18] <willi> device 7 osd.7
[10:18] <willi> device 8 osd.8
[10:18] <willi> device 9 osd.9
[10:18] <willi> device 10 osd.10
[10:18] <willi> device 11 osd.11
[10:18] <willi> device 12 osd.12
[10:18] <willi> device 13 osd.13
[10:18] <willi> device 14 osd.14
[10:18] <willi> device 15 osd.15
[10:18] <willi> device 16 osd.16
[10:18] <willi> device 17 osd.17
[10:18] <willi> device 18 osd.18
[10:18] <willi> device 19 osd.19
[10:18] <willi> device 20 osd.20
[10:18] <willi> device 21 osd.21
[10:18] <willi> device 22 osd.22
[10:18] <willi> device 23 osd.23
[10:19] <willi> device 24 osd.24
[10:19] <willi> device 25 osd.25
[10:19] <willi> device 26 osd.26
[10:19] <willi> device 27 osd.27
[10:19] <willi> device 28 osd.28
[10:19] <willi> device 29 osd.29
[10:19] <willi> device 30 osd.30
[10:19] <willi> device 31 osd.31
[10:19] <willi> device 32 osd.32
[10:19] <willi> device 33 osd.33
[10:19] <willi> device 34 osd.34
[10:19] <willi> device 35 osd.35
[10:19] <willi> device 36 osd.36
[10:19] <willi> device 37 osd.37
[10:19] <willi> device 38 osd.38
[10:19] <willi> device 39 osd.39
[10:19] <willi> device 40 osd.40
[10:19] <willi> device 41 osd.41
[10:19] <willi> device 42 osd.42
[10:19] <willi> device 43 osd.43
[10:19] <willi> device 44 osd.44
[10:19] <willi> device 45 osd.45
[10:19] <willi> device 46 osd.46
[10:19] <willi> device 47 osd.47
[10:19] <willi> device 48 osd.48
[10:19] <willi> device 49 osd.49
[10:19] <willi> device 50 osd.50
[10:19] <willi> device 51 osd.51
[10:19] <willi> device 52 osd.52
[10:19] <willi> device 53 osd.53
[10:20] <willi> device 54 osd.54
[10:20] <willi> device 55 osd.55
[10:20] <willi> device 56 osd.56
[10:20] <willi> device 57 osd.57
[10:20] <willi> device 58 osd.58
[10:20] <willi> device 59 osd.59
[10:20] <willi> device 60 osd.60
[10:20] <willi> device 61 osd.61
[10:20] <willi> device 62 osd.62
[10:20] <willi> device 63 osd.63
[10:20] <willi> device 64 osd.64
[10:20] <willi> device 65 osd.65
[10:20] <willi> device 66 osd.66
[10:20] <willi> device 67 osd.67
[10:20] <willi> device 68 osd.68
[10:20] <willi> device 69 osd.69
[10:20] <willi> device 70 osd.70
[10:20] <willi> device 71 osd.71
[10:20] <willi> device 72 osd.72
[10:20] <willi> device 73 osd.73
[10:20] <willi> device 74 osd.74
[10:20] <willi> device 75 osd.75
[10:20] <willi> device 76 osd.76
[10:20] <willi> device 77 osd.77
[10:20] <willi> device 78 osd.78
[10:20] <willi> device 79 osd.79
[10:20] <willi> device 80 osd.80
[10:20] <willi> device 81 osd.81
[10:20] <willi> device 82 osd.82
[10:20] <willi> device 83 osd.83
[10:21] <willi> device 84 osd.84
[10:21] <willi> device 85 osd.85
[10:21] <willi> device 86 osd.86
[10:21] <willi> device 87 osd.87
[10:21] <willi> device 88 osd.88
[10:21] <willi> device 89 osd.89
[10:21] <willi> # types
[10:21] * Aal (~mollstam@tor1.mysec-arch.net) has joined #ceph
[10:21] <willi> type 0 osd
[10:21] <willi> type 1 host
[10:21] <willi> type 2 chassis
[10:21] <willi> type 3 rack
[10:21] <willi> type 4 row
[10:21] <willi> type 5 pdu
[10:21] <willi> type 6 pod
[10:21] <willi> type 7 room
[10:21] <willi> type 8 datacenter
[10:21] <willi> type 9 region
[10:21] <willi> type 10 root
[10:21] <willi> # buckets
[10:21] <willi> host ceph1 {
[10:21] <willi> id -5 # do not change unnecessarily
[10:21] <willi> # weight 4.545
[10:21] <willi> alg straw
[10:21] <willi> hash 0 # rjenkins1
[10:21] <dvahlin> willi: come on, use a pastebin
[10:21] <willi> item osd.0 weight 0.909
[10:21] <willi> item osd.2 weight 0.909
[10:22] <willi> item osd.1 weight 0.909
[10:22] <willi> item osd.3 weight 0.909
[10:22] <willi> item osd.4 weight 0.909
[10:22] <willi> }
[10:22] <willi> host ceph2 {
[10:22] <willi> id -6 # do not change unnecessarily
[10:22] <willi> # weight 4.545
[10:22] <willi> alg straw
[10:22] <vicente> willi: if you want to paste many information for debugging, you can use http://paste2.org/
[10:22] <willi> hash 0 # rjenkins1
[10:22] <willi> item osd.5 weight 0.909
[10:22] <willi> item osd.6 weight 0.909
[10:22] <willi> item osd.7 weight 0.909
[10:22] <vicente> willi: :)
[10:22] <willi> item osd.8 weight 0.909
[10:22] <willi> item osd.9 weight 0.909
[10:22] <willi> }
[10:22] <willi> host ceph3 {
[10:22] <willi> id -7 # do not change unnecessarily
[10:22] <willi> # weight 4.545
[10:22] <willi> alg straw
[10:22] <willi> hash 0 # rjenkins1
[10:22] <willi> item osd.10 weight 0.909
[10:22] <willi> item osd.11 weight 0.909
[10:23] <willi> item osd.12 weight 0.909
[10:23] <willi> item osd.13 weight 0.909
[10:23] <willi> item osd.14 weight 0.909
[10:23] <Hatsjoe> Once it's pasted it's too late; due to the line limiter it keeps going on, he should disconnect and reconnect, or be kicked from the channel
[10:23] <willi> }
[10:23] <willi> host ceph4 {
[10:23] <willi> id -8 # do not change unnecessarily
[10:23] <willi> # weight 4.545
[10:23] <willi> alg straw
[10:23] <willi> hash 0 # rjenkins1
[10:23] <willi> item osd.15 weight 0.909
[10:23] <willi> item osd.16 weight 0.909
[10:23] <willi> item osd.17 weight 0.909
[10:23] * willi (~willi@2a00:1050:4:0:59e5:2d7f:9b30:9c03) Quit (Remote host closed the connection)
[10:23] * willi (~willi@212.124.32.5) has joined #ceph
[10:23] * willi (~willi@212.124.32.5) Quit ()
[10:23] * willi (~willi@2a00:1050:4:0:59e5:2d7f:9b30:9c03) has joined #ceph
[10:23] <willi> hi
[10:23] <willi> http://pastebin.com/SHQx7VQ1
[10:23] <willi> sorry
[10:23] <willi> i dont know that
[10:24] <Hatsjoe> New to IRC eh?
[10:24] <willi> do you see my messages
[10:24] <willi> yes
[10:24] <willi> sorry
[10:24] <willi> do you get my pastebin?
[10:24] <Hatsjoe> Yes
[10:25] <Hatsjoe> What is the size and min_size of your pool?
[10:25] <willi> here is my ceph.conf
[10:25] <willi> http://pastebin.com/ufG9u6gE
[10:25] <willi> ceph osd dump | grep -i rbd
[10:25] <willi> pool 0 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 2500 pgp_num 2500 last_change 1164 flags hashpspool stripe_width 0
[10:26] <Hatsjoe> Why is your pg_num not a power of 2?
[10:27] <willi> i have read that pg_num and pgp_num must be the same
[10:27] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[10:27] <Hatsjoe> Yes, but they should also be a power of 2
[10:28] <willi> ok i change the pg_num to 5000 ?
[10:28] <willi> ceph daemon osd.0 config show | grep adjust
[10:28] <willi> "mon_osd_adjust_heartbeat_grace": "false",
[10:28] <willi> "mon_osd_adjust_down_out_interval": "false",
[10:28] <Hatsjoe> No no no, 5000 is also not a power of 2...
[10:28] <Hatsjoe> https://en.wikipedia.org/wiki/Power_of_two
[10:29] * rraja (~rraja@121.244.87.117) has joined #ceph
[10:29] * swami2 (~swami@223.227.123.43) Quit (Read error: Connection reset by peer)
[10:30] <Hatsjoe> And why exactly did you put those 2 config values to false, whereas the default is true?
[10:31] <willi> ok firstly
[10:31] <willi> you mean
[10:31] <willi> 2n90
[10:31] <willi> =8192 ?
[10:31] * praveen (~praveen@121.244.155.9) has joined #ceph
[10:32] <willi> and then
[10:32] <Hatsjoe> http://docs.ceph.com/docs/master/rados/operations/placement-groups/#choosing-the-number-of-placement-groups
[10:32] <willi> mon osd adjust heartbeat grace
[10:32] <willi> Description: If set to true, Ceph will scale based on laggy estimations.
[10:32] <willi> Type: Boolean
[10:32] <willi> Default: true
[10:33] <willi> ceph calculates laggy estimations
[10:33] <willi> you can test
[10:33] <willi> sometimes the heartbeat is 20 secondes
[10:33] <willi> sometimes 180 seconds
[10:33] <willi> but my netowrk is lag free
[10:33] * Linkmark (~Linkmark@78-23-211-163.access.telenet.be) has joined #ceph
[10:34] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Read error: Connection reset by peer)
[10:34] <willi> ah okay.. i understand... power of 2 you mean... i must use for my situation 8192 pg_num
[10:34] <Hatsjoe> Exactly
[10:35] <sickology> hello, i have a hung connection on one of my servers, i was testing a one-node ceph server, but now this vm is deleted... but i still have a connection trying to establish to that machine: tcp 0 1 192.168.14.6:52955 192.168.12.12:6789 SYN_SENT -
[10:35] <sickology> how can i figure out which process is it?
[10:36] <sickology> i have ceph-watc 28157 root txt unknown /proc/28157/exe
[10:36] <sickology> which i found with lsof
[10:36] <BranchPredictor> sickology: netstat -anp
[10:36] <sickology> but i can't kill that process
[10:36] <willi> sorry
[10:36] <willi> 4096
[10:37] <willi> 90osd * 100/3 =3000
[10:37] <willi> power of 2 = 4096
[10:37] <willi> okay i have set it
[10:37] <Hatsjoe> Sounds about right
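The rule of thumb applied above is (OSD count * 100) / replica count, rounded up to the next power of two. A minimal sketch of the calculation and the commands to apply it, using the 'rbd' pool, 90 OSDs and size 3 from this conversation (note that pg_num can only ever be increased, never decreased):

    # (90 OSDs * 100) / 3 replicas = 3000 -> next power of two = 4096
    ceph osd pool set rbd pg_num 4096
    ceph osd pool set rbd pgp_num 4096
    # verify
    ceph osd dump | grep "pool 0 'rbd'"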
[10:38] <willi> cluster is backfilling
[10:38] <sickology> BranchPredictor: tcp 0 1 192.168.14.6:53188 192.168.12.12:6789 SYN_SENT -
[10:38] <sickology> no process name here
[10:38] <BranchPredictor> sickology: sudo or su root?
[10:38] <sickology> i am the root
[10:39] * swami1 (~swami@49.38.2.171) has joined #ceph
[10:39] <willi> let me test after the backfill to power of one data node
[10:39] <sickology> i can't remember what have i been doing with this server towards that ceph test vm, but something is still hanging
[10:39] <willi> after that i give you more information
[10:39] <Hatsjoe> Alright willi
[10:40] * praveen (~praveen@121.244.155.9) Quit (Ping timeout: 480 seconds)
[10:40] <Hatsjoe> sickology, what does strace say about that process/PID
[10:40] <willi> pgmap v107335: 4096 pgs: 1 activating, 3 peering, 1252 active+remapped+wait_backfill, 56 active+remapped+backfilling, 2784 active+clean; 1077 GB data, 3272 GB used, 80519 GB / 83792 GB avail; 1131960/2410014 objects misplaced (46.969%); 1635 MB/s, 1002 objects/s recovering
[10:42] <sickology> Hatsjoe: strace: attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted
[10:42] <sickology> :/
[10:42] <Hatsjoe> What is the output of `id`?
[10:43] <sickology> uid=0(root) gid=0(root) groups=0(root)
[10:44] * mashwo00 (~textual@51.179.162.234) has joined #ceph
[10:44] <sickology> Hatsjoe: looks like it is operation not permitted only on this process :S
[10:45] <sickology> i'm getting a lots of messages in my syslog...
[10:45] <Hatsjoe> Is it a zombie process? (check ps output)
[10:45] <sickology> libceph: mon0 192.168.12.12:6789 socket closed (con state CONNECTING)
[10:45] <sickology> no zombies, i have checked
[10:46] * mashwo00 (~textual@51.179.162.234) Quit ()
[10:46] <Hatsjoe> Hmm, I dont think this is Ceph related, but try a reboot, the process/connection should be gone after
[10:47] * Linkmark (~Linkmark@78-23-211-163.access.telenet.be) Quit (Quit: Leaving)
[10:47] * DanFoster (~Daniel@2a00:1ee0:3:1337:e812:5901:c2ff:e4af) has joined #ceph
[10:47] * wr (~Mutter@2a00:1050:0:51:f0a8:2033:56db:c7f2) has joined #ceph
[10:47] <sickology> yes, i kind of wanted to avoid a reboot, but if i have to...
[10:48] * Linkmark (~Linkmark@78-23-211-163.access.telenet.be) has joined #ceph
[10:48] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[10:48] <Hatsjoe> I dont really know if any other way to get rid of these persistent little buggers
[10:49] <sickology> ok, thanks for your help ;)
[10:50] * wr (~Mutter@2a00:1050:0:51:f0a8:2033:56db:c7f2) Quit (Remote host closed the connection)
[10:50] <Hatsjoe> np
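For reference, a minimal sketch of how a SYN_SENT connection can usually be tied back to a process; the address is the dead test monitor from the exchange above. When nothing in userspace owns the socket (for example because the kernel libceph client is doing the retrying, as the "libceph: mon0 ... socket closed" syslog line suggests), there is no PID to kill, which would also explain the ptrace failure, and unmapping/unmounting the ceph client or rebooting is the usual way out.

    # show owning PID/program for every TCP socket (run as root)
    netstat -anp | grep 192.168.12.12:6789
    # or with newer tooling
    ss -tnp dst 192.168.12.12
    # list kernel rbd mappings and ceph mounts that could be driving the retries
    rbd showmapped
    mount -t ceph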
[10:50] <willi> pgmap v108423: 4096 pgs: 1 activating, 1 peering, 197 active+remapped+wait_backfill, 50 active+remapped+backfilling, 3847 active+clean; 1077 GB data, 3273 GB used, 80519 GB / 83792 GB avail; 205670/1946366 objects misplaced (10.567%); 800 MB/s, 502 objects/s recovering
[10:50] <willi> 2016-07-14 10:50:30.385074 mon.0 [INF] osdmap e8884: 90 osds: 90 up, 90 in
[10:51] * Aal (~mollstam@61TAAAJRZ.tor-irc.dnsbl.oftc.net) Quit ()
[10:53] * TMM (~hp@185.5.121.201) has joined #ceph
[10:55] * blank (~Sketchfil@93.115.95.206) has joined #ceph
[10:55] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[10:57] <willi> hatsjoe can you explain to me why i must use type host if i replicate over 3 racks?: step chooseleaf firstn 0 type host
[10:57] <willi> why not: step chooseleaf firstn 0 type rack?
[10:58] <willi> or: step chooseleaf firstn 4 type rack
[10:58] <willi> ?
[11:00] <willi> my goal is that i want to start backfill if a host or osd's go down but not to backfill if a rack goes down
[11:00] <willi> 2016-07-14 11:00:31.487400 mon.0 [INF] pgmap v109172: 4096 pgs: 4096 active+clean; 1077 GB data, 3252 GB used, 80539 GB / 83792 GB avail
[11:00] <willi> now i start to power down node1
[11:01] <Hatsjoe> Not really sure on that one willi, I am just getting into crush tuning myself
[11:01] <Hatsjoe> But by default, failure domain is on host/node level
[11:01] <Hatsjoe> So if you want it to be on rack level, you would need to modify your rules
[11:01] * mashwo00 (~textual@51.179.162.234) has joined #ceph
[11:01] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:02] <willi> okay node powered down
[11:02] <willi> rbd inaccessible
[11:02] <willi> via iscsi gateway
[11:02] <willi> downed at 11:01:00 am
[11:03] <willi> no messages at ceph -w
[11:03] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[11:03] <willi> 2016-07-14 11:02:52.105479 mon.0 [INF] pgmap v109204: 4096 pgs: 4096 active+clean; 1077 GB data, 3252 GB used, 80539 GB / 83792 GB avail
[11:03] <willi> 2016-07-14 11:02:53.110647 mon.0 [INF] pgmap v109205: 4096 pgs: 4096 active+clean; 1077 GB data, 3252 GB used, 80539 GB / 83792 GB avail
[11:03] <willi> 2016-07-14 11:02:55.128627 mon.0 [INF] pgmap v109206: 4096 pgs: 4096 active+clean; 1077 GB data, 3252 GB used, 80539 GB / 83792 GB avail
[11:03] <Hatsjoe> Not really sure what goes on at your setup...
[11:04] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[11:04] * TMM (~hp@185.5.121.201) has joined #ceph
[11:04] <willi> as i said...
[11:04] <willi> ceph tell osd.* injectargs '--mon_osd_adjust_heartbeat_grace=false'
[11:04] <willi> ceph tell osd.* injectargs '--mon_osd_adjust_down_out_interval=false'
[11:05] <willi> and nothing happens
[11:05] <willi> if i wait 900 seconds
[11:05] <Hatsjoe> Have you tried putting those config values back to their defaults?
[11:05] <willi> rbd becomes accessible
[11:05] <willi> let us wait 900 seconds
[11:05] <willi> after that i paste you the ceph -w
[11:06] <willi> after that i re enable the defaults
[11:06] <willi> the i reboot the whole cluster
[11:06] <willi> and then the test again
[11:07] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[11:07] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[11:14] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:14] * praveen (~praveen@121.244.155.9) has joined #ceph
[11:14] * bara (~bara@213.175.37.12) has joined #ceph
[11:16] <willi> okay
[11:16] <willi> here is the output
[11:16] <willi> 2016-07-14 11:16:00.582147 mon.0 [INF] osd.0 marked down after no pg stats for 901.405308seconds
[11:16] <willi> 2016-07-14 11:16:00.582205 mon.0 [INF] osd.1 marked down after no pg stats for 902.265341seconds
[11:16] <willi> 2016-07-14 11:16:00.582257 mon.0 [INF] osd.4 marked down after no pg stats for 900.932334seconds
[11:16] <willi> 2016-07-14 11:16:00.623642 mon.0 [INF] osdmap e9115: 90 osds: 87 up, 90 in
[11:16] <willi> 2016-07-14 11:16:00.628850 mon.0 [INF] pgmap v109217: 4096 pgs: 4096 active+clean; 1077 GB data, 3252 GB used, 80539 GB / 83792 GB avail
[11:16] <willi> 2016-07-14 11:16:01.637496 mon.0 [INF] osdmap e9116: 90 osds: 87 up, 90 in
[11:16] <willi> rbd is now accessible
[11:17] <willi> 2016-07-14 11:16:58.003157 mon.0 [INF] pgmap v109249: 4096 pgs: 715 active+undersized+degraded, 3381 active+clean; 1077 GB data, 3251 GB used, 80540 GB / 83792 GB avail; 107060/1837794 objects degraded (5.825%)
[11:17] <willi> ceph osd tree
[11:17] <willi> ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
[11:17] <willi> -1 81.80997 root default
[11:17] <willi> -2 27.26999 rack rack1
[11:17] <willi> -5 4.54500 host ceph1
[11:17] <willi> 0 0.90900 osd.0 down 1.00000 1.00000
[11:17] <willi> 2 0.90900 osd.2 down 1.00000 1.00000
[11:17] <willi> 1 0.90900 osd.1 down 1.00000 1.00000
[11:17] * evelu (~erwan@37.163.46.253) has joined #ceph
[11:17] <willi> 3 0.90900 osd.3 down 1.00000 1.00000
[11:17] <willi> 4 0.90900 osd.4 down 1.00000 1.00000
[11:17] <willi> thats correct
[11:17] <willi> but wait 900 seconds?
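The 900-second delay above matches the monitor's OSD reporting timeout: with the heartbeat-based failure reports disabled, the monitor only marks the OSDs down after receiving no PG stats from them for that long. A hedged sketch of how to inspect and shorten it; mon_osd_report_timeout is assumed here to be the setting behind the "no pg stats for 900 seconds" message, so verify it against the docs for your release:

    # on a monitor host, check the current value via the admin socket
    ceph daemon mon.ceph-mon-1 config get mon_osd_report_timeout
    # inject a shorter value at runtime
    ceph tell mon.* injectargs '--mon_osd_report_timeout=120'
    # to persist it, add "mon osd report timeout = 120" under [mon] in ceph.conf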
[11:17] <willi> now i change the ceph.conf
[11:18] <willi> can you tell me how to restart the ceph services on the ubuntu nodes so that i don't have to reboot the whole servers?
[11:18] <willi> after
[11:18] <willi> ceph-deploy --overwrite-conf config push ceph-iscsi-1 ceph-iscsi-2 ceph-iscsi-3 ceph-mon-1 ceph-mon-2 ceph-mon-3 ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7 ceph8 ceph9 ceph10 ceph11 ceph12 ceph13 ceph14 ceph15 ceph16 ceph17 ceph18
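To answer the restart question: on Ubuntu 16.04 with jewel the daemons run under systemd, so after pushing the new ceph.conf the services can be restarted per node instead of rebooting. A minimal sketch using the standard ceph unit names (adjust the OSD IDs and monitor hostnames to your nodes):

    # on a monitor node
    systemctl restart ceph-mon@ceph-mon-1
    # on an OSD node: one OSD, or all OSDs on that host
    systemctl restart ceph-osd@0
    systemctl restart ceph-osd.target
    # or every ceph daemon on the box
    systemctl restart ceph.target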
[11:25] * blank (~Sketchfil@7EXAAAG8L.tor-irc.dnsbl.oftc.net) Quit ()
[11:25] * LorenXo (~AGaW@tor-exit.dhalgren.org) has joined #ceph
[11:25] * mashwo00 (~textual@51.179.162.234) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:28] <willi> okay
[11:28] <willi> after reboot of the whole cluster
[11:28] <willi> ceph daemon osd.0 config show | grep adjust
[11:28] <willi> "mon_osd_adjust_heartbeat_grace": "true",
[11:28] <willi> "mon_osd_adjust_down_out_interval": "true",
[11:28] <willi> now i power down one node
[11:29] <willi> 2016-07-14 11:29:33.663135 mon.0 [INF] osd.0 10.250.250.7:6816/3248 failed (17 reporters from different host after 54.853070 >= grace 53.713208)
[11:29] <willi> 2016-07-14 11:29:33.663442 mon.0 [INF] osd.1 10.250.250.7:6804/2881 failed (16 reporters from different host after 54.470697 >= grace 53.639533)
[11:29] <willi> 2016-07-14 11:29:33.663695 mon.0 [INF] osd.2 10.250.250.7:6808/3007 failed (17 reporters from different host after 54.470619 >= grace 53.236229)
[11:29] <willi> 2016-07-14 11:29:33.663947 mon.0 [INF] osd.3 10.250.250.7:6800/2768 failed (17 reporters from different host after 54.470571 >= grace 53.216885)
[11:29] <willi> 2016-07-14 11:29:33.664190 mon.0 [INF] osd.4 10.250.250.7:6812/3127 failed (17 reporters from different host after 54.904858 >= grace 53.641541)
[11:30] * kefu (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:30] * willi (~willi@2a00:1050:4:0:59e5:2d7f:9b30:9c03) Quit ()
[11:30] * willi (~willi@212.124.32.5) has joined #ceph
[11:31] <willi> hi
[11:31] <willi> ??
[11:31] <willi> after defaults i get... mon.0 [INF] osd.0 10.250.250.7:6816/3248 failed (17 reporters from different host after 54.853070 >= grace 53.713208)
[11:31] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[11:31] <DanFoster> willi: We all saw it. I can't help you, but be patient. If someone can help you they will.
[11:31] <willi> but rbd inaccessible
[11:32] <willi> okay thank you
[11:32] <boolman> willi: what are you trying to do?
[11:32] <willi> check out my ceph.conf
[11:32] <willi> http://pastebin.com/ufG9u6gE
[11:32] <willi> and my crush map
[11:32] <willi> http://pastebin.com/SHQx7VQ1
[11:33] <willi> i have a 18 data node ceph cluster plus 3 mon nodes plus 3 iscsi gateway nodes
[11:33] <willi> ubuntu 16.04
[11:33] <willi> ceph jewel
[11:33] <willi> if i shut down one node
[11:33] <willi> rbd becomes inaccessible
[11:33] <willi> for 900 seconds
[11:33] <willi> after
[11:33] <boolman> ceph osd pool ls detail
[11:33] * KindOne (kindone@198.14.192.107) has joined #ceph
[11:33] <willi> 2016-07-14 11:16:00.582147 mon.0 [INF] osd.0 marked down after no pg stats for 901.405308seconds
[11:34] <willi> ceph osd pool ls detail
[11:34] <willi> pool 0 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 4096 pgp_num 4096 last_change 8101 flags hashpspool stripe_width 0
[11:34] <willi> removed_snaps [1~3]
[11:34] <willi> 90 osd / 5 per node = 18 nodes
[11:34] <willi> 6 nodes per rack...
[11:34] <willi> the goal is that i can power down one whole rack
[11:35] <willi> without or with short disrutoption
[11:35] <willi> disruption
[11:35] <boolman> well your crushmap is wrong
[11:35] <willi> hui
[11:35] <willi> why
[11:35] * evelu (~erwan@37.163.46.253) Quit (Ping timeout: 480 seconds)
[11:35] * mashwo00 (~textual@51.179.162.234) has joined #ceph
[11:35] <boolman> you start off in the "root default", then you're choosing based on host
[11:36] <boolman> should either be "step chooseleaf firstn 0 type rack" OR if you change your root default to contain hosts instead of racks
[11:36] <willi> okay i change it to
[11:36] <willi> step chooseleaf firstn 0 type rack
[11:36] <willi> could you give me the crushtool syntax for that
[11:36] <willi> why must i not change to
[11:37] <willi> step chooseleaf firstn 4 type rack
[11:37] <willi> ???
[11:37] <boolman> you first have to export it, decompile it, edit with vim/vi/nano/emacs, then compile it back and inject
[11:37] * ggarg_afk (~ggarg@nat.nue.novell.com) Quit (Read error: Connection reset by peer)
[11:37] <boolman> firstn 0 means the number of replica you have on your bucket ( in your case 3 )
[11:37] <boolman> firstn 4 means 4 replica ( which you dont have on your bucket )
[11:37] <boolman> i mean pool not bucket
[11:37] <willi> okay
[11:37] <willi> so
[11:37] <willi> step chooseleaf firstn 3 type rack
[11:37] <willi> is correct?
[11:37] * ggarg (~ggarg@nat.nue.novell.com) has joined #ceph
[11:38] <willi> howto decode and encode i know....
[11:38] <boolman> nah you should have 0 if you dont plan on doing some more advanced crushmap settings
[11:38] <boolman> https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/storage-strategies/chapter-10-editing-a-crush-map
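The export/edit/inject cycle boolman describes, together with the rule change under discussion, as a minimal sketch; the file names are placeholders and "firstn 0 type rack" spreads the pool's 3 replicas across racks:

    # export and decompile the current map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit the rule in crushmap.txt, e.g.:
    #   step take default
    #   step chooseleaf firstn 0 type rack
    #   step emit
    # recompile and inject
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new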
[11:39] <willi> i change it to rack...
[11:39] <willi> after that i do my test again
[11:39] <willi> power down only node 1
[11:39] <boolman> yeah, bbl lunch
[11:40] <willi> crushmap changed...
[11:40] <willi> backfill started
[11:48] * ggarg (~ggarg@nat.nue.novell.com) Quit (Ping timeout: 480 seconds)
[11:48] * ggarg (~ggarg@nat.nue.novell.com) has joined #ceph
[11:55] * LorenXo (~AGaW@7EXAAAG9I.tor-irc.dnsbl.oftc.net) Quit ()
[11:59] <willi> ok backfill completed
[11:59] <willi> now powered down ceph node 1
[11:59] <willi> and now rbd is offline
[11:59] <willi> 2016-07-14 11:58:39.564996 mon.0 [INF] osd.2 10.250.250.7:6808/3007 failed (14 reporters from different host after 48.860331 >= grace 48.835300)
[11:59] <willi> 2016-07-14 11:58:39.565229 mon.0 [INF] osd.3 10.250.250.7:6800/2768 failed (14 reporters from different host after 48.860239 >= grace 48.828432)
[11:59] <willi> 2016-07-14 11:58:44.565987 mon.0 [INF] osd.0 10.250.250.7:6816/3248 failed (15 reporters from different host after 54.374833 >= grace 49.552053)
[11:59] <willi> 2016-07-14 11:58:44.566211 mon.0 [INF] osd.1 10.250.250.7:6804/2881 failed (15 reporters from different host after 53.861898 >= grace 49.468378)
[11:59] <willi> 2016-07-14 11:58:44.566465 mon.0 [INF] osd.4 10.250.250.7:6812/3127 failed (15 reporters from different host after 53.861539 >= grace 49.470015)
[12:00] * offer (~w2k@65.19.167.131) has joined #ceph
[12:00] * i_m (~ivan.miro@deibp9eh1--blueice4n0.emea.ibm.com) has joined #ceph
[12:01] <willi> crush map
[12:01] <willi> http://pastebin.com/3LWefLHN
[12:01] <willi> ceph.conf
[12:01] <willi> http://pastebin.com/rLN49AZQ
[12:02] * vikhyat (~vumrao@121.244.87.116) Quit (Read error: Connection reset by peer)
[12:02] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[12:30] * offer (~w2k@9YSAAAK61.tor-irc.dnsbl.oftc.net) Quit ()
[12:31] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[12:31] <willi> can anyone tell me what this problem is ???
[12:31] <willi> [ 14.841849] libceph: mon1 10.250.250.5:6789 socket closed (con state CONNECTING)
[12:31] <willi> [ 21.911543] libceph: mon0 10.250.250.4:6789 feature set mismatch, my 106b84a842a42 < server's 40106b84a842a42, missing 400000000000000
[12:31] <willi> on the ceph iscsi rbd gateway
[12:32] <willi> [ 21.911551] libceph: mon0 10.250.250.4:6789 missing required protocol features
[12:32] <willi> [ 31.895863] libceph: mon0 10.250.250.4:6789 feature set mismatch, my 106b84a842a42 < server's 40106b84a842a42, missing 400000000000000
[12:34] <boolman> feature set mismatch I recently encountered, I needed to upgrade the client and change the crush tunables
[12:35] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[12:39] * SaneSmith (~Azerothia@109.236.90.209) has joined #ceph
[12:41] <Hatsjoe> willi whats the output of `uname -a` on both the client and server?
[12:41] <willi> root@ceph-mon-1:~# uname -a
[12:41] <willi> Linux ceph-mon-1 4.4.0-28-lowlatency #47-Ubuntu SMP PREEMPT Fri Jun 24 10:57:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[12:42] <willi> root@ceph1:~# uname -a
[12:42] <willi> Linux ceph1 4.4.0-28-lowlatency #47-Ubuntu SMP PREEMPT Fri Jun 24 10:57:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[12:42] <willi> root@ceph-iscsi-1:~# uname -a
[12:42] <willi> Linux ceph-iscsi-1 4.4.0-28-lowlatency #47-Ubuntu SMP PREEMPT Fri Jun 24 10:57:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[12:42] <Hatsjoe> Your crush tunables should be set to hammer
[12:42] <Hatsjoe> docs.ceph.com/docs/master/rados/operations/crush-map/#tunables
[12:42] <Hatsjoe> http://docs.ceph.com/docs/master/rados/operations/crush-map/#tunables
[12:42] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[12:43] <willi> why should i set the crush tunables to hammer?
[12:43] <willi> i have jewel
[12:43] <boolman> willi: its just a profile
[12:43] <Hatsjoe> Because the jewel tunables are only supported on kernel 4.5 and above
[12:44] <boolman> i'm guessing you have bobtail like I had before I changed to hammer
[12:44] <Hatsjoe> So you can either upgrade your kernel, or change your tunables
[12:44] <boolman> ceph osd crush show-tunables
[12:44] <willi> http://pastebin.com/VAmn8ca3
[12:45] <Hatsjoe> willi if you do `ceph osd crush tunables hammer`, the error should disappear.. Or upgrade your kernel to 4.5+
[12:46] <willi> okay why not ceph osd crush tunables infernalis ???
[12:47] <Hatsjoe> Because the hammer tunables are the latest ones your kernel supports, read the link I pasted to the docs..
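A short sketch of the check-and-change sequence discussed above. The 4.4 kernel client only advertises feature bits up to the hammer tunables, which is why the monitor rejects it; note that switching tunables profiles will trigger some data movement. The image name 'vmware' is the one used elsewhere in this log.

    # see which tunables profile the cluster currently requires
    ceph osd crush show-tunables
    # relax it to something the 4.4 kernel client understands
    ceph osd crush tunables hammer
    # re-check from the client
    dmesg | tail
    rbd map rbd/vmware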
[12:47] * maxx2042 (~m.vernimm@mta.comparegroup.eu) has joined #ceph
[12:47] <willi> ok i have changed it
[12:48] <willi> now i test it again
[12:48] <maxx2042> Guys, we've been running a ceph RBD cluster for 2 years and now I'm trying to add a RadosGW but running into an issue here when using ceph-deploy:
[12:49] <maxx2042> [ceph_deploy][ERROR ] RuntimeError: bootstrap-rgw keyring not found; run 'gatherkeys'
[12:49] <maxx2042> but the bootstrap-rgw folder contains a keyring just fine and it's the same as on the other ceph nodes
[12:49] <maxx2042> is there a way to get some more information from ceph as to why it's failing?
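maxx2042's question goes unanswered in the channel; a hedged sketch of the usual recovery path for that ceph-deploy error, assuming healthy monitors: re-gather the bootstrap keys into the ceph-deploy working directory, and if the cluster predates rgw bootstrap keys, create one by hand before retrying (the hostnames are placeholders).

    # from the ceph-deploy admin directory
    ceph-deploy gatherkeys <mon-host>
    # if client.bootstrap-rgw does not exist yet on this older cluster, create it
    ceph auth get-or-create client.bootstrap-rgw mon 'allow profile bootstrap-rgw' \
        -o ceph.bootstrap-rgw.keyring
    ceph-deploy rgw create <gateway-host>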
[12:51] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[12:52] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[12:54] * shylesh__ (~shylesh@121.244.87.118) has joined #ceph
[12:57] * willi (~willi@212.124.32.5) Quit (Remote host closed the connection)
[12:59] <maxx2042> or does anyone have any clues?
[13:00] * gregmark (~Adium@68.87.42.115) has joined #ceph
[13:03] * Xmd (~Xmd@78.85.35.236) Quit (Read error: Connection reset by peer)
[13:04] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:04] * dan__ (~Daniel@office.34sp.com) has joined #ceph
[13:09] * SaneSmith (~Azerothia@61TAAAJWG.tor-irc.dnsbl.oftc.net) Quit ()
[13:11] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:354c:9c84:7417:af25) Quit (Ping timeout: 480 seconds)
[13:11] <s3an2> When increasing 'mds_cache_size' on a MDS server - other than the extra RAM required are there any other things I should be thinking about?
[13:11] * DanFoster (~Daniel@2a00:1ee0:3:1337:e812:5901:c2ff:e4af) Quit (Ping timeout: 480 seconds)
[13:14] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:b1dd:1c39:b635:bd88) has joined #ceph
[13:22] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[13:25] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[13:26] * kmajk (~kmajk@nat-hq.ext.getresponse.com) has joined #ceph
[13:26] <kmajk> hello
[13:27] <kmajk> how to add multiple instances in one cluster in one zone of radosgw in new active / active jewel multi site setup ?
[13:27] <kmajk> multiple instances of rgw
[13:28] * c0dice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[13:28] <kmajk> active/active works fine between two zones, but i want to add more rgw in each zone (loadbalancing)
[13:30] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (Ping timeout: 480 seconds)
[13:30] * willi (~willi@p200300774E3708FC9D08C6E0F20E68B7.dip0.t-ipconnect.de) has joined #ceph
[13:38] * willi (~willi@p200300774E3708FC9D08C6E0F20E68B7.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[13:39] * kmajk (~kmajk@nat-hq.ext.getresponse.com) Quit (Ping timeout: 480 seconds)
[13:43] * willi (~willi@p200300774E3708FC9D08C6E0F20E68B7.dip0.t-ipconnect.de) has joined #ceph
[13:43] * pdrakeweb (~pdrakeweb@oh-76-5-106-72.dhcp.embarqhsd.net) has joined #ceph
[13:47] * Racpatel (~Racpatel@2601:87:0:24af::4c8f) Quit (Quit: Leaving)
[13:50] * EinstCrazy (~EinstCraz@203.79.187.188) Quit (Read error: Connection reset by peer)
[13:51] * EinstCrazy (~EinstCraz@203.79.187.188) has joined #ceph
[13:52] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[13:58] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[14:05] * Racpatel (~Racpatel@2601:87:0:24af::4c8f) has joined #ceph
[14:09] * natarej (~natarej@101.188.54.14) has joined #ceph
[14:13] * dontron (~Salamande@82.94.251.227) has joined #ceph
[14:20] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[14:20] <willi> hi
[14:20] <willi> after set ceph osd crush tunables hammer
[14:20] <willi> i powered down the node1
[14:20] <willi> on my iscsi node the rbd hangs
[14:20] <willi> root@ceph-iscsi-1:~# rbd bench-write vmware --pool=rbd --io-total 1G
[14:20] <willi> bench-write io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
[14:20] <willi> SEC OPS OPS/SEC BYTES/SEC
[14:21] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[14:21] <willi> hatsjoe / boolman are you out there?
[14:21] <willi> anyone else?
[14:22] <Hatsjoe> What does ceph -s/-w say? And what about the log files?
[14:22] * Kurt (~Adium@2001:628:1:5:dd3b:9448:619b:12cf) Quit (Quit: Leaving.)
[14:23] <willi> root@ceph-mon-1:~# ceph -s
[14:23] <willi> cluster adb9fb36-5986-4409-8c90-514a67305695
[14:23] <willi> health HEALTH_OK
[14:23] <willi> monmap e1: 3 mons at {ceph-mon-1=10.250.250.4:6789/0,ceph-mon-2=10.250.250.5:6789/0,ceph-mon-3=10.250.250.6:6789/0}
[14:23] <willi> election epoch 464, quorum 0,1,2 ceph-mon-1,ceph-mon-2,ceph-mon-3
[14:23] <willi> osdmap e12479: 90 osds: 90 up, 90 in
[14:23] <willi> flags sortbitwise
[14:23] <willi> pgmap v116439: 4096 pgs, 1 pools, 1077 GB data, 598 kobjects
[14:23] <willi> 3262 GB used, 80529 GB / 83792 GB avail
[14:23] <willi> 4096 active+clean
[14:23] <Hatsjoe> Please use pastebin if you paste more than 2 lines
[14:23] <willi> oh okay no problem
[14:24] <Hatsjoe> Have you checked the log files?
[14:24] <willi> which log files on which server
[14:24] <Hatsjoe> Ehm, the ceph log files on all of them?
[14:24] <willi> hmm
[14:25] <willi> 24 servers...
[14:25] <Hatsjoe> Step 1 in troubleshooting: log files...
[14:25] <Hatsjoe> Well, start at one server where it goes wrong
[14:25] <willi> i give you from mon1/iscsi1/ceph1 (powered down)/ceph2
[14:25] <Hatsjoe> And be smart and lazy, write scripts and what not
[14:25] <Hatsjoe> No I want you to look for yourself first, we are not here to hold your hand
[14:25] * kefu (~kefu@183.193.119.183) has joined #ceph
[14:25] <willi> thats clear
[14:25] <Hatsjoe> Check the log files for errors, google the errors, and if you cannot figure it out, we'
[14:25] <Hatsjoe> we're more than happy to help
[14:26] <willi> thats why iam here
[14:26] <willi> i know where the log files are
[14:26] <Hatsjoe> But you havent checked them
[14:30] <willi> i think there is anything wrong with the heartbeat / dont know what
[14:30] <willi> 2016-07-14 14:28:08.316317 7feb55e2e700 -1 osd.0 12479 heartbeat_check: no reply from osd.89 since back 2016-07-14 14:22:16.843393 front 2016-07-14 14:22:16.843393 (cutoff 2016-07-14 14:27:48.315981)
[14:33] <willi> no other errors in the log files
[14:33] <willi> http://pastebin.com/NbR0zDai
[14:33] * kefu_ (~kefu@114.92.96.253) has joined #ceph
[14:34] * kmajk (~kmajk@nat-hq.ext.getresponse.com) has joined #ceph
[14:37] * kefu (~kefu@183.193.119.183) Quit (Ping timeout: 480 seconds)
[14:38] <willi> could it be a problem with libjemalloc ??
[14:39] <willi> i had enabled it in /etc/default
[14:39] <Hatsjoe> Not sure, I'm not using it
[14:42] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[14:42] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[14:43] * dontron (~Salamande@9YSAAALA5.tor-irc.dnsbl.oftc.net) Quit ()
[14:43] * Sigma (~vegas3@snowfall.relay.coldhak.com) has joined #ceph
[14:45] * mjeanson_ (~mjeanson@bell.multivax.ca) has joined #ceph
[14:46] * rwheeler (~rwheeler@bzq-82-81-161-50.red.bezeqint.net) Quit (Remote host closed the connection)
[14:46] <kmajk> how to add multiple rgw instances in one ceph cluster in one zone in new active / active jewel multi site setup ?
[14:46] <kmajk> active/active works fine between two zones, but i want to add more rgw in each zone (loadbalancing)
[14:48] * mjeanson_ (~mjeanson@bell.multivax.ca) Quit (Remote host closed the connection)
[14:49] * penguinRaider (~KiKo@b1.07.01a8.ip4.static.sl-reverse.com) Quit (Ping timeout: 480 seconds)
[14:55] * IvanJobs_ (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[14:59] * penguinRaider (~KiKo@204.152.207.173) has joined #ceph
[15:02] <willi> do you know about this problem?
[15:02] <willi> ceph2 kernel: [ 487.326243] perf interrupt took too long (2546 > 2500), lowering kernel.perf_event_max_sample_rate to 50000
[15:07] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[15:09] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[15:09] <- *johnavp19891* To prove that you are human, please enter the result of 8+3
[15:09] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[15:10] * nils_____ (~nils_@doomstreet.collins.kg) has joined #ceph
[15:11] * nils_ (~nils_@doomstreet.collins.kg) Quit (Ping timeout: 480 seconds)
[15:11] <willi> i get strange things in my syslog when i power down node1. syslog on all other servers the same things
[15:11] <willi> Jul 14 15:10:02 ceph4 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20data.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20data.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:07.0/0000:06:00.0/host4/target4:1:0/4:1:0:6/block/sdg/sdg1 and /sys/devices/pci0000:00/0000:00:07.0/0000:06:00.0/host4/target4:1:0/4:1:0:4/block/sde/sde1
[15:11] * kmajk (~kmajk@nat-hq.ext.getresponse.com) Quit (Ping timeout: 480 seconds)
[15:12] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:13] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:13] * Sigma (~vegas3@9YSAAALCG.tor-irc.dnsbl.oftc.net) Quit ()
[15:13] * TMM (~hp@185.5.121.201) Quit (Ping timeout: 480 seconds)
[15:13] <willi> 2016-07-14 14:53:12.837602 7fdf2cc7c700 0 -- 10.250.250.8:0/3188 >> 10.250.250.9:6811/2950 pipe(0x55aba1574000 sd=294 :41040 s=1 pgs=0 cs=0 l=1 c=0x55aba1559080).connect claims to be 10.250.250.9:6811/3074 not 10.250.250.9:6811/2950 - wrong node!
[15:15] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Quit: Leaving.)
[15:15] * mjeanson_ (~mjeanson@bell.multivax.ca) has joined #ceph
[15:16] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Read error: No route to host)
[15:17] * nils_____ (~nils_@doomstreet.collins.kg) Quit (Quit: Leaving)
[15:18] * mjeanson__ (~mjeanson@bell.multivax.ca) has joined #ceph
[15:18] * mjeanson_ (~mjeanson@bell.multivax.ca) Quit (Read error: Connection reset by peer)
[15:19] * mjeanson__ (~mjeanson@bell.multivax.ca) Quit (Remote host closed the connection)
[15:19] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[15:20] * georgem (~Adium@45.72.209.18) has joined #ceph
[15:20] * georgem (~Adium@45.72.209.18) Quit ()
[15:20] * georgem (~Adium@206.108.127.16) has joined #ceph
[15:22] * nils_ (~nils_@doomstreet.collins.kg) Quit (Read error: Connection reset by peer)
[15:22] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[15:23] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[15:23] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:26] * EinstCrazy (~EinstCraz@203.79.187.188) Quit (Ping timeout: 480 seconds)
[15:29] * EinstCrazy (~EinstCraz@60-249-152-164.HINET-IP.hinet.net) has joined #ceph
[15:31] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[15:31] * Jeffrey4l (~Jeffrey@45.32.12.91) has joined #ceph
[15:36] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[15:36] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:39] * pdrakewe_ (~pdrakeweb@oh-76-5-106-72.dhcp.embarqhsd.net) has joined #ceph
[15:40] * TMM (~hp@185.5.121.201) has joined #ceph
[15:42] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[15:42] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[15:43] * EinstCrazy (~EinstCraz@60-249-152-164.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[15:43] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) has joined #ceph
[15:43] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:46] * pdrakeweb (~pdrakeweb@oh-76-5-106-72.dhcp.embarqhsd.net) Quit (Ping timeout: 480 seconds)
[15:53] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[15:55] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[15:58] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[16:00] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[16:01] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[16:02] * yanzheng (~zhyan@125.70.22.67) Quit (Quit: This computer has gone to sleep)
[16:03] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Ping timeout: 480 seconds)
[16:04] * jargonmonk (jargonmonk@00022354.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:04] * kmajk (~kmajk@nat-hq.ext.getresponse.com) has joined #ceph
[16:04] * maxx2042 (~m.vernimm@mta.comparegroup.eu) Quit (Quit: maxx2042)
[16:05] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[16:08] * penguinRaider (~KiKo@204.152.207.173) Quit (Ping timeout: 480 seconds)
[16:09] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) has joined #ceph
[16:12] * TMM (~hp@185.5.121.201) Quit (Ping timeout: 480 seconds)
[16:13] * Epi (~Sami345@91.108.183.10) has joined #ceph
[16:14] * Jeffrey4l (~Jeffrey@45.32.12.91) Quit (Ping timeout: 480 seconds)
[16:18] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:20] * TMM (~hp@185.5.121.201) has joined #ceph
[16:20] * penguinRaider (~KiKo@204.152.207.173) has joined #ceph
[16:25] * pdrakeweb (~pdrakeweb@oh-76-5-108-60.dhcp.embarqhsd.net) has joined #ceph
[16:31] * pdrakewe_ (~pdrakeweb@oh-76-5-106-72.dhcp.embarqhsd.net) Quit (Ping timeout: 480 seconds)
[16:32] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[16:35] * overload (~oc-lram@79.108.113.172.dyn.user.ono.com) Quit (Remote host closed the connection)
[16:40] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[16:40] <willi> hatsjoe are you there?
[16:40] <willi> i have found the problem
[16:40] <willi> and solved
[16:41] <willi> it was a network problem
[16:41] <willi> each rack has 2 hp procurve switches
[16:41] <willi> each node has a dual-port 10 gig lan card
[16:41] <willi> configured as bond 802.3ad
[16:41] <willi> the procurve switches were configured as distributed trunk
[16:41] <willi> bond mode 4 in linux
[16:42] <willi> was the problem
[16:42] <willi> bond mode 1 works fine
[16:42] <Hatsjoe> Nice
[16:42] <Hatsjoe> The network can do weird stuff to ceph
[16:43] <willi> and on the procurve switches i must kill the dt-lacp trunk
[16:43] <willi> on each switch
[16:43] * Epi (~Sami345@91.108.183.10) Quit ()
[16:43] * swami1 (~swami@49.38.2.171) Quit (Quit: Leaving.)
[16:43] * Maza (~allenmelo@chomsky.torservers.net) has joined #ceph
[16:43] <willi> dmesg in linux told me the problem
[16:43] <willi> one moment
[16:44] <willi> these here
[16:44] <willi> Jul 14 16:28:58 ceph-mon-1 kernel: [ 1931.324703] bond0: An illegal loopback occurred on adapter (ens1f1)
[16:44] <willi> on all servers
[16:44] <willi> mon/iscsi/data
[16:45] <Hatsjoe> Alright
[16:45] <willi> now i test it to shut down a whole rack
[16:45] * kmajk (~kmajk@nat-hq.ext.getresponse.com) Quit (Ping timeout: 480 seconds)
[16:45] <willi> hope it works just like with one server
[16:45] <Hatsjoe> Indeed
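For reference, a minimal sketch of the two bonding modes being compared, as ifupdown stanzas on Ubuntu 16.04; the interface names and address are placeholders, mode 4 (802.3ad/LACP) needs a correctly formed LAG on the switch side, mode 1 (active-backup) does not.

    # /etc/network/interfaces (fragment)
    auto bond0
    iface bond0 inet static
        address 10.250.250.7
        netmask 255.255.255.0
        bond-slaves ens1f0 ens1f1
        bond-miimon 100
        # what was failing across the two distributed-trunk switches:
        # bond-mode 802.3ad
        # what ended up working:
        bond-mode active-backup
        bond-primary ens1f0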
[16:46] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[16:46] * swami1 (~swami@49.38.2.171) has joined #ceph
[16:47] * swami1 (~swami@49.38.2.171) Quit ()
[16:48] * joshd1 (~jdurgin@2602:30a:c089:2b0:c1ed:ae1b:566c:f926) has joined #ceph
[16:50] * jargonmonk (jargonmonk@00022354.user.oftc.net) has joined #ceph
[16:51] * kmajk (~kmajk@nat-hq.ext.getresponse.com) has joined #ceph
[16:52] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:56] * noahw (~noahw@eduroam-169-233-234-163.ucsc.edu) has joined #ceph
[17:04] * jargonmonk (jargonmonk@00022354.user.oftc.net) Quit (Quit: jargonmonk)
[17:07] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[17:08] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:08] * wushudoin (~wushudoin@38.99.12.237) has joined #ceph
[17:09] * jargonmonk (jargonmonk@00022354.user.oftc.net) has joined #ceph
[17:13] * scg (~zscg@valis.gnu.org) has joined #ceph
[17:13] * Maza (~allenmelo@7EXAAAHKH.tor-irc.dnsbl.oftc.net) Quit ()
[17:13] * Dinnerbone (~danielsj@ns316491.ip-37-187-129.eu) has joined #ceph
[17:19] * kefu_ is now known as kefu|afk
[17:20] * ade (~abradshaw@212.77.58.61) Quit (Ping timeout: 480 seconds)
[17:21] * debian112 (~bcolbert@173-164-167-198-SFBA.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[17:25] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[17:31] * gillesMo (~gillesMo@00012912.user.oftc.net) has joined #ceph
[17:36] * Uniqqqq (~oftc-webi@host86-130-221-20.range86-130.btcentralplus.com) has joined #ceph
[17:36] <Uniqqqq> hey all
[17:37] <Uniqqqq> sorry to spam, but i figured this would be the best place to check..
[17:37] * shylesh__ (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[17:37] <Uniqqqq> i just made this post https://www.reddit.com/r/ceph/comments/4stubn/zfs_dedup_and_ceph/ would massively appreciate it if anyone took a look over it
[17:37] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[17:39] * kefu|afk (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[17:39] <Sketch> i have no experience with zfs on ceph or vice versa, but zfs dedup has pretty high resource requirements, using compression is generally considered a much better idea in most circumstances
[17:39] * vata (~vata@207.96.182.162) has joined #ceph
[17:40] <snelly> hi all. Can anybody briefly explain the difference between a Ceph "metadata server" and a "monitor server"?
[17:40] <snelly> Are they the same thing?
[17:40] <Sketch> the general recommendation is 1gb per 1tb of disk space, though the zol devs say that is actually higher than you really need
[17:40] <Sketch> er, 1gb of ram
[17:41] <snelly> n/m....found a doc that explains
[17:41] * i_m (~ivan.miro@deibp9eh1--blueice4n0.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[17:42] <Uniqqqq> cheers Sketch, the data is compressed then encrypted before it gets to the server in this instance, so i was hoping that dedup might be more appropriate, just in case the same "chunk" is uploaded multiple times (or on the off-chance that an encrypted block is duplicated somewhere in amongst the mess of files)
[17:42] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:42] <Sketch> dedup is likely to do nothing for encrypted data
[17:43] <Sketch> maybe unless you're using some sort of convergent encryption
[17:43] <Uniqqqq> that's correct, but multiple copies of the same encrypted data would be a perfect candidate for de dup
[17:43] * Dinnerbone (~danielsj@7EXAAAHL4.tor-irc.dnsbl.oftc.net) Quit ()
[17:43] * n0x1d (~Rens2Sea@torrelay6.tomhek.net) has joined #ceph
[17:43] <Sketch> true
[17:43] <Uniqqqq> (i guess?) i'm not so familiar with it
[17:44] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:44] <Sketch> http://constantin.glez.de/blog/2011/07/zfs-dedupe-or-not-dedupe
[17:46] * kefu (~kefu@114.92.96.253) has joined #ceph
[17:46] <Uniqqqq> didn't know about -S, that's very cool thanks
[17:48] * davidz (~davidz@2605:e000:1313:8003:f85b:1c82:1c0c:9737) has joined #ceph
[17:54] * debian112 (~bcolbert@64.235.157.198) has joined #ceph
[17:58] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:59] <TheSov> did you just post on reddit?
[17:59] <TheSov> cuz i responded to you
[17:59] <TheSov> I was saying, if you use zfs, then perhaps its best to utilize larger than 1 disk per osd zpools
[18:00] <TheSov> because dedup for every disk would require a catastrophic use of ram
[18:00] <TheSov> for each osd host, setup a large raidz or raidz2, use a NVME slog
[18:00] <Uniqqqq> gotcha thanks very much mate
[18:00] <TheSov> pack the system with as much ram as possible and run ceph with size 2 or even maybe, maybe 1
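A hedged sketch of the layout TheSov describes: one raidz2 pool per OSD host with an NVMe SLOG, and a single OSD (or a few) on top. Device, pool and dataset names are placeholders, and ceph on ZFS is not the mainstream setup, so treat this as an experiment rather than a recipe.

    # one pool per host: six data disks in raidz2 plus an NVMe log device
    zpool create -o ashift=12 tank raidz2 sdb sdc sdd sde sdf sdg log nvme0n1
    zfs create -o xattr=sa -o compression=lz4 tank/osd0
    # enable dedup only if the host really has the RAM for it
    # zfs set dedup=on tank/osd0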
[18:00] <Uniqqqq> any ideas on the dedicated monitor layouts and which one is generally preferred for this style of deployment?
[18:00] <TheSov> use 3 monitors
[18:01] <TheSov> on 3 different power sources
[18:01] <Uniqqqq> (ie. the one link having 1 monitor dedicated, the other having 1 mon per osd)
[18:01] <TheSov> connected to 3 different switches
[18:01] <Uniqqqq> yep that was my gut feeling too, thanks
[18:01] <TheSov> dont mix monitors and osds
[18:01] <TheSov> is this for a corporate environment?
[18:01] <Uniqqqq> is there any drawback to having a monitor being put on the same node as the file server OS? or would you recommend 3 dedicated monitor instances?
[18:01] <Uniqqqq> haha nvm you answered my question before i asked it haha
[18:02] <Uniqqqq> it will eventually be for a corp environment yeah
[18:02] <TheSov> then get dedicated monitors
[18:02] <TheSov> use a SSD for /var
[18:02] <Uniqqqq> fantastic, thanks so much for your time, really helped me out
[18:02] <TheSov> no probs
[18:02] <Uniqqqq> i will go and update the reddit post now so googlers get some answers too
[18:02] <TheSov> im on reddit and usually in here
[18:02] <TheSov> under the same name
[18:03] <TheSov> I think i will try ubuntu with zfs and ceph
[18:03] <TheSov> see how well that works
[18:03] * TMM (~hp@185.5.121.201) Quit (Ping timeout: 480 seconds)
[18:03] * karnan (~karnan@106.51.143.240) has joined #ceph
[18:05] * Linkmark (~Linkmark@78-23-211-163.access.telenet.be) Quit (Quit: Leaving)
[18:05] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[18:06] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:06] * pdrakeweb (~pdrakeweb@oh-76-5-108-60.dhcp.embarqhsd.net) Quit (Read error: Connection reset by peer)
[18:07] * pdrakeweb (~pdrakeweb@oh-76-5-108-60.dhcp.embarqhsd.net) has joined #ceph
[18:07] * sudocat (~dibarra@192.185.1.20) Quit (Remote host closed the connection)
[18:08] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[18:08] * mashwo00 (~textual@51.179.162.234) Quit (Quit: Textual IRC Client: www.textualapp.com)
[18:10] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[18:11] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:12] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:13] * n0x1d (~Rens2Sea@7EXAAAHNH.tor-irc.dnsbl.oftc.net) Quit ()
[18:13] * Deiz (~Bromine@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[18:13] * sudocat (~dibarra@192.185.1.20) Quit (Read error: Connection reset by peer)
[18:13] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:14] * sankarshan (~sankarsha@121.244.87.117) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[18:16] * blizzow (~jburns@50.243.148.102) has joined #ceph
[18:16] <kmajk> Maybe someone will help me, I can't find in docs how to add more rgw instances in one ceph cluster (one zone) - new active / active jewel multisite setup?
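kmajk's question also goes unanswered here; a hedged sketch of the usual jewel multisite approach: run several radosgw daemons against the same zone, list all of them (or a load balancer in front of them) in the zone's endpoints, and commit the period. The hostnames, ports and zone name are placeholders.

    # ceph.conf: one section per gateway host
    [client.rgw.rgw1]
        host = rgw1
        rgw_frontends = "civetweb port=7480"
    [client.rgw.rgw2]
        host = rgw2
        rgw_frontends = "civetweb port=7480"

    # advertise both endpoints (or the VIP in front of them) on the zone
    radosgw-admin zone modify --rgw-zone=us-east \
        --endpoints=http://rgw1:7480,http://rgw2:7480
    radosgw-admin period update --commit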
[18:18] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Quit: Leaving)
[18:21] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[18:23] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys! (??????????????????? ?????????)
[18:24] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[18:26] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[18:27] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[18:31] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:32] <ceph-ircslackbot1> <vdb> What do you guys use to get an accurate value of rbd provisioned space? `ceph df` doesn't seem to work as well.
[18:35] * mykola (~Mikolaj@91.245.72.134) has joined #ceph
[18:35] * mgolub (~Mikolaj@91.245.72.134) has joined #ceph
[18:35] * mgolub (~Mikolaj@91.245.72.134) Quit (Remote host closed the connection)
[18:43] * Deiz (~Bromine@7EXAAAHOV.tor-irc.dnsbl.oftc.net) Quit ()
[18:43] * Diablodoct0r (~Xerati@212.7.192.148) has joined #ceph
[18:45] * kmajk (~kmajk@nat-hq.ext.getresponse.com) Quit (Ping timeout: 480 seconds)
[18:45] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[18:45] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[18:48] * danieagle (~Daniel@191.8.24.193) has joined #ceph
[18:49] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:50] <- *i_m* sea
[18:52] <TheSov> ceph-ircslackbot1, what do you mean it doesnt seem to work well?
[18:52] <TheSov> your ceph -s will tell you how much space you have
[18:53] * xarses (~xarses@172.56.15.177) has joined #ceph
[18:55] <ceph-ircslackbot1> <vdb> @TheSov: Talking about provisioned space not allocated.
[18:55] <ceph-ircslackbot1> <vdb> `ceph -s` and `ceph df` both show actual used space.
[18:55] <ceph-ircslackbot1> <vdb> `ceph df` is just an extension over `rados df` anyways.
[18:55] * praveen (~praveen@121.244.155.9) Quit (Remote host closed the connection)
[18:57] <TheSov> provisioned space is difficult because everything in ceph is thin provisioned
[18:57] <TheSov> but let me see what i can get you
[18:57] <ceph-ircslackbot1> <vdb> There's `rbd du`.
[18:58] <ceph-ircslackbot1> <vdb> Which was recently pushed out.
[18:58] <ceph-ircslackbot1> <vdb> But it does scan all the images.
[18:58] * Discovery (~Discovery@109.235.52.9) has joined #ceph
[18:58] <ceph-ircslackbot1> <vdb> Maybe that's the best way to do it?
[18:58] <ceph-ircslackbot1> <vdb> It's faster with fast-diff on, but we are not using that feature yet because of the issues it has.
[19:01] <TheSov> ok so u can script it
[19:01] * xarses (~xarses@172.56.15.177) Quit (Ping timeout: 480 seconds)
[19:02] <TheSov> use rbd -p yourrbdpool ls, take the output of that and put it in a while loop using this command rbd -p yourrbdpool info loopoutput
[19:02] <ceph-ircslackbot1> <vdb> Isn't `rbd du` easier if I go the scripting route?
[19:02] <TheSov> gives you this out for each image. rbd image 'vm-103-disk-1':
[19:02] <TheSov> size 1200 GB in 307200 objects
[19:02] <TheSov> order 22 (4096 kB objects)
[19:02] <TheSov> block_name_prefix: rbd_data.9d2402ae8944a
[19:02] <TheSov> format: 2
[19:02] <TheSov> features: layering
[19:02] <TheSov> flags:
[19:02] <ceph-ircslackbot1> <vdb> Don't need two commands anymore.
[19:03] <TheSov> let me see
[19:03] * kefu (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:03] <TheSov> hmmm
[19:03] <ceph-ircslackbot1> <vdb> But yeah I guess at the end of the day we do need to scan all the images since this value isn't stored in some counter anywhere.
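Both approaches mentioned above, as a minimal sketch; `rbd du` walks the pool itself, while the loop sums the provisioned sizes out of `rbd info`. The pool name 'rbd' is taken from the conversation.

    # per-image provisioned vs used space, plus a total
    rbd du -p rbd
    # or sum the provisioned sizes by hand
    for img in $(rbd -p rbd ls); do
        rbd -p rbd info "$img" | grep 'size'
    done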
[19:05] * squizzi_ (~squizzi@2001:420:2240:1268:a0b7:f4b7:490:2105) has joined #ceph
[19:07] * swami1 (~swami@27.7.165.149) has joined #ceph
[19:08] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[19:11] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:13] * dcwangmit01 (~dcwangmit@162-245.23-239.PUBLIC.monkeybrains.net) Quit (Quit: leaving)
[19:13] * dcwangmit01 (~dcwangmit@162-245.23-239.PUBLIC.monkeybrains.net) has joined #ceph
[19:13] * jargonmonk (jargonmonk@00022354.user.oftc.net) Quit (Quit: jargonmonk)
[19:13] * Diablodoct0r (~Xerati@61TAAAKCM.tor-irc.dnsbl.oftc.net) Quit ()
[19:13] * narthollis (~Zeis@65.19.167.130) has joined #ceph
[19:15] * jargonmonk (jargonmonk@00022354.user.oftc.net) has joined #ceph
[19:18] * dan__ (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[19:18] * jargonmonk (jargonmonk@00022354.user.oftc.net) Quit ()
[19:20] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:21] * jargonmonk (jargonmonk@00022354.user.oftc.net) has joined #ceph
[19:21] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[19:22] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[19:24] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[19:24] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[19:26] * krypto (~krypto@G68-121-13-32.sbcis.sbc.com) has joined #ceph
[19:31] * willi (~willi@p200300774E3708FC9D08C6E0F20E68B7.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[19:31] * willi (~willi@p200300774E3708FC9D08C6E0F20E68B7.dip0.t-ipconnect.de) has joined #ceph
[19:33] * jargonmonk (jargonmonk@00022354.user.oftc.net) has left #ceph
[19:34] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[19:36] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[19:37] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[19:39] * willi (~willi@p200300774E3708FC9D08C6E0F20E68B7.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[19:41] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[19:41] * ntpttr_ (~ntpttr@192.55.55.41) has joined #ceph
[19:42] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[19:43] <snelly> is it reasonable to run monitor daemons and metadata daemons on the same server?
[19:43] * narthollis (~Zeis@61TAAAKDW.tor-irc.dnsbl.oftc.net) Quit ()
[19:43] <snelly> (for production)
[19:43] * Hidendra (~Xa@chulak.enn.lu) has joined #ceph
[19:43] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[19:47] * shaunm (~shaunm@74.83.215.100) Quit (Ping timeout: 480 seconds)
[19:47] * praveen (~praveen@122.171.81.192) has joined #ceph
[19:50] * karnan (~karnan@106.51.143.240) Quit (Quit: Leaving)
[19:51] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:52] * reed (~reed@216.38.134.18) has joined #ceph
[19:54] * Uniqqqq (~oftc-webi@host86-130-221-20.range86-130.btcentralplus.com) Quit (Quit: Page closed)
[19:59] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[20:02] * georgem (~Adium@45.72.209.18) has joined #ceph
[20:04] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[20:07] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Ping timeout: 480 seconds)
[20:11] * swami1 (~swami@27.7.165.149) Quit (Quit: Leaving.)
[20:13] * Hidendra (~Xa@61TAAAKFI.tor-irc.dnsbl.oftc.net) Quit ()
[20:13] * Kottizen (~jacoo@178.162.216.42) has joined #ceph
[20:14] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[20:23] * mgolub (~Mikolaj@91.245.74.217) has joined #ceph
[20:27] <TheSov> no, keep your metadata server seperate
[20:27] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[20:28] * borei (~dan@216.13.217.230) has joined #ceph
[20:28] <borei> hi all
[20:29] * mykola (~Mikolaj@91.245.72.134) Quit (Ping timeout: 480 seconds)
[20:32] <borei> continue to learn ceph. Built my second cluster, it's more or less set up in a proper way now (not perfect yet, but better than the first one). need some heads up from community in regards to performance tuning. So what i have now - 2 nodes, 4 OSD per node (1TB, 7200 SATA, 1.5Gbps - pretty old gear), journals on 15k SAS, 3Gbps disks, 8G of RAM.
[20:32] <borei> cluster is running bunch of VMs
[20:32] * willi (~willi@p200300774E3708FC9D08C6E0F20E68B7.dip0.t-ipconnect.de) has joined #ceph
[20:32] <borei> i did a simple "dd" from within a VM, got less than 10Mb/s
[20:33] <borei> that is very low from any prospective
[20:33] <borei> oh, forgot, pool in replication mode, poole size = 2
[20:34] <borei> pool ^^^
[20:34] <borei> so the question is what critical cluster parameters should i consider first
[20:37] <TheSov> do you have journal disks?
[20:37] <borei> yep
[20:38] <borei> journals are not on the OSD
[20:38] <TheSov> ok
[20:38] <TheSov> your size=2
[20:38] <borei> OSD - sata 7.2k, journal - SAS, 15k
[20:38] <borei> size is 2, yes
[20:39] <TheSov> you have 1 gig networks?
[20:39] <borei> port bonded, 2x1G, LACP
[20:40] <TheSov> and the private link?
[20:40] * xarses (~xarses@64.124.158.100) has joined #ceph
[20:40] <borei> yes, dedicated link for ceph traffic
[20:40] <TheSov> well that all seems good.
[20:40] <borei> node has 6 NICs
[20:40] <TheSov> it seems your setup is slow
[20:41] <TheSov> i have honestly never seen a cluster with less than 3 osd hosts though
[20:41] <borei> 2 bonded for ceph, 2 bonded for user traffic (access to VMs), 2 bonded for mgmt, monitoring etc
[20:41] <TheSov> are your crush rules conflicting?
[20:41] <borei> node has 4 OSDs
[20:41] <TheSov> yes u said that
[20:41] <TheSov> im asking if your crush rules show that
[20:42] <TheSov> the default one is designed for size=3
[20:42] <borei> that the questions that im not familiar with yet
[20:42] <TheSov> oh i c
[20:43] * joshd1 (~jdurgin@2602:30a:c089:2b0:c1ed:ae1b:566c:f926) Quit (Quit: Leaving.)
[20:43] <borei> crush rules are all default, i read through docs, but for sure will be reading 10 more times
[20:43] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[20:43] * Kottizen (~jacoo@7EXAAAHT2.tor-irc.dnsbl.oftc.net) Quit ()
[20:45] * pdrakeweb (~pdrakeweb@oh-76-5-108-60.dhcp.embarqhsd.net) Quit (Remote host closed the connection)
[20:45] <borei> so need some direction where to start to look
[20:49] * derjohn_mobi (~aj@2001:6f8:1337:0:7589:a5d2:87d2:36e3) Quit (Ping timeout: 480 seconds)
[20:51] <TheSov> in all honesty I have never dealt with 2 nodes
[20:51] <TheSov> the minimum i have ever seen is 3
[20:52] <borei> well, i have a 3rd node on the way, it will be absolutely the same as the first 2
[20:52] <TheSov> i have a 3 node cluster i guess i should test the vm write speed
[20:53] <ceph-ircslackbot1> <scuttlemonkey> @leseb around?
[20:54] <TheSov> 230MBps
[20:56] <TheSov> borei
[20:56] <TheSov> borei, i responded to your pm. im getting the same rados bench as you are
[20:56] <TheSov> so ceph health wise, it looks good
[20:57] <TheSov> it must be the way your client is connecting or something
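The kind of check TheSov ran to separate cluster speed from client speed, as a minimal sketch; the pool name and run times are examples, and 'test-image' is a placeholder image name.

    # write then read benchmark against the pool backing the VMs
    rados bench -p rbd 30 write --no-cleanup
    rados bench -p rbd 30 seq
    rados -p rbd cleanup
    # and from a client host, an rbd-level write test
    rbd bench-write test-image --pool=rbd --io-total 1G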
[20:57] <borei> there is one thing i noticed, and it can be related
[20:57] <borei> i have one monitor dead
[20:57] <TheSov> that will hurt you
[20:58] <borei> but it's in the configuration
[20:58] <TheSov> if you only have 2 monitors
[20:58] <TheSov> and 1 is down
[20:58] <TheSov> how does the cluster know its not a split brain
[20:58] <borei> no no, 3 total, 1 down
[20:58] <borei> so 2 alive
[20:59] <TheSov> ok
[20:59] <TheSov> that should be fine
[21:00] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[21:02] <TheSov> so for some reason my "RBD" pool has 880 objects in it and no images
[21:02] <TheSov> s
[21:02] <TheSov> so im trying to figure out how to delete those orphaned objects
[21:04] <Kvisle> with the rados command?
[21:04] <TheSov> i tried
[21:04] <TheSov> rados -p rbd cleanup --prefix *
[21:04] <TheSov> says it removed 0 objects
[21:05] <TheSov> yet ceph df shows 882 objects
[21:05] <TheSov> sorry 889
[21:05] <TheSov> root@ceph-1:/home/user# ceph df
[21:05] <TheSov> GLOBAL:
[21:05] <TheSov> SIZE AVAIL RAW USED %RAW USED
[21:05] <TheSov> 8379G 7584G 794G 9.49
[21:05] <TheSov> POOLS:
[21:05] <TheSov> NAME ID USED %USED MAX AVAIL OBJECTS
[21:05] <TheSov> rbd 0 3552M 0.12 2484G 889
[21:05] <TheSov> pmx-storage 1 52499M 1.84 2484G 14519
[21:05] <TheSov> pmx-dual 2 315G 7.52 3727G 87170
[21:05] <TheSov> root@ceph-1:/home/user#
[21:05] <TheSov> .
[21:06] <TheSov> there are no images in RBD
[21:07] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[21:08] * haplo37 (~haplo37@199.91.185.156) Quit (Read error: Connection reset by peer)
[21:09] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[21:09] <Kvisle> TheSov: cleanup is for removing benchmark data
[21:09] <Kvisle> rados -p rbd ls
[21:09] <TheSov> i know
[21:09] <TheSov> thats all ive ever done to the RBD pool
[21:10] <TheSov> benchmark_data_ceph-1_13061_object834
[21:10] <TheSov> benchmark_data_ceph-1_13061_object405
[21:10] <TheSov> benchmark_data_ceph-1_13061_object127
[21:10] <TheSov> benchmark_data_ceph-1_13061_object506
[21:10] <TheSov> benchmark_data_ceph-1_13061_object447
[21:10] <TheSov> benchmark_data_ceph-1_13061_object463
[21:10] <TheSov> benchmark_data_ceph-1_13061_object66.
[21:10] <TheSov> see its all benchmark
[21:10] <Kvisle> wait, you ran --prefix * ... you know what * does, right?
[21:10] <TheSov> it means everything
[21:10] <Kvisle> type echo *
[21:11] <Kvisle> it means _THAT_
[21:11] <TheSov> the list of all files inside?
[21:11] <TheSov> ok
[21:11] <Kvisle> the shell will glob * to match all the files in the current working directory
[21:11] <TheSov> ok
[21:11] <TheSov> i misunderstood how rados interprets that
[21:12] <Kvisle> rados never gets the *
[21:12] <Kvisle> this is enforced by your shell
[21:12] <TheSov> i see
[21:12] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[21:12] <TheSov> well that did it
[21:12] <TheSov> i did the rm with --prefix benchmark
[21:13] <Kvisle> :)
[21:13] <TheSov> removed all but 1 object
[21:13] <TheSov> rbd_directory
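The quoting problem from the exchange above, as a minimal sketch; an unquoted * is expanded by the shell before rados ever sees it, so the prefix has to be given literally.

    # wrong: the shell globs * against files in the current directory
    #   rados -p rbd cleanup --prefix *
    # right: pass the literal prefix of the leftover benchmark objects
    rados -p rbd cleanup --prefix benchmark_data
    # confirm what is left
    rados -p rbd ls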
[21:13] * willi (~willi@p200300774E3708FC9D08C6E0F20E68B7.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[21:18] * hifi2 (~Random@7EXAAAHWO.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:20] * ntpttr_ (~ntpttr@192.55.55.41) Quit (Remote host closed the connection)
[21:27] * derjohn_mobi (~aj@x590c247c.dyn.telefonica.de) has joined #ceph
[21:29] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[21:33] * jermudgeon_ (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[21:38] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Ping timeout: 480 seconds)
[21:38] * jermudgeon_ is now known as jermudgeon
[21:44] * squizzi (~squizzi@107.13.31.195) Quit (Quit: bye)
[21:48] * hifi2 (~Random@7EXAAAHWO.tor-irc.dnsbl.oftc.net) Quit ()
[21:48] * KungFuHamster (~notarima@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[21:50] * rendar (~I@host83-178-dynamic.251-95-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:50] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[21:54] * moon (~moon@217-19-26-201.dsl.cambrium.nl) has joined #ceph
[22:01] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[22:03] * moon (~moon@217-19-26-201.dsl.cambrium.nl) Quit (Ping timeout: 480 seconds)
[22:09] * sudocat (~dibarra@192.185.1.20) has left #ceph
[22:10] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[22:16] * rendar (~I@host83-178-dynamic.251-95-r.retail.telecomitalia.it) has joined #ceph
[22:18] * KungFuHamster (~notarima@7EXAAAHX0.tor-irc.dnsbl.oftc.net) Quit ()
[22:18] * Vale (~clarjon1@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[22:18] * moon (~moon@217-19-26-201.dsl.cambrium.nl) has joined #ceph
[22:19] * ircolle (~Adium@mobile-166-171-057-141.mycingular.net) has joined #ceph
[22:19] * mgolub (~Mikolaj@91.245.74.217) Quit (Quit: away)
[22:22] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[22:29] * moon (~moon@217-19-26-201.dsl.cambrium.nl) Quit (Ping timeout: 480 seconds)
[22:33] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Ping timeout: 480 seconds)
[22:33] * Loopie (~Johnny@12.124.18.126) has joined #ceph
[22:38] <Loopie> Where can I get info about the "test lab channel"? Topics, Auth, etc..
[22:38] <Loopie> Is the channel more geared about Jewel, or something else?
[22:38] <zdzichu> /join #sepia
[22:44] * ntpttr_ (~ntpttr@134.134.139.83) has joined #ceph
[22:48] * Vale (~clarjon1@7EXAAAHZB.tor-irc.dnsbl.oftc.net) Quit ()
[22:48] * dontron (~luigiman@tor-exit.talyn.se) has joined #ceph
[22:48] * danieagle (~Daniel@191.8.24.193) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[22:50] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[22:50] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) Quit (Quit: Leaving)
[22:51] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[22:51] * penguinRaider (~KiKo@204.152.207.173) Quit (Ping timeout: 480 seconds)
[22:54] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) has joined #ceph
[22:58] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[23:02] * georgem (~Adium@45.72.209.18) Quit (Quit: Leaving.)
[23:04] * penguinRaider (~KiKo@204.152.207.173) has joined #ceph
[23:06] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Remote host closed the connection)
[23:11] * scg (~zscg@valis.gnu.org) Quit (Ping timeout: 480 seconds)
[23:14] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[23:15] * ntpttr_ (~ntpttr@134.134.139.83) Quit (Quit: Leaving)
[23:18] * dontron (~luigiman@61TAAAKNK.tor-irc.dnsbl.oftc.net) Quit ()
[23:18] * dug (~Qiasfah@tor-exit-4.all.de) has joined #ceph
[23:19] * FierceForm (~Rosenblut@kunstler.tor-exit.calyxinstitute.org) has joined #ceph
[23:19] * dug (~Qiasfah@9YSAAALY9.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[23:22] * ircolle (~Adium@mobile-166-171-057-141.mycingular.net) Quit (Quit: Leaving.)
[23:23] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[23:26] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:27] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[23:27] * lcurtis (~lcurtis@47.19.105.250) Quit ()
[23:28] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[23:33] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[23:34] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[23:41] * MentalRay (~MentalRay@modemcable082.255-70-69.static.videotron.ca) has joined #ceph
[23:47] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[23:47] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[23:49] * FierceForm (~Rosenblut@7EXAAAH1D.tor-irc.dnsbl.oftc.net) Quit ()
[23:52] * moon (~moon@217-19-26-201.dsl.cambrium.nl) has joined #ceph
[23:52] * lcurtis_ (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[23:53] * MentalRay (~MentalRay@modemcable082.255-70-69.static.videotron.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[23:56] * jargonmonk (jargonmonk@00022354.user.oftc.net) has joined #ceph
[23:57] * jargonmonk (jargonmonk@00022354.user.oftc.net) Quit ()
[23:59] * jargonmonk (jargonmonk@00022354.user.oftc.net) has joined #ceph
[23:59] * Loopie (~Johnny@12.124.18.126) Quit (Quit: Leaving)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.