#ceph IRC Log

Index

IRC Log for 2016-07-08

Timestamps are in GMT/BST.

[0:07] * wes_dillingham (~wes_dilli@mobile-166-186-168-87.mycingular.net) has joined #ceph
[0:09] * wes_dillingham (~wes_dilli@mobile-166-186-168-87.mycingular.net) Quit ()
[0:09] <badone> acctor: what filesystem would be on the volume?
[0:09] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[0:09] <acctor> badone: ext4
[0:10] <badone> acctor: then you can't mount it in two places at once
[0:10] <acctor> badone: correct, I am trying to guarantee that it can't happen by using exclusive-lock
[0:10] <acctor> on investigation it seems like exclusive-lock doesn't guarantee that
[0:10] <badone> acctor: ah, then I'm sorry, misunderstood your question
[0:11] <acctor> badone: np, thanks for taking a look
[0:11] <badone> welcome
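For reference, a minimal sketch of the commands involved here, with pool "rbd" and image "vol1" as example names. Exclusive-lock only arbitrates which librbd client may write at a given moment and transitions automatically, while rbd advisory locks have to be checked by the callers themselves, so neither is a hard fence against mounting the same ext4 image twice on its own:

    # inspect the image and its feature flags
    rbd info rbd/vol1
    # enable exclusive-lock on an image created without it
    rbd feature enable rbd/vol1 exclusive-lock
    # advisory locking: cooperating clients check the lock before mapping/mounting
    rbd lock add rbd/vol1 host-a
    rbd lock ls rbd/vol1
    rbd lock remove rbd/vol1 host-a <locker>   # locker id comes from "rbd lock ls"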
[0:12] * hellertime1 (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[0:17] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:18] * jargonmonk (~jmnk@123.201.163.12) Quit (Read error: Connection reset by peer)
[0:18] * arcimboldo (~antonio@84-75-174-248.dclient.hispeed.ch) Quit (Quit: Ex-Chat)
[0:18] * jargonmonk (~jmnk@123.201.163.12) has joined #ceph
[0:19] * squizzi (~squizzi@107.13.31.195) Quit (Quit: bye)
[0:20] * jargmonk (~jmnk@175.100.140.104) has joined #ceph
[0:22] * oliveiradan (~doliveira@137.65.133.10) Quit (Remote host closed the connection)
[0:24] <jiffe> so I am looking at http://ceph.com/planet/admin-guide-replacing-a-failed-disk-in-a-ceph-cluster/
[0:25] <jiffe> I am wondering why we need to completely remove the osd from ceph and recreate it when just replacing a disk
[0:25] <jiffe> Can't I just replace the disk, recreate the fs and restart the osd in hopes of it rebuilding from a replica?
[0:26] * jargonmonk (~jmnk@123.201.163.12) Quit (Ping timeout: 480 seconds)
[0:26] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[0:27] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[0:29] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph
[0:30] * rendar (~I@host225-177-dynamic.20-87-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:36] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:36] * linuxkidd (~linuxkidd@mobile-166-176-56-147.mycingular.net) has joined #ceph
[0:36] * linuxkidd (~linuxkidd@mobile-166-176-56-147.mycingular.net) Quit ()
[0:37] * linuxkidd (~linuxkidd@mobile-166-176-56-147.mycingular.net) has joined #ceph
[0:38] * EinstCrazy (~EinstCraz@61.165.228.81) Quit (Remote host closed the connection)
[0:40] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[0:40] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit ()
[0:42] <Brochacho> jiffe: IIRC there's a page that explicitly says why not to mount disks on your own but I might be remembering wrong
[0:43] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[0:43] <jiffe> mount disks on my own?
[0:43] <jiffe> something else mounts disks?
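For reference, the remove-and-recreate flow the linked article describes comes down to a handful of commands; a sketch assuming the failed OSD has id 11 and a systemd-managed host, with the host and device names as examples:

    ceph osd out 11
    systemctl stop ceph-osd@11            # or "service ceph stop osd.11" on sysvinit hosts
    ceph osd crush remove osd.11
    ceph auth del osd.11
    ceph osd rm 11
    # after swapping the disk, recreate the OSD, e.g. with ceph-deploy
    ceph-deploy osd create cephnode1:/dev/sdd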
[0:44] * Geph (~Geoffrey@169-1-168-102.ip.afrihost.co.za) Quit (Ping timeout: 480 seconds)
[0:46] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[0:47] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[0:55] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[0:56] * noooxqe (~noooxqe@1.ip-51-255-167.eu) Quit (Quit: ZNC - http://znc.in)
[1:00] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[1:00] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[1:05] * jhp (~jhp@2a02:b70:0:1:223:14ff:fe4e:6060) Quit (Quit: Leaving)
[1:08] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[1:11] * davidzlap (~Adium@2605:e000:1313:8003:9d9:8891:c364:71ae) Quit (Quit: Leaving.)
[1:13] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:80fe:bd46:5d83:1f20) has joined #ceph
[1:14] * linjan_ (~linjan@176.195.85.190) has joined #ceph
[1:14] * toMeloos (~toMeloos@53568B3D.cm-6-7c.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[1:17] * raeven (~raeven@h89n10-oes-a31.ias.bredband.telia.com) Quit (Read error: Connection reset by peer)
[1:19] * tallest_red (~GuntherDW@46.166.190.196) has joined #ceph
[1:20] * Long_yanG (~long@15255.s.time4vps.eu) Quit (Remote host closed the connection)
[1:20] * LongyanG (~long@15255.s.t4vps.eu) has joined #ceph
[1:21] * linjan (~linjan@176.195.184.236) Quit (Ping timeout: 480 seconds)
[1:38] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:39] * kuku (~kuku@119.93.91.136) has joined #ceph
[1:47] * Brochacho (~alberto@c-50-141-135-98.hsd1.il.comcast.net) Quit (Quit: Brochacho)
[1:49] * tallest_red (~GuntherDW@9YSAAAEEC.tor-irc.dnsbl.oftc.net) Quit ()
[1:50] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[1:50] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[1:51] * ffilz (~ffilz@76.115.190.27) Quit (Quit: Leaving)
[1:51] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[1:52] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:53] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:80fe:bd46:5d83:1f20) Quit (Ping timeout: 480 seconds)
[2:00] * wushudoin (~wushudoin@38.99.12.237) Quit (Ping timeout: 480 seconds)
[2:03] * BrianA (~BrianA@c-24-130-77-13.hsd1.ca.comcast.net) has joined #ceph
[2:09] * BrianA (~BrianA@c-24-130-77-13.hsd1.ca.comcast.net) has left #ceph
[2:17] * Concubidated (~cube@nat-pool-nrt-t1.redhat.com) has joined #ceph
[2:18] * pdrakewe_ (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) has joined #ceph
[2:18] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) Quit (Read error: Connection reset by peer)
[2:20] * borei (~dan@216.13.217.230) Quit (Ping timeout: 480 seconds)
[2:23] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[2:24] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has left #ceph
[2:27] * neurodrone__ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[2:28] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Quit: Leaving.)
[2:30] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Read error: Connection reset by peer)
[2:30] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[2:32] * scg (~zscg@146-115-134-246.c3-0.nwt-ubr1.sbo-nwt.ma.cable.rcn.com) has joined #ceph
[2:32] * scg (~zscg@146-115-134-246.c3-0.nwt-ubr1.sbo-nwt.ma.cable.rcn.com) Quit ()
[2:32] * BrianA (~BrianA@c-24-130-77-13.hsd1.ca.comcast.net) has joined #ceph
[2:33] * vata (~vata@cable-173.246.3-246.ebox.ca) has joined #ceph
[2:35] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[2:35] * neurodrone__ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Read error: Connection reset by peer)
[2:39] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Read error: Connection reset by peer)
[2:39] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[2:39] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit ()
[2:44] * BrianA (~BrianA@c-24-130-77-13.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[2:44] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[2:46] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Ping timeout: 480 seconds)
[2:48] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[3:03] * Iulian (~chatzilla@188.27.7.29) has joined #ceph
[3:08] <Iulian> Trying to choose the persistent storage for Openshift: https://docs.openshift.org/latest/architecture/additional_concepts/storage.html
[3:08] <Iulian> Ceph versus GlusterFS. Which performs better with databases (MySQL, PostgreSQL)? From this I would go for Ceph http://www.slideshare.net/Red_Hat_Storage/my-sql-and-ceph-headtohead-performance-lab .
[3:08] <Iulian> Is block storage (Ceph) better than file storage (Gluster) for databases? The database criterion is very important, since this is usually the bottleneck of web apps. Thanks
[3:10] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[3:10] * swami1 (~swami@27.7.169.81) has joined #ceph
[3:14] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[3:20] * sage__ (~quassel@64.111.99.127) has joined #ceph
[3:23] * sage (~quassel@2607:f298:5:101d:f816:3eff:fe21:1966) Quit (Ping timeout: 480 seconds)
[3:23] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[3:27] * Iulian (~chatzilla@188.27.7.29) Quit (Quit: ChatZilla 0.9.92 [Firefox 44.0.2/20160210153822])
[3:28] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[3:29] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[3:31] * bjornar_ (~bjornar@ti0099a430-0410.bb.online.no) Quit (Ping timeout: 480 seconds)
[3:33] * yanzheng (~zhyan@125.70.20.240) has joined #ceph
[3:41] * sebastian-w_ (~quassel@212.218.8.138) has joined #ceph
[3:41] * sebastian-w (~quassel@212.218.8.139) Quit (Read error: Connection reset by peer)
[3:44] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[3:45] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[3:55] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[3:56] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit ()
[3:57] * derjohn_mobi (~aj@x590c63d4.dyn.telefonica.de) has joined #ceph
[4:00] * wjw-freebsd3 (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[4:02] * MentalRay (~MentalRay@LPRRPQ1401W-LP130-02-1242363207.dsl.bell.ca) has joined #ceph
[4:04] * flisky (~Thunderbi@106.38.61.186) has joined #ceph
[4:04] * aj__ (~aj@x4db12279.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[4:15] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[4:15] * hellertime (~Adium@pool-71-162-119-41.bstnma.fios.verizon.net) has joined #ceph
[4:22] * acctor (~acctor@208.46.223.218) Quit (Ping timeout: 480 seconds)
[4:27] * kefu (~kefu@183.193.36.182) has joined #ceph
[4:30] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[4:43] * Shnaw (~Jamana@213.61.149.100) has joined #ceph
[4:46] * Nats_ (~natscogs@114.31.195.238) Quit (Read error: Connection reset by peer)
[4:48] * Nats (~natscogs@114.31.195.238) has joined #ceph
[4:49] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[4:50] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[4:55] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[5:13] * Shnaw (~Jamana@61TAAAB9Z.tor-irc.dnsbl.oftc.net) Quit ()
[5:15] * flisky (~Thunderbi@106.38.61.186) Quit (Quit: flisky)
[5:15] * kuku (~kuku@119.93.91.136) has joined #ceph
[5:19] * shaunm (~shaunm@74.83.215.100) Quit (Ping timeout: 480 seconds)
[5:22] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[5:27] <hellertime> ok trying to use monmaptool to remove a mon that exists in the monmap (according to --print) ... but when I run `monmaptool --rm ` it tells me the mon name doesn't exist...
[5:28] <hellertime> any other way to edit a monmap?
[5:28] <hellertime> I'm using the mon name, but the docs imply I should use IP:PORT, but that doesn't work either
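For reference, a sketch of the usual monmap editing flow; the mon ids "mon-a"/"mon-b" and the paths are examples, and a mon's local map can only be extracted or injected while that mon is stopped:

    # grab the current map, from the cluster or from a stopped mon's store
    ceph mon getmap -o /tmp/monmap
    #   or: ceph-mon -i mon-a --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap
    # remove an entry by mon name, then push the edited map back into the mon
    monmaptool --rm mon-b /tmp/monmap
    ceph-mon -i mon-a --inject-monmap /tmp/monmap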
[5:30] * kuku (~kuku@119.93.91.136) has joined #ceph
[5:33] * kefu_ (~kefu@114.92.118.31) has joined #ceph
[5:34] * jargmonk (~jmnk@175.100.140.104) Quit (Quit: Leaving)
[5:35] * Mika_c (~Mika@36-227-34-2.dynamic-ip.hinet.net) has joined #ceph
[5:37] * kefu (~kefu@183.193.36.182) Quit (Ping timeout: 480 seconds)
[5:37] * Vacuum__ (~Vacuum@88.130.222.29) has joined #ceph
[5:44] * Vacuum_ (~Vacuum@88.130.203.106) Quit (Ping timeout: 480 seconds)
[5:45] * jargonmonk (~jmnk@175.100.140.104) has joined #ceph
[5:45] * swami1 (~swami@27.7.169.81) Quit (Quit: Leaving.)
[5:58] * [0x4A6F]_ (~ident@p4FC27282.dip0.t-ipconnect.de) has joined #ceph
[6:00] <kuku> hi good morning. Just reinstalled my ceph but for some reason I get this error whenever I try to run a ceph command: 0 librados: client.admin authentication error (1) Operation not permitted
[6:01] <kuku> But I can specify the keyring and avoid that problem. Is there a step that I must do so I don't need to specify the keyring every time I run a ceph command?
[6:01] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:01] * [0x4A6F]_ is now known as [0x4A6F]
[6:04] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[6:04] * linuxkidd (~linuxkidd@mobile-166-176-56-147.mycingular.net) Quit (Ping timeout: 480 seconds)
[6:05] <Mika_c> kuku, Try sudo or just execute 'sudo chmod +r /etc/ceph/[your-keyring]'
[6:07] <kuku> okay thanks! it works. :D
[6:07] * kefu_ is now known as kefu
[6:13] * ktdreyer (~kdreyer@polyp.adiemus.org) Quit (Remote host closed the connection)
[6:13] * ktdreyer (~kdreyer@polyp.adiemus.org) has joined #ceph
[6:14] * jamespag` (~jamespage@culvain.gromper.net) Quit (Ping timeout: 480 seconds)
[6:15] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:25] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[6:27] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[6:29] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[6:29] * kuku (~kuku@119.93.91.136) Quit (Read error: Connection reset by peer)
[6:30] * kuku (~kuku@119.93.91.136) has joined #ceph
[6:33] * rinek (~o@62.109.134.112) Quit (Quit: ~)
[6:36] * hellertime (~Adium@pool-71-162-119-41.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[6:37] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[6:39] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[6:40] * rinek (~o@62.109.134.112) has joined #ceph
[6:42] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[6:43] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[6:47] * rinek (~o@62.109.134.112) Quit (Quit: ~)
[6:48] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[6:53] * MentalRay (~MentalRay@LPRRPQ1401W-LP130-02-1242363207.dsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:54] * MentalRay (~MentalRay@LPRRPQ1401W-LP130-02-1242363207.dsl.bell.ca) has joined #ceph
[6:55] * kuku (~kuku@119.93.91.136) Quit (Ping timeout: 480 seconds)
[6:55] * swami1 (~swami@49.38.0.153) has joined #ceph
[6:57] * kuku (~kuku@119.93.91.136) has joined #ceph
[6:59] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[7:01] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[7:03] * flisky (~Thunderbi@210.12.157.88) has joined #ceph
[7:06] * vimal (~vikumar@121.244.87.116) has joined #ceph
[7:19] * bvi (~Bastiaan@185.56.32.1) has joined #ceph
[7:29] * shylesh (~shylesh@45.124.227.235) has joined #ceph
[7:29] * MentalRay (~MentalRay@LPRRPQ1401W-LP130-02-1242363207.dsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:30] * karnan (~karnan@106.51.138.205) has joined #ceph
[7:30] * flisky (~Thunderbi@210.12.157.88) Quit (Quit: flisky)
[7:31] * takarider (~takarider@KD175108208098.ppp-bb.dion.ne.jp) has joined #ceph
[7:32] * takarider (~takarider@KD175108208098.ppp-bb.dion.ne.jp) Quit ()
[7:34] * ade (~abradshaw@dslb-178-008-040-187.178.008.pools.vodafone-ip.de) has joined #ceph
[7:36] * reed (~reed@50-1-125-26.dsl.dynamic.fusionbroadband.com) Quit (Quit: Ex-Chat)
[7:42] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[7:44] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[7:45] <[arx]> prob don't want your admin keyring to be world readable
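Following on from that caveat, a sketch of a less permissive setup than a world-readable admin keyring: keep client.admin root-only and create a lower-privilege key for routine use. The client.ops name, caps and paths are examples:

    sudo chmod 600 /etc/ceph/ceph.client.admin.keyring
    sudo ceph -s                                              # admin work stays behind sudo
    sudo ceph auth get-or-create client.ops mon 'allow r' osd 'allow r' \
        -o /etc/ceph/ceph.client.ops.keyring
    sudo chmod 640 /etc/ceph/ceph.client.ops.keyring
    ceph -s --id ops --keyring /etc/ceph/ceph.client.ops.keyring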
[7:45] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[7:48] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[7:49] * markl (~mark@knm.org) Quit (Ping timeout: 480 seconds)
[7:57] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[7:57] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[8:01] * flisky (~Thunderbi@106.38.61.188) has joined #ceph
[8:08] * rendar (~I@87.18.177.238) has joined #ceph
[8:10] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[8:10] * dlan (~dennis@116.228.88.131) Quit (Ping timeout: 480 seconds)
[8:11] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[8:18] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:20] * Mika_c (~Mika@36-227-34-2.dynamic-ip.hinet.net) Quit (Quit: Leaving)
[8:21] * jargmonk (~jmnk@203.187.205.38) has joined #ceph
[8:27] * jargonmonk (~jmnk@175.100.140.104) Quit (Ping timeout: 480 seconds)
[8:30] * efirs1 (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[8:30] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[8:34] * swami1 (~swami@49.38.0.153) Quit (Read error: Connection reset by peer)
[8:37] * Geph (~Geoffrey@41.77.153.99) has joined #ceph
[8:41] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[8:44] * rinek (~o@62.109.134.112) has joined #ceph
[8:48] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[8:49] * kefu (~kefu@114.92.118.31) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[8:50] * kefu (~kefu@45.32.49.168) has joined #ceph
[8:55] <IvanJobs> Strange problem, has anyone used fio with the rbd engine? I used fio with the rbd engine to test my ceph cluster and got ~6K IOPS, but my colleague tested the same cluster from an openstack vm (fio with libaio inside the vm) and got ~30K IOPS with rbd_cache=true.
[8:55] <IvanJobs> Why?
[8:56] <IvanJobs> I guess fio cannot take advantage of the rbd cache. Just here to make sure of this, thx in advance.
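For what it's worth, a minimal fio job for the rbd ioengine, assuming a pool named rbd, a pre-created image named fio-test and the client.admin user; fio's rbd engine goes through librbd, so whether caching is in play still depends on the rbd cache settings in the ceph.conf it reads, which is one reason its numbers can differ from an in-VM libaio run:

    cat > rbd-randwrite.fio <<'EOF'
    [rbd-4k-randwrite]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio-test
    rw=randwrite
    bs=4k
    iodepth=32
    direct=1
    time_based=1
    runtime=60
    EOF
    fio rbd-randwrite.fio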
[8:57] * praveen (~praveen@122.167.138.108) Quit (Remote host closed the connection)
[8:59] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[9:00] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[9:03] * acctor (~acctor@c-73-170-8-35.hsd1.ca.comcast.net) has joined #ceph
[9:05] * vikhyat (~vumrao@121.244.87.117) has joined #ceph
[9:07] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) has joined #ceph
[9:08] * yanzheng (~zhyan@125.70.20.240) Quit (Ping timeout: 480 seconds)
[9:08] * yanzheng (~zhyan@125.70.20.240) has joined #ceph
[9:12] * analbeard (~shw@support.memset.com) has joined #ceph
[9:16] * rinek (~o@62.109.134.112) Quit (Quit: ~)
[9:21] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[9:22] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[9:23] * flisky (~Thunderbi@106.38.61.188) Quit (Remote host closed the connection)
[9:23] * flisky (~Thunderbi@210.12.157.93) has joined #ceph
[9:25] * rinek (~o@62.109.134.112) has joined #ceph
[9:30] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[9:32] * kefu (~kefu@45.32.49.168) Quit (Ping timeout: 480 seconds)
[9:34] * kefu (~kefu@45.32.49.168) has joined #ceph
[9:38] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[9:39] * alexxy (~alexxy@biod.pnpi.spb.ru) has joined #ceph
[9:42] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[9:42] * praveen (~praveen@121.244.155.10) has joined #ceph
[9:42] * wjw-freebsd3 (~wjw@smtp.digiware.nl) has joined #ceph
[9:44] * stein (~stein@185.56.185.82) Quit (Remote host closed the connection)
[9:44] * stein (~stein@185.56.185.82) has joined #ceph
[9:47] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[9:50] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Ping timeout: 480 seconds)
[9:50] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:52] * Concubidated (~cube@nat-pool-nrt-t1.redhat.com) Quit (Quit: Leaving.)
[9:53] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[9:55] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[9:56] * kefu_ (~kefu@114.92.118.31) has joined #ceph
[9:57] * bjornar_ (~bjornar@ti0099a430-0410.bb.online.no) has joined #ceph
[9:58] * kefu (~kefu@45.32.49.168) Quit (Ping timeout: 480 seconds)
[9:59] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[10:00] <Hatsjoe> TheSov2 and koszik, the speed/performance issues we discussed a couple days ago, turns out the culprit was a faulty SAS controller causing bad performance on one node, which in turn caused the whole cluster to perform badly, guess this is because I only have 3 nodes, and a pool with a size of 3
[10:00] <Hatsjoe> But your help helped me a lot in understanding on how to troubleshoot ceph issues :)
[10:01] <Hatsjoe> Thanks for that
[10:02] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[10:02] * hybrid512 (~walid@195.200.189.206) has joined #ceph
[10:03] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[10:04] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:11] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[10:12] * DanFoster (~Daniel@2a00:1ee0:3:1337:d1f6:5f84:e698:b9f9) has joined #ceph
[10:15] * praveen_ (~praveen@121.244.155.10) has joined #ceph
[10:15] * praveen (~praveen@121.244.155.10) Quit (Read error: Connection reset by peer)
[10:17] * Concubidated (~cube@122.103.163.63.ap.gmobb-fix.jp) has joined #ceph
[10:23] * acctor (~acctor@c-73-170-8-35.hsd1.ca.comcast.net) Quit (Quit: acctor)
[10:27] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[10:27] * toMeloos (~toMeloos@53568B3D.cm-6-7c.dynamic.ziggo.nl) has joined #ceph
[10:27] * ggarg_away is now known as ggarg
[10:27] * branto (~branto@178-253-144-120.3pp.slovanet.sk) has joined #ceph
[10:29] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[10:36] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[10:39] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[10:40] * Concubidated (~cube@122.103.163.63.ap.gmobb-fix.jp) Quit (Ping timeout: 480 seconds)
[10:41] * derjohn_mobi (~aj@x590c63d4.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[10:42] * Concubidated (~cube@86.223.137.133.rev.iijmobile.jp) has joined #ceph
[10:43] * vata (~vata@cable-173.246.3-246.ebox.ca) Quit (Ping timeout: 480 seconds)
[10:45] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[10:47] * penguinRaider (~KiKo@14.139.82.6) has joined #ceph
[10:50] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[10:54] * vata (~vata@cable-173.246.3-246.ebox.ca) has joined #ceph
[10:58] * Tenk (~Helleshin@4PRAAAB1V.tor-irc.dnsbl.oftc.net) has joined #ceph
[10:58] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:80fe:bd46:5d83:1f20) has joined #ceph
[11:00] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[11:01] * bjornar_ (~bjornar@ti0099a430-0410.bb.online.no) Quit (Read error: No route to host)
[11:04] * kefu_ is now known as kefu
[11:13] * rdias (~rdias@2001:8a0:749a:d01:d145:cbdc:263b:d53b) Quit (Ping timeout: 480 seconds)
[11:14] * danardelean (~dan@82.78.203.62) has joined #ceph
[11:18] * rdias (~rdias@2001:8a0:749a:d01:503e:db9d:4f7b:d053) has joined #ceph
[11:21] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Quit: leaving)
[11:21] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[11:22] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit ()
[11:22] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[11:23] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit ()
[11:23] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[11:24] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit ()
[11:25] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[11:25] * derjohn_mob (~aj@2001:6f8:1337:0:980c:12d0:3c4:faab) has joined #ceph
[11:26] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit ()
[11:26] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[11:28] * TMM (~hp@185.5.121.201) has joined #ceph
[11:28] * Tenk (~Helleshin@4PRAAAB1V.tor-irc.dnsbl.oftc.net) Quit ()
[11:28] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit ()
[11:30] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[11:32] * nils_____ (~nils_@82.149.255.29) has joined #ceph
[11:33] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[11:37] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:37] * nils_ (~nils_@doomstreet.collins.kg) Quit (Ping timeout: 480 seconds)
[11:38] * nils_____ (~nils_@82.149.255.29) Quit (Quit: This computer has gone to sleep)
[11:38] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[11:38] * danardelean (~dan@82.78.203.62) has left #ceph
[11:38] * nils_____ (~nils_@82.149.255.29) has joined #ceph
[11:38] * danardelean (~dan@82.78.203.62) has joined #ceph
[11:42] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[11:46] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:49] * liumxnl_ (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[11:49] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[11:50] * Behedwin (~DougalJac@108.61.210.150) has joined #ceph
[11:58] <The_Ball> I'm replacing some OSDs in my home cluster for bigger drives. To minimise rebuilding, is it best to reweight the old OSD to zero, wait for balancing to finish, add new OSD, wait for balancing to finish, then remove old OSD. Or is it better/quicker to just out the old working OSD without "emptying" it, and add the new OSD before the rebalance is finished?
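For reference, the "empty it first" variant described above is usually done with a crush reweight to zero, which migrates data off while the OSD keeps serving; osd.7 is an example id:

    ceph osd crush reweight osd.7 0    # start draining the old OSD
    ceph -s                            # wait until backfill finishes and health is OK
    # then out/stop/remove osd.7 as usual and add the new, larger drive as a fresh OSD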
[12:00] * Concubidated (~cube@86.223.137.133.rev.iijmobile.jp) Quit (Quit: Leaving.)
[12:00] * MrBy2 (~MrBy@85.115.23.2) Quit (Remote host closed the connection)
[12:02] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Quit: leaving)
[12:02] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[12:03] * liumxnl_ (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[12:03] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[12:03] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[12:05] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[12:06] * penguinRaider (~KiKo@14.139.82.6) Quit (Ping timeout: 480 seconds)
[12:07] * nils_____ (~nils_@82.149.255.29) Quit (Quit: This computer has gone to sleep)
[12:09] * nils_____ (~nils_@doomstreet.collins.kg) has joined #ceph
[12:13] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[12:14] * irq0 (~seri@amy.irq0.org) Quit (Quit: WeeChat 1.4)
[12:15] * dlan (~dennis@116.228.88.131) has joined #ceph
[12:16] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[12:20] * Behedwin (~DougalJac@108.61.210.150) Quit ()
[12:23] * irq0 (~seri@amy.irq0.org) has joined #ceph
[12:24] * kefu (~kefu@114.92.118.31) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:30] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[12:30] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[12:30] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[12:31] <TMM> hey, I was wondering, is it possible to simulate the placement strategies selected by a particular crushmap? I have a crushmap that is supposed to place data at either end of a 'pod' based on the rule (either all in pod1 or in pod2 depending on the pool) but it seems now that almost all of my data is ending up in one of the pods and not the other
[12:31] <TMM> I'm wondering if my crush map is faulty
[12:35] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) has joined #ceph
[12:35] * shylesh (~shylesh@45.124.227.235) Quit (Ping timeout: 480 seconds)
[12:35] * linjan_ (~linjan@176.195.85.190) Quit (Ping timeout: 480 seconds)
[12:35] * shylesh (~shylesh@45.124.227.235) has joined #ceph
[12:39] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[12:40] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:40] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:40] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:40] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:40] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:41] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:41] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:41] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:42] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:42] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:42] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:42] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:42] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[12:43] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:43] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:43] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:43] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:44] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:44] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:44] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:44] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:45] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:45] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:45] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:45] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:45] * toastyde1th (~toast@pool-71-255-253-39.washdc.fios.verizon.net) Quit (Read error: Connection reset by peer)
[12:45] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[12:45] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:45] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[12:45] * toastyde1th (~toast@pool-71-255-253-39.washdc.fios.verizon.net) has joined #ceph
[12:46] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:46] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:46] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:46] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:46] * linjan_ (~linjan@176.195.85.190) has joined #ceph
[12:46] <badone> TMM: $ crushtool --test -i /tmp/crushmap.new --num-rep 3 --rule 1 --show-statistics
[12:46] <badone> http://docs.ceph.com/docs/master/man/8/crushtool/
[12:47] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:47] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:47] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:47] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:48] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:48] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:48] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:48] <badone> also http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon
[12:48] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:48] <badone> for more examples
[12:49] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:49] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:49] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:49] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:50] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:50] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:50] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:50] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:50] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[12:51] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:51] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:51] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:51] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:51] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:52] <TMM> badone, thank you. I don't have any bad mappings though, is it possible to show the output of crushtool in a tree? I'm trying to figure out specifically if there's any chance of data ending up on the wrong side of the 'pick' here
[12:52] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:52] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:52] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:53] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:53] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:53] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:53] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:53] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:54] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:54] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:54] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:54] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:54] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:55] <Hatsjoe> ^ someone needs a bouncer...
[12:55] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:55] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:55] <badone> TMM: That I don't know I'm afraid
[12:55] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:55] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:55] <T1w> Hatsjoe: I think he's bouncy enough..
[12:55] <Hatsjoe> :') true that
[12:56] * liumxnl (~liumxnl@45.32.87.166) has joined #ceph
[12:56] * liumxnl (~liumxnl@45.32.87.166) Quit (Remote host closed the connection)
[12:56] <T1w> but it would be nice if he was banned for the next 10 or 20 mins
[12:57] <TMM> badone, hmm, in my tests with show utilization I'm seeing more or less what I expect. The utilization is twice as high as expected, but that is probably because each crush rule only has access to half of the total number of osds
[12:57] <TMM> I see my crush rules only sending data the osds inside their pods though
[12:57] <TMM> so that is expected
[12:57] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[12:58] <TMM> device 0: stored : 68289 expected : 32052.5
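For reference, the crushtool test modes can also dump per-device placement and utilization for a simulated rule, which complements the live hierarchy from ceph osd tree when checking which rule a pool should use; the map path, rule number and replica count below are examples:

    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -i /tmp/crushmap --test --num-rep 3 --rule 1 --show-utilization
    crushtool -i /tmp/crushmap --test --num-rep 3 --rule 1 --show-mappings | head
    ceph osd tree    # live buckets and weights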
[12:58] * jargmonk (~jmnk@203.187.205.38) Quit (Ping timeout: 480 seconds)
[13:00] * gregmark (~Adium@68.87.42.115) has joined #ceph
[13:10] * hellertime (~Adium@72.246.3.14) has joined #ceph
[13:13] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[13:16] * wjw-freebsd3 (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[13:17] * shylesh (~shylesh@45.124.227.235) Quit (Ping timeout: 480 seconds)
[13:22] * linjan_ (~linjan@176.195.85.190) Quit (Remote host closed the connection)
[13:24] * chengpeng__ (~chengpeng@180.168.170.2) Quit (Quit: Leaving)
[13:25] * shylesh (~shylesh@45.124.227.69) has joined #ceph
[13:26] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[13:27] * flisky (~Thunderbi@210.12.157.93) Quit (Ping timeout: 480 seconds)
[13:27] * bjornar_ (~bjornar@ti0099a430-0410.bb.online.no) has joined #ceph
[13:38] <danardelean> when installing ceph, is it required that I have at least 2 OSD nodes?
[13:39] * i_m (~ivan.miro@deibp9eh1--blueice4n6.emea.ibm.com) has joined #ceph
[13:39] <danardelean> I get "[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:" and I am not sure if it is about having only one OSD node
[13:44] <T1w> that is not related to OSDs
[13:44] * georgem (~Adium@24.114.65.35) has joined #ceph
[13:45] <danardelean> that was on "ceph-deploy mon create-initial" not on install
[13:45] <danardelean> i run the monitor on the admin node
[13:54] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[13:54] <T1w> how many mons do you have?
[13:58] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[13:58] <danardelean> 1
[13:59] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[13:59] <T1w> that's your problem
[14:00] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[14:01] <danardelean> ok, why?
[14:08] * huangjun (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[14:09] * IvanJobs (~ivanjobs@103.50.11.146) Quit ()
[14:12] * joshd (~jdurgin@206.169.83.146) Quit (Ping timeout: 480 seconds)
[14:15] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[14:15] <Hatsjoe> Because more than 1 mon has to agree on a decision to make ceph fault tolerant
[14:15] <Hatsjoe> You should have a minimum of 3 mons
[14:15] <Hatsjoe> But not too many, like 3-5 should be enough. Always make it an uneven number
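For reference, a few ways to see what the monitor itself thinks during a failed create-initial; the admin-socket query works even before quorum exists, and ceph-deploy expects the mon_initial_members / mon_host lines it wrote into ceph.conf to match the mon's short hostname and a reachable address:

    ceph daemon mon.$(hostname -s) mon_status     # ask the local mon over its admin socket
    ceph quorum_status --format json-pretty       # cluster-wide view once quorum exists
    grep -E 'mon_initial_members|mon_host' /etc/ceph/ceph.conf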
[14:17] * kefu (~kefu@183.193.113.25) has joined #ceph
[14:21] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:25] * danieagle (~Daniel@201-0-105-74.dsl.telesp.net.br) has joined #ceph
[14:27] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[14:27] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[14:27] * dmick (~dmick@206.169.83.146) has joined #ceph
[14:32] * georgem (~Adium@24.114.65.35) Quit (Quit: Leaving.)
[14:40] * Jyron (~maku@tor2r.ins.tor.net.eu.org) has joined #ceph
[14:41] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[14:43] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[14:50] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[14:51] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:53] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[14:53] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[14:59] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[14:59] * jargonmonk (~jmnk@219.91.183.27) has joined #ceph
[15:00] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[15:00] * Racpatel (~Racpatel@2601:87:0:24af::4c8f) Quit (Quit: Leaving)
[15:01] <danardelean> ok, thanks, will look into it
[15:01] * Racpatel (~Racpatel@2601:87:0:24af::4c8f) has joined #ceph
[15:06] * huangjun (~kvirc@117.151.42.175) has joined #ceph
[15:10] * Jyron (~maku@7EXAAAA3M.tor-irc.dnsbl.oftc.net) Quit ()
[15:12] * huangjun|2 (~kvirc@117.151.54.55) has joined #ceph
[15:17] * huangjun (~kvirc@117.151.42.175) Quit (Ping timeout: 480 seconds)
[15:18] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:22] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:33] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[15:34] * earnThis (~oftc-webi@209.37.168.99) has joined #ceph
[15:35] <earnThis> can I get away with a 1Gb front end network and have 10Gb for the backend/cluster network?
[15:37] <koszik> if you don't mind being limited to 1gbps of useful traffic per node then i don't see any issues with this
[15:39] <earnThis> koszik: well I guess thats part of my question - does each node _need_ more than 1Gbps on front end. the front end just receives client requests, right
[15:39] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[15:39] <koszik> yes, and i think the whole point of a ceph cluster is to server front-end traffic
[15:39] <koszik> serve*
[15:40] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[15:40] * i_m (~ivan.miro@deibp9eh1--blueice4n6.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[15:44] <earnThis> koszik: sorry, are you saying "yes" to the frontend needing > 1Gb?
[15:45] <koszik> i'm saying that the frontend is what matters regarding the ability to serve the clients, and the backend matters regarding the rebalancing and replication of data
[15:45] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[15:45] <koszik> so it's common to have larger backend network than frontend
[15:45] <koszik> but if your frontend is smaller than your needs, then it has to be upgraded
[15:45] <koszik> i'd personally go with at least 10/10 if it's possible
[15:46] * CydeWeys (~measter@7EXAAAA5Y.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:48] <earnThis> koszik: right, I'm with you there. I guess I'm just trying to figure out what my needs might be ahead of time, but that's highly subjective I suppose
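For reference, the front/back split being discussed maps onto two ceph.conf options; a sketch with example subnets, added identically on every node (clients and mons use the public network, replication and recovery use the cluster network):

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [global]
    public network  = 192.168.1.0/24    # 1GbE front end
    cluster network = 10.0.0.0/24       # 10GbE back end
    EOF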
[15:53] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Ex-Chat)
[15:53] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:55] * DeMiNe0 (~DeMiNe0@104.131.119.74) Quit (Ping timeout: 480 seconds)
[15:56] * DeMiNe0 (~DeMiNe0@104.131.119.74) has joined #ceph
[15:58] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[16:01] * vikhyat (~vumrao@121.244.87.117) Quit (Quit: Leaving)
[16:02] * yanzheng (~zhyan@125.70.20.240) Quit (Quit: This computer has gone to sleep)
[16:03] * kees_ (~kees@2001:610:600:8774:fd7e:c3ed:7b8e:700b) has joined #ceph
[16:03] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:08] * kefu (~kefu@183.193.113.25) Quit (Ping timeout: 480 seconds)
[16:09] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) has joined #ceph
[16:09] * swami1 (~swami@49.38.0.153) has joined #ceph
[16:09] <swami1> hi
[16:10] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:10] * kefu (~kefu@114.92.118.31) has joined #ceph
[16:12] <swami1> I am using a ceph cluster with 231 osds... in this cluster I have removed a few OSDs... after that, cluster recovery started... after some time I got warnings about a few OSDs being nearfull and a few PGs stuck unclean
[16:12] <swami1> which warning do I need to fix first: nearfull? or stuck unclean?
[16:12] <TMM> badone, my crush rules were fine but I had applied the wrong rules to my pools, about 2/3rd of the data was ending up in one of the pods due to me being a moron
[16:13] <koszik> if you get your osds actually full recovering from that is very troublesome, so i'd start with that
[16:13] <TMM> recovery 155005178/223306262 objects misplaced (69.414%)
[16:13] <TMM> pom pom pom
[16:13] <TMM> :P
[16:13] * danardelean (~dan@82.78.203.62) has left #ceph
[16:14] <koszik> if you have at least 3-way replication then unclean is not necessarily that big of a problem, the data is probably still redundant (but you may want to double check to be sure)
[16:15] <swami1> I am using replica 2 here
[16:15] <TMM> now I have to move about 70tb of data, lovely
[16:15] <koszik> ok then unclean means your data is not protected and near full means if you don't rectify that you're in for a lot of extra work
[16:16] * CydeWeys (~measter@7EXAAAA5Y.tor-irc.dnsbl.oftc.net) Quit ()
[16:16] * huangjun|2 (~kvirc@117.151.54.55) Quit (Ping timeout: 480 seconds)
[16:16] <swami1> koszik: cluster is only 60% full...
[16:17] <koszik> that doesn't mean much, a single osd still can be at 99%
[16:17] <koszik> since data is not distributed evenly
[16:17] <Hatsjoe> Does anyone know if Calamari is still under development? Everything is veryyyyy outdated
[16:18] <koszik> ceph health detail should show you which osd is at what percent
[16:18] * linjan (~linjan@176.195.85.190) has joined #ceph
[16:19] <zdzichu> ceph osd df tree
[16:20] <infernix> TMM: o_O :)
[16:20] <swami1> koszik: ok...nearfull is 85% only...
[16:20] <koszik> it's 85 for a reason, you really don't want to get to 100%
[16:21] <TMM> infernix, whoops! :)
[16:21] <swami1> koszik: please advise, shall I reweight the nearfull osds first, or fix the unclean pgs?
[16:21] <koszik> i personally never had to deal with a full osd, but heard horror stories from someone who has
[16:21] * linjan (~linjan@176.195.85.190) Quit (Remote host closed the connection)
[16:21] <swami1> koszik: total recovery stuck... (no progress on recovery)...
[16:21] <koszik> you have to weigh the risks, i don't know what other processes could fill your disks and what problems you'd have if you lost the other replica from the unclean pgs
[16:22] <swami1> koszik: with 125 pg stuck unclean
[16:22] <koszik> what version do you run?
[16:22] <infernix> TMM: crushtool has some options to validate effect of crushmap/rules before applying, may be helpful
[16:22] <swami1> and 11 osd nearfull (85%)
[16:22] * yanzheng (~zhyan@125.70.20.240) has joined #ceph
[16:22] <TMM> infernix, the problem really was just that my pools were using the wrong crush rulesets
[16:22] <koszik> how many osd hosts do you have?
[16:22] <TMM> infernix, I need to figure out who did what, but it's being fixed now
[16:22] <swami1> koszik: Firelfy version of ceph
[16:24] <swami1> koszik: 21 osds per host
[16:24] <koszik> yes but how many osd hosts?
[16:25] <swami1> koszik: 11 hosts
[16:25] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) Quit (Quit: Bye guys! (??????????????????? ?????????)
[16:25] <koszik> can you pastebin your full ceph osd health detail and a ceph pg $pg query from an affected pg?
[16:25] * joshd1 (~jdurgin@2602:30a:c089:2b0:f068:e9da:af4e:1ca5) has joined #ceph
[16:26] <swami1> sure
[16:26] * lmb (~Lars@ip5b41f0a4.dynamic.kabel-deutschland.de) Quit (Read error: Connection reset by peer)
[16:26] <wes_dillingham> is there a general consensus on whether it is a smart idea to limit scrubbing to fall within non-peak hours or should i just let it run constantly?
[16:26] * wjw-freebsd3 (~wjw@vpn.ecoracks.nl) has joined #ceph
[16:28] * lmb (~Lars@2a02:8109:8100:1d2c:517e:12a6:4dff:f38) has joined #ceph
[16:28] * yanzheng (~zhyan@125.70.20.240) Quit (Quit: This computer has gone to sleep)
[16:29] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[16:29] <koszik> wes_dillingham, scrubbing is supposed to only run when the osd is idle enough
[16:31] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[16:31] * kees_ (~kees@2001:610:600:8774:fd7e:c3ed:7b8e:700b) Quit (Remote host closed the connection)
[16:31] * derjohn_mob (~aj@2001:6f8:1337:0:980c:12d0:3c4:faab) Quit (Ping timeout: 480 seconds)
[16:34] <wes_dillingham> Gotcha, so in effect just let the osds determine when "non-peak" is on their own based on utilization, that seems better
[16:34] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[16:34] <wes_dillingham> koszik: thanks
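For reference, if you do want to pin scrubbing to off-peak hours rather than relying only on the load check, these are the relevant knobs on recent releases; the values are examples, and injectargs changes them at runtime while ceph.conf makes them persistent:

    # only start scrubs while host load is below the threshold (the default behaviour)
    ceph tell osd.* injectargs '--osd_scrub_load_threshold 0.5'
    # or confine scrub starts to a window, here 22:00-06:00
    ceph tell osd.* injectargs '--osd_scrub_begin_hour 22 --osd_scrub_end_hour 6'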
[16:35] <swami1> koszik: http://pastebin.com/hV8qHzFh
[16:36] <swami1> koszik: pasted the ceph health details
[16:36] * lmb (~Lars@2a02:8109:8100:1d2c:517e:12a6:4dff:f38) Quit (Remote host closed the connection)
[16:37] <koszik> backfill_too_full indicates that a backfill operation was requested, but couldn't be completed due to insufficient storage capacity.
[16:37] <koszik> from the docs
[16:37] <koszik> so it seems that your two problems are in fact just one
[16:38] <swami1> koszik: ok...but cluster filled with 59% only...
[16:38] <swami1> koszik: ok..
[16:38] <koszik> yes but you have to consider this on the osd level
[16:38] <swami1> koszik: ok?
[16:38] <koszik> you can try reweighing the near-full osds
[16:39] <swami1> koszik: sure, will reduce the weight of the nearfull osds a bit (using ceph osd crush reweight osd.id new_weight)
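For reference, the two usual ways to push data off nearfull OSDs, with example ids and weights; small steps keep the resulting data movement manageable:

    ceph osd df tree                      # per-OSD utilization (as zdzichu mentioned; Hammer or later)
    ceph osd crush reweight osd.42 1.6    # permanently lower one OSD's crush weight
    ceph osd reweight-by-utilization 110  # or nudge the override weights of the fullest OSDs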
[16:40] <wes_dillingham> does anyone do any sort of revision control of their crush maps (ie monitor crush map changes and commit them to git) would this be a redundant / unnecessary thing to do?
[16:42] * rakeshgm (~rakesh@121.244.87.117) Quit (Quit: Leaving)
[16:43] <infernix> seems unlikely to me that someone will manually modify a crushmap and have no backup of the original; tracking manual updates in a ticketing system should be enough though, since adding/removing hosts/osds also changes it so a constant git commit workflow may not be very useful
[16:45] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) Quit (Quit: Leaving.)
[16:45] * vbellur (~vijay@71.234.224.255) has joined #ceph
[16:45] <[arx]> i wish the crush rules wasn't part of the crushmap tho, i would want to keep that under revision control.
[16:46] * xarses (~xarses@64.124.158.100) has joined #ceph
[16:46] <wes_dillingham> right, infernix, I would like to (possibly) setup a system such that any changes (even automatic ones) like adding/removing osds would trigger a commit when that automatic crush change happened. Ceph already keeps all of its previous crushmaps (I think) but storing those outside of ceph (in git) might prove useful…
[16:47] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[16:48] * alexxy (~alexxy@biod.pnpi.spb.ru) Quit (Remote host closed the connection)
[16:59] <flaf> Hi @all, can you tell me where I can find in ceph documentation: a) explanations about client_oc_* options in ceph.conf (parameters for ceph.fuse) and b) explanations to disallow a cephfs mount in a client except in a specific directory? Thx.
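For what it's worth, the client_oc_* settings (client_oc_size, client_oc_max_dirty and friends) are the ceph-fuse object cache knobs from the client configuration reference, and restricting a client to one directory is done with path-based MDS caps plus a rooted mount; a sketch with example names, assuming a Jewel-or-later MDS:

    # a key that may only use /shared inside the filesystem
    ceph auth get-or-create client.webfarm mon 'allow r' \
        mds 'allow rw path=/shared' osd 'allow rw pool=cephfs_data' \
        -o /etc/ceph/ceph.client.webfarm.keyring
    # mount with that identity, rooted at the allowed directory
    ceph-fuse -n client.webfarm -k /etc/ceph/ceph.client.webfarm.keyring -r /shared /mnt/shared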
[17:03] <infernix> wes_dillingham: a cronjob that grabs and decompiles crushmap and does a git commit -a -m "`date` crushmap update" && git push shouldn't be too hard to make
[17:03] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:04] <wes_dillingham> I agree, I would instead want to write it such that it caught every change not just those at any given interval
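For reference, a sketch of the cronjob infernix describes; committing only when the decompiled map actually changed gets close to the "every change" behaviour without hooking into ceph itself. The repo path is an example:

    #!/bin/sh
    # dump and decompile the current crushmap, commit it only if it changed
    cd /srv/crushmap-git || exit 1
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    git add crushmap.txt
    if ! git diff --cached --quiet; then
        git commit -m "$(date -u '+%Y-%m-%d %H:%M') crushmap update"
        git push
    fi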
[17:06] * bvi (~Bastiaan@185.56.32.1) Quit (Quit: Leaving)
[17:06] * Concubidated (~cube@ai126248065090.9.tss.access-internet.ne.jp) has joined #ceph
[17:07] * cathode (~cathode@50.232.215.114) has joined #ceph
[17:07] * jargonmonk (~jmnk@219.91.183.27) Quit (Quit: Leaving)
[17:08] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[17:10] * Atomizer (~ulterior@tor2r.ins.tor.net.eu.org) has joined #ceph
[17:10] * shaunm (~shaunm@cpe-192-180-17-174.kya.res.rr.com) has joined #ceph
[17:10] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[17:18] * kevinc (~kevinc__@client65-34.sdsc.edu) has joined #ceph
[17:24] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[17:26] * branto (~branto@178-253-144-120.3pp.slovanet.sk) Quit (Quit: Leaving.)
[17:26] * axion5joey (~oftc-webi@static-108-47-170-18.lsanca.fios.frontiernet.net) has joined #ceph
[17:26] * kefu is now known as kefu|afk
[17:26] * wjw-freebsd4 (~wjw@vpn.ecoracks.nl) has joined #ceph
[17:29] * Concubidated1 (~cube@122.103.163.63.ap.gmobb-fix.jp) has joined #ceph
[17:30] * wjw-freebsd3 (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[17:30] <axion5joey> Hi Everyone, one of my hard drives went bad and I'm trying to remove it from the cluster. The drive currently shows as out but up. I've run ceph osd down 11 multiple times. The cli shows that it is marked as down, but ceph -s still shows it as up and if I run ceph osd rm 11 it also shows that it is up. Anyone know how to fix this?
[17:31] <[arx]> did you mark it as out first?
[17:32] <axion5joey> yes I did
[17:32] * danieagle (~Daniel@201-0-105-74.dsl.telesp.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[17:33] <axion5joey> I did out, then down, then crush remove, then auth del
[17:33] <axion5joey> Those all worked
[17:33] <axion5joey> it wasn't until I tried rm that I realized that it was still showing as up
[17:33] * Concubidated (~cube@ai126248065090.9.tss.access-internet.ne.jp) Quit (Ping timeout: 480 seconds)
[17:34] * reed (~reed@50-1-125-26.dsl.dynamic.fusionbroadband.com) has joined #ceph
[17:34] * garphy is now known as garphy`aw
[17:35] <m0zes> did the daemon actually stop?
[17:35] <axion5joey> m0zes I'm not sure how to check that on Centos 7
[17:36] * EthanL (~lamberet@cce02cs4035-fa12-z.ams.hpecore.net) Quit (Ping timeout: 480 seconds)
[17:36] <axion5joey> I just looked using netstat -an and it shows the process is still listening
[17:37] <swami1> koszik: nearfull count came down to 2 (from 12)...
[17:37] * kefu|afk (~kefu@114.92.118.31) Quit (Max SendQ exceeded)
[17:37] * linjan (~linjan@176.195.85.190) has joined #ceph
[17:38] <axion5joey> I've got the process id
[17:38] <axion5joey> is it safe to just kill it?
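For reference, an OSD that is marked down but still running will report itself back up, so the daemon has to be stopped before the removal sticks; a sketch for CentOS 7, where the unit or service name depends on how the packages were installed:

    systemctl stop ceph-osd@11          # systemd-managed OSDs
    /etc/init.d/ceph stop osd.11        # older sysvinit-style installs
    ceph osd rm 11                      # then the final removal should succeed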
[17:38] * kefu (~kefu@114.92.118.31) has joined #ceph
[17:38] * hybrid512 (~walid@195.200.189.206) Quit (Quit: Leaving.)
[17:40] * kefu (~kefu@114.92.118.31) Quit (Max SendQ exceeded)
[17:40] <koszik> swami1: did the other stats get any better?
[17:40] * Atomizer (~ulterior@9YSAAAEV6.tor-irc.dnsbl.oftc.net) Quit ()
[17:40] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:80fe:bd46:5d83:1f20) Quit (Ping timeout: 480 seconds)
[17:40] * kefu (~kefu@114.92.118.31) has joined #ceph
[17:46] * ntpttr_ (~ntpttr@192.55.54.44) has joined #ceph
[17:47] * EthanL (~lamberet@cce02cs4035-fa12-z.ams.hpecore.net) has joined #ceph
[17:48] * kefu is now known as kefu|afk
[17:50] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[17:50] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Remote host closed the connection)
[17:50] <swami1> koszik: all nearfull fixed..
[17:50] <swami1> koszik: but pg stuck unclean increased...
[17:51] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[17:53] * Jeffrey4l_ (~Jeffrey@119.251.238.183) Quit (Ping timeout: 480 seconds)
[17:53] * praveen_ (~praveen@121.244.155.10) Quit (Remote host closed the connection)
[17:53] <swami1> koszik: ceph is filled the OSD evenly (ie some OSD filling >85% and some OSD are only filled with around 65%)
[17:53] <swami1> koszik: s/is/is not
[17:54] * Skaag (~lunix@172.56.16.122) has joined #ceph
[18:03] * kefu|afk (~kefu@114.92.118.31) Quit (Quit: Textual IRC Client: www.textualapp.com)
[18:06] * Skaag (~lunix@172.56.16.122) Quit (Quit: Leaving.)
[18:06] * ntpttr_ (~ntpttr@192.55.54.44) Quit (Remote host closed the connection)
[18:06] * ntpttr__ (~ntpttr@134.134.139.83) has joined #ceph
[18:07] * linjan (~linjan@176.195.85.190) Quit (Ping timeout: 480 seconds)
[18:08] <kevinc> we have a 3 server (30 osd) ceph cluster and we want to add 3 more servers (30 more osds), are there any instructions on how we can add the additional osds without having a big impact on users?
[18:12] <TMM> kevinc, make sure your max backfills aren't too high, and your recovery io isn't set at too high a priority
[18:12] <TMM> if you have those both set low your users won't notice a thing
[18:13] * thansen (~thansen@17.253.sfcn.org) has joined #ceph
[18:16] * derjohn_mob (~aj@x590c63d4.dyn.telefonica.de) has joined #ceph
[18:16] <kevinc> ceph daemon osd.0 config show | grep backfill
[18:16] <kevinc> "osd_max_backfills": "1",
[18:17] <TMM> also check out osd recovery max active , and osd recovery op priority
[18:18] * Concubidated (~cube@ai126248065090.9.tss.access-internet.ne.jp) has joined #ceph
[18:18] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[18:18] <kevinc> ceph daemon osd.0 config show | egrep "osd_recovery_max_active|osd_recovery_op_priority"
[18:18] <kevinc> "osd_recovery_max_active": "3",
[18:18] <kevinc> "osd_recovery_op_priority": "3",
[18:19] <TMM> should be fine then
[18:19] <TMM> just add your osds
[18:19] <TMM> you can add them with a weight of 0 and slowly increase if your cluster is nearing cpu capacity already
[18:21] <kevinc> thank you for your time, should we add all the osds in one server then move to the next server or should we add one osd per server then add another osd per server until all are added?
[18:21] <TMM> depends on your goals I suppose, depending on how loaded the existing servers are I'd either add one server at a time, or just add all of them at once
[18:22] <TMM> I don't see much point in moving from server to server on a per-osd basis
[18:22] <kevinc> ok, with openstack swift i am accustomed to adding new nodes by adding the hard drives with a weight of 0 then slowly increasing the weight, I wasn't sure if the same was recommended for ceph or not
[18:22] <TMM> 'it depends'
[18:22] * Concubidated1 (~cube@122.103.163.63.ap.gmobb-fix.jp) Quit (Ping timeout: 480 seconds)
[18:23] <TMM> if your max backfills are pretty low and you have cpu time to spare it doesn't really matter in my experience
[18:23] <TMM> if you are approaching capacity somewhere it is better to add with 0 and then increase
[18:23] <TMM> adding them all at once has the benefit that you don't get any more data movement than is necessary
[18:24] <TMM> adding them slowly is nice because there will be fewer misplaced objects at any given moment, and less remapped pgs
[18:24] <TMM> saving cpu and network bandwidth
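Where those throttles do need tightening before adding OSDs, they can be changed at runtime without restarting daemons; a hedged sketch (the values shown are illustrative, not recommendations):
    # push lower backfill/recovery settings into every running OSD
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'
    # confirm on one daemon via its admin socket
    ceph daemon osd.0 config show | egrep 'osd_max_backfills|osd_recovery_max_active|osd_recovery_op_priority'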
[18:26] * wjw-freebsd5 (~wjw@vpn.ecoracks.nl) has joined #ceph
[18:27] <kevinc> ok, thanks. are there instructions on how to add the osds with a weight of 0 and update the weight?
[18:27] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[18:28] <kevinc> I have used the ceph-deploy command to add osds, but i don't remember seeing an option to set the initial weight
[18:28] <TMM> ah, I don't know, sorry, I don't use ceph-deploy
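For reference, one common way to do this independently of ceph-deploy is to have new OSDs join CRUSH at weight 0 and then raise the weight in steps; a minimal sketch (osd.30, host node4 and the final weight are hypothetical):
    # in ceph.conf on the new hosts, before creating the OSDs:
    #   [osd]
    #   osd crush initial weight = 0
    # after each OSD is up, raise its CRUSH weight gradually, letting the cluster settle between steps
    ceph osd crush reweight osd.30 0.5
    ceph osd crush reweight osd.30 1.0
    # repeat until the final weight (roughly the drive size in TiB, e.g. 3.64 for a 4 TB disk)
    ceph osd crush reweight osd.30 3.64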
[18:28] * yanzheng (~zhyan@125.70.20.240) has joined #ceph
[18:31] <TMM> I'm heading out, good luck kevinc!
[18:31] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[18:31] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:32] * wjw-freebsd4 (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[18:36] * DanFoster (~Daniel@2a00:1ee0:3:1337:d1f6:5f84:e698:b9f9) Quit (Quit: Leaving)
[18:38] * rinek (~o@62.109.134.112) Quit (Quit: ~)
[18:39] * yanzheng (~zhyan@125.70.20.240) Quit (Quit: This computer has gone to sleep)
[18:40] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[18:42] * swami1 (~swami@49.38.0.153) Quit (Quit: Leaving.)
[18:44] * ira_ (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[18:44] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:48] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[18:49] * mattch (~mattch@w5430.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:52] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:53] * bryanapperson_ (~bryanappe@50-203-47-138-static.hfc.comcastbusiness.net) has joined #ceph
[18:55] * vbellur1 (~vijay@71.234.224.255) has joined #ceph
[18:55] * vbellur (~vijay@71.234.224.255) Quit (Read error: Connection reset by peer)
[19:00] <TheSov2> http://wiki.minnowboard.org/MinnowBoard_MAX_HW_Setup Single device ceph node?
[19:01] * dnunez (~dnunez@130.64.25.56) has joined #ceph
[19:02] <TheSov2> imagine that with a 5 port multiplier
[19:03] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[19:03] * praveen (~praveen@122.171.64.106) has joined #ceph
[19:04] <TheSov2> anyone?
[19:04] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:05] * squizzi_ (~squizzi@2001:420:2240:1268:a0b7:f4b7:490:2105) has joined #ceph
[19:09] * nils_____ is now known as nils_
[19:09] <nils_> I don't know, doesn't look very practical
[19:10] <nils_> oh it's an Atom CPU?
[19:11] * Geph (~Geoffrey@41.77.153.99) Quit (Ping timeout: 480 seconds)
[19:11] * rinek (~o@62.109.134.112) has joined #ceph
[19:13] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[19:13] * dmick (~dmick@206.169.83.146) has left #ceph
[19:14] * overload (~oc-lram@79.108.113.172.dyn.user.ono.com) has joined #ceph
[19:14] <overload> hi
[19:17] <jiffe> if I have an osd per disk and I need to replace a disk, do I need to completely remove and then recreate that osd?
[19:17] <overload> i've an issue with my cluster... I need to do a firmware upgrade on 1 osd server, so i set the noout flag for its osds and then shut the server down. While the node is down i get slow request warnings and the VMs can not write...
[19:17] <jiffe> remove and recreate the osd in ceph that is
[19:19] <overload> and this message: cluster [WRN] slow request 240.407203 seconds old, received at 2016-07-08 16:18:46.410218: osd_op(client.854762.0:8678415 3.d1e26272 (undecoded) ondisk+write+known_if_redirected e9030) currently waiting for active
[19:19] <overload> several messages like that
[19:21] <bryanapperson_> jiffe - Yes. Set the OSD as down and out, remove the OSD once data migration is complete. Then add the new disk as a new OSD.
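A hedged sketch of that sequence for one failed OSD (osd.12, /dev/sdX and the systemd unit name are placeholders; adjust for your init system and tooling):
    ceph osd out osd.12                  # let data migrate off the OSD
    # wait until ceph -s shows all PGs active+clean again
    systemctl stop ceph-osd@12           # stop the daemon
    ceph osd crush remove osd.12         # drop it from the CRUSH map
    ceph auth del osd.12                 # remove its auth key
    ceph osd rm osd.12                   # remove it from the OSD map
    # swap the physical disk, then create a new OSD on it, e.g.
    ceph-disk prepare /dev/sdX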
[19:24] * georgem (~Adium@206.108.127.16) has joined #ceph
[19:25] * Skaag (~lunix@65.200.54.234) has joined #ceph
[19:26] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[19:27] * wjw-freebsd5 (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[19:28] * joshd1 (~jdurgin@2602:30a:c089:2b0:f068:e9da:af4e:1ca5) Quit (Quit: Leaving.)
[19:29] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[19:32] * Psi-Jack (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[19:33] <Psi-Jack> Good afternoon. I'm trying to understand why I have recovery 80523/450974 objects degraded (17.846%)
[19:34] <Psi-Jack> 205 pgs degraded, 77 pgs stuck degraded, 256 pgs stuck unclean, 77 pgs stuck undersized, and 205 pgs undersized, on top of that.
[19:35] <Psi-Jack> I understand why I have too many PGs per OSD (486 > max 300), but I'm soon to be fixing that: I've been migrating an existing two-node Proxmox VE cluster (previously LVM+GlusterFS) to a three-node setup running everything on Ceph, so not all my OSDs are in yet while I move my data into Ceph and retire the LVM.
[19:37] <Psi-Jack> Heh, my last thing I'm doing to be able to afford the full conversion of the last system is transferring 2TB over to CephFS, and then I can kill the LVM there and plug in my last 4 OSDs.
[19:40] * dnunez is now known as dnunez-remote
[19:41] <Psi-Jack> I'm also wondering if it would be plausible to set up CRUSH rules that let me isolate certain pools' data objects to a single physical host's OSDs with a size of 1.
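For what it's worth, a CRUSH rule along these lines can pin a pool to one host's OSDs; a sketch only (host node3, rule id 3 and pool mypool are hypothetical; add the rule to the decompiled CRUSH map with crushtool, recompile and set it back):
    rule host_node3 {
            ruleset 3
            type replicated
            min_size 1
            max_size 1
            step take node3
            step chooseleaf firstn 0 type osd
            step emit
    }
    # then point the pool at the rule and drop replication to 1 (no redundancy!)
    ceph osd pool set mypool crush_ruleset 3
    ceph osd pool set mypool size 1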
[19:42] * karnan (~karnan@106.51.138.205) Quit (Quit: Leaving)
[19:42] * karnan (~karnan@106.51.138.205) has joined #ceph
[19:43] * jdillaman_ (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[19:46] * hellertime (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[19:49] * swami1 (~swami@27.7.169.81) has joined #ceph
[19:50] * scg (~zscg@valis.gnu.org) has joined #ceph
[19:50] <TheSov2> damn that minnowboard does not support port multiplcation
[19:51] <TheSov2> why is it so hard to find a x64 SBC with a 1 gig native nic and at least 3 sata ports!
[19:58] * shylesh (~shylesh@45.124.227.69) Quit (Remote host closed the connection)
[19:59] * scuttle|afk is now known as scuttlemonkey
[20:00] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[20:00] * noahw (~noahw@96.82.80.65) has joined #ceph
[20:03] <scuttlemonkey> TheSov2: the minnow is what I'm building my traveling ceph demo cluster out of
[20:04] <scuttlemonkey> got it set up with centos 7 + ceph-ansible by hand...when I get back from FISL on the 18th I'll be working to get it working with Foreman so we can have push-button-ish deployment if people want to play with them
[20:04] * rendar (~I@87.18.177.238) Quit (Ping timeout: 480 seconds)
[20:04] <scuttlemonkey> definitely not going to be production infrastructure...but it should be a fun toy
[20:06] * acctor (~acctor@208.46.223.218) has joined #ceph
[20:08] * ntpttr_ (~ntpttr@192.55.54.42) has joined #ceph
[20:08] * ntpttr__ (~ntpttr@134.134.139.83) Quit (Remote host closed the connection)
[20:08] <TheSov2> scuttle its very disappointing that there is no SBC out there that can do what i need for ceph
[20:09] <TheSov2> scuttlemonkey, why a traveling ceph cluster?
[20:10] <scuttlemonkey> TheSov2: yeah, for sure...I fumbled around with a lot of different options (pi3, odroid, etc) before I went with the minnow
[20:10] <zdzichu> TheSov2: disks with integrated OSD are cooler
[20:10] <Psi-Jack> Heh, Well, well, TheSov2. You're here too, I see. :)
[20:10] <TheSov2> well the minnow ix x64
[20:10] <scuttlemonkey> TheSov2: I end up at a lot of events like conferences, ceph days, etc where having a working cluster + blinkenlights to play with would be fun
[20:10] <TheSov2> so its easier to use the prepacked app
[20:11] <TheSov2> scuttle, im trying to build a sellable product
[20:11] <TheSov2> a modular ceph cluster
[20:11] <TheSov2> add as needed
[20:11] <scuttlemonkey> TheSov2: yeah, that's a much different task :)
[20:11] <TheSov2> i would lock a docker ceph
[20:12] <TheSov2> just point it at a disk and your done
[20:12] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (Quit: Leaving)
[20:12] <TheSov2> you're done
[20:12] <TheSov2> *
[20:13] <scuttlemonkey> TheSov2: yeah, but as zdzichu mentioned...the integrated disk+osd thing is looking pretty hot
[20:13] <TheSov2> i refuse to buy a seagate
[20:13] <TheSov2> LOL
[20:13] <koszik> kevinc: is it an option to upgrade to jewel? they fixed the io priorities, i just recently rebalanced a large part of my cluster and it was painless
[20:13] <scuttlemonkey> the first person to build a sellable/scalable appliance out of those WDLabs drives is gonna get me super excited
[20:13] <scuttlemonkey> ceph.com/community/500-osd-ceph-cluster/
[20:13] <zdzichu> WD has prototypes, Seagate has ready products, and SanDisk has a cool SSD with an OSD on it
[20:14] <scuttlemonkey> zdzichu: yeah, lots of hotness out there in that space
[20:14] <kevinc> koszik: we are already running jewel
[20:14] <scuttlemonkey> I get frustrated w/ the seagate drives since they came from a history of being so locked down
[20:14] <kevinc> 10.2.2
[20:14] <TheSov2> how does journaling work on those?
[20:15] <scuttlemonkey> TheSov2: I think each has a co-located journal right now...but I'd have to re-read the writeup to remember
[20:15] <kevinc> koszik: so should we add the osds with a weight of 0 (if that is even possible) or just add them with the full weight? We are using ceph for Openstack
[20:17] <TheSov2> damn if they build those disks, with integrated osd servers, so u can attach a m.2 disk as a journal
[20:17] <TheSov2> that would own
[20:17] <Psi-Jack> Hmmm, curious, is it possible to move an OSD's journal to a journal device after creation of the OSD?
[20:17] <TheSov2> it is
[20:17] <Psi-Jack> Oh, nice. :)
[20:17] <TheSov2> the journal is a symlink on the osd
[20:18] <koszik> kevinc: if your system is not on the edge of its io capacity then i think it will be fine if you add them straight; the io subsystem is not as bad as it was in previous versions
[20:18] <Psi-Jack> Is it plausible to put the journal on an LVM LV?
[20:18] * hellertime (~Adium@72.246.3.14) has joined #ceph
[20:18] <TheSov2> you can move that journal anywhere as long as it's raw
[20:18] <TheSov2> but it has to be consistent
[20:18] <Psi-Jack> Cool. :)
[20:18] <TheSov2> dont fuck it up
[20:18] <TheSov2> on root of the osd is a symlink
[20:18] <TheSov2> it points to a device
[20:19] <TheSov2> like /dev/sda5 or something
[20:19] <Psi-Jack> Gotcha.
[20:19] <kevinc> koszik: ok, that is good to know, thank you!
[20:19] <kevinc> we are at 33% capacity right now
[20:20] <Psi-Jack> Cool, my SSD drives, what I intend to move the journals to, provide LV's for PVE instances and I could slice out some for OSD journal devices.
[20:20] <TheSov2> -rw-r--r-- 1 ceph ceph 37 Jun 15 14:38 fsid
[20:20] <TheSov2> lrwxrwxrwx 1 ceph ceph 58 Jun 15 14:38 journal -> /dev/disk/by-partuuid/f4aa2450-6ab0-4e40-b4c7-11bf381a9034
[20:20] <TheSov2> -rw-r--r-- 1 ceph ceph 37 Jun 15 14:38 journal_uuid
[20:20] <TheSov2> thats in my /var/lib/ceph/osd/ceph-0
[20:20] <TheSov2> in order to move it, you would have to dd the journal onto another disk and then change the symlink
[20:20] <TheSov2> obviously while the osd is offline
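An alternative to dd'ing the journal is to flush it and create a fresh one on the new device; a minimal sketch, assuming osd.0 and a new journal partition referenced by partuuid (placeholder shown; on Jewel the partition also needs to be owned by/readable for the ceph user):
    systemctl stop ceph-osd@0
    ceph-osd -i 0 --flush-journal        # write any pending journal entries back to the store
    ln -sf /dev/disk/by-partuuid/<new-partuuid> /var/lib/ceph/osd/ceph-0/journal
    # the journal_uuid file in the same directory should be updated to match as well
    ceph-osd -i 0 --mkjournal            # create a fresh journal on the new device
    systemctl start ceph-osd@0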
[20:21] <earnThis> opinion question - which is more important in a ceph network - fault tolerance or performance? I have 2 options - the first is to use (2) 1GbE switches for the frontend network and (2) 10Gb switches for the backend, with each node redundantly connected to both switches in both networks. the other option is to use (1) 10Gb switch for the frontend and (1) 10Gb switch for the backend - better performance that way, but no redundant connections
[20:21] <Psi-Jack> Ahh, so when pveceph initialized my OSD's, it created the journal on the same HDD in a partition, I see. :)
[20:21] <TheSov2> the backend network needs to be faster
[20:21] <TheSov2> earnThis,
[20:21] <TheSov2> and you really need to have a backend net for actual performance
[20:22] <T1> earnThis: I've got 1gbit front/client net and 10gbit cluster net
[20:22] <earnThis> T1: how many nodes?
[20:22] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[20:22] <TheSov2> remember the client net will only use as much speed as the clients use, while the backend network has to write 2-3 times that amount of data
[20:22] <T1> the cluster is done with cheap switches without stacking, but just with active/passive (bonding with mode 1) and then I make sure that every node pick the same device
[20:23] <T1> earnThis: < 10
[20:23] <T1> dell has some cheap and well performing 12 port 10gbit SFP+ switches
[20:23] <TheSov2> use cienna service delivery switches, they are so cheap
[20:23] <TheSov2> 300 bucks and better than cisco
[20:24] <earnThis> T1: TheSov2: so do my 2 options not really matter as long as the backend is faster, i.e. 10Gb?
[20:24] <TheSov2> earnThis, realize this, your client machine, does it have 10 gig nics?
[20:24] <TheSov2> or are they 1 gig nics
[20:24] <TheSov2> client machines*
[20:25] <T1> earnThis: yes. the cluster net needs to be faster since a single OSD probably could saturate a 1gbit link during recovery
[20:25] <earnThis> TheSov2: the frontend has to hook into a 1Gbe network
[20:25] <T1> and if you have 2+ OSDs per node then you are in for a bad time..
[20:25] <Psi-Jack> heh yeah...
[20:26] <Psi-Jack> I used to saturate a 2x1Gbit LACP network with recovery.
[20:26] <Psi-Jack> It slowed things down, but for my home cluster that was okay.
[20:26] <earnThis> TheSov2: T1: so i guess im still not sure if redundancy matters
[20:28] <T1> redundancy matters
[20:28] <TheSov2> earnThis, so understand that the way ceph works, each osd server can send information out to the same client. imagine 100 osd hosts, sending data to 1 client machine.
[20:28] <Psi-Jack> Redundancy always matters.
[20:28] <T1> but speed matters more
[20:28] * wjw-freebsd5 (~wjw@smtp.digiware.nl) has joined #ceph
[20:28] <TheSov2> so your 100 osd servers are trying to send data at 1 gig x100 to a system with 1, 1 gig nic
[20:28] <TheSov2> makes no sense right?
[20:28] <Psi-Jack> Speed, debatable somewhat. More speed is always better, obviously, in production use.
[20:28] <TheSov2> so save the speed for your backend
[20:28] <earnThis> alright so gun to your head - which of my options do you choose
[20:28] <TheSov2> 10 gig on backend, 2x1gig on back end
[20:28] <T1> 1gbit for front 10g for cluster
[20:28] <TheSov2> err front
[20:28] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:28] <T1> and you could easily go for active/passive cluster net to avoid expensive 10g switches
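For the front/back split being discussed here, the two networks are just ceph.conf settings; a hedged sketch (the subnets are placeholders):
    [global]
    public network  = 192.168.10.0/24    # client-facing "front" network (the 1GbE side)
    cluster network = 10.10.10.0/24      # OSD replication/recovery "back" network (the 10GbE side)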
[20:30] <earnThis> T1: ive already got the 2 10gb switches
[20:30] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[20:30] * rendar (~I@87.18.177.238) has joined #ceph
[20:31] <T1> TheSov2: what model cienna?
[20:32] <TheSov2> 3930
[20:32] * zero_shane (~textual@208.46.223.218) has joined #ceph
[20:33] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[20:33] <T1> but that's got just 2 10g ports?
[20:33] <TheSov2> yes
[20:33] <TheSov2> u connect those to your ceph hosts
[20:34] <TheSov2> and use the rest of your ports as front end
[20:34] <TheSov2> not for backend
[20:34] <T1> errr...
[20:34] <TheSov2> for backend use mellanox
[20:34] <T1> I think we're talking about 2 different need here.. :)
[20:34] <T1> needs even
[20:34] <T1> oh well.. afk
[20:34] <TheSov2> so i use those cienna's to connect 10g to 1gig front end
[20:35] <TheSov2> and for backend i use mellanox IPoIB
[20:38] * rakeshgm (~rakesh@106.51.28.105) has joined #ceph
[20:38] * sickology (~mio@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[20:39] <TheSov2> my home one is all ethernet
[20:39] <Psi-Jack> Heh, one thing I really love about CephFS is the xattr's for it. So freaking nice to have that. :)
[20:39] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[20:41] * sickology (~mio@vpn.bcs.hr) has joined #ceph
[20:42] * ade (~abradshaw@dslb-178-008-040-187.178.008.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[20:53] <rkeene> Your mom likes it.
[20:54] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[20:55] * sage__ is now known as sage
[20:55] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[21:04] * shaunm (~shaunm@cpe-192-180-17-174.kya.res.rr.com) Quit (Ping timeout: 480 seconds)
[21:06] <Psi-Jack> heh, rkeene too, I see. LOL
[21:06] * praveen (~praveen@122.171.64.106) Quit (Read error: Connection reset by peer)
[21:06] * rakeshgm (~rakesh@106.51.28.105) Quit (Quit: Leaving)
[21:07] * ntpttr_ (~ntpttr@192.55.54.42) Quit (Remote host closed the connection)
[21:10] * garphy`aw is now known as garphy
[21:11] * sudocat (~dibarra@192.185.1.20) has left #ceph
[21:11] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[21:13] * karnan (~karnan@106.51.138.205) Quit (Quit: Leaving)
[21:25] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Quit: Ex-Chat)
[21:28] * praveen (~praveen@122.171.64.106) has joined #ceph
[21:30] * squizzi (~squizzi@107.13.31.195) Quit (Quit: bye)
[21:31] * earnThis (~oftc-webi@209.37.168.99) Quit (Ping timeout: 480 seconds)
[21:36] * Concubidated (~cube@ai126248065090.9.tss.access-internet.ne.jp) Quit (Quit: Leaving.)
[21:37] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[21:38] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[21:38] * ntpttr_ (~ntpttr@134.134.139.83) has joined #ceph
[21:40] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[21:40] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Remote host closed the connection)
[21:42] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[21:48] * reed_ (~reed@184-23-0-196.dsl.static.fusionbroadband.com) has joined #ceph
[21:48] * reed_ (~reed@184-23-0-196.dsl.static.fusionbroadband.com) Quit (Remote host closed the connection)
[21:53] * scuttlemonkey is now known as scuttle|afk
[21:54] * reed (~reed@50-1-125-26.dsl.dynamic.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[21:54] * bryanapperson_ (~bryanappe@50-203-47-138-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[21:55] * dnunez-remote (~dnunez@130.64.25.56) Quit (Quit: Leaving)
[21:57] * jproulx (~jon@128.30.30.25) has left #ceph
[22:04] * scg (~zscg@valis.gnu.org) Quit (Remote host closed the connection)
[22:07] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[22:12] * scg (~zscg@valis.gnu.org) has joined #ceph
[22:12] * praveen (~praveen@122.171.64.106) Quit (Read error: Connection reset by peer)
[22:15] * georgem (~Adium@206.108.127.16) has left #ceph
[22:22] <overload> i've an issue with my cluster... I need to do a firmware upgrade on 1 osd server, so i set the noout flag for its osds and then shut the server down. While the node is down i get slow request warnings and the VMs can not write...
[22:22] <overload> and several messages like this: cluster [WRN] slow request 240.407203 seconds old, received at 2016-07-08 16:18:46.410218: osd_op(client.854762.0:8678415 3.d1e26272 (undecoded) ondisk+write+known_if_redirected e9030) currently waiting for active
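For reference, a sketch of the usual noout maintenance sequence; note that noout only stops the OSDs from being marked out, so writes to PGs whose acting set falls below min_size (or whose replicas all live on the downed node) will still block until it comes back:
    ceph osd set noout       # before shutting the node down
    # ... power off, do the firmware upgrade, boot the node ...
    # once its OSDs have rejoined and PGs are active+clean again:
    ceph osd unset noout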
[22:26] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Ex-Chat)
[22:26] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[22:27] * praveen (~praveen@122.171.64.106) has joined #ceph
[22:29] * rotbeard (~redbeard@2a02:908:df13:bb00:a022:ecb7:cc5e:d8dd) has joined #ceph
[22:34] * dusti (~Hejt@torrelay1.tomhek.net) has joined #ceph
[22:35] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[22:42] * reed (~reed@184-23-0-196.dsl.static.fusionbroadband.com) has joined #ceph
[22:46] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[22:47] * ntpttr_ (~ntpttr@134.134.139.83) Quit (Remote host closed the connection)
[22:47] * ntpttr_ (~ntpttr@192.55.54.44) has joined #ceph
[22:49] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[22:55] <untoreh> ceph-docker got really fat with jewel, it doubled the size from infernalis :o
[22:58] * squizzi_ (~squizzi@2001:420:2240:1268:a0b7:f4b7:490:2105) Quit (Quit: bye)
[23:02] * dusti (~Hejt@7EXAAABFM.tor-irc.dnsbl.oftc.net) Quit ()
[23:03] <T1> overload: the node you closed down - has it got more than one OSD?
[23:04] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[23:05] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Quit: wes_dillingham)
[23:11] * scg (~zscg@valis.gnu.org) Quit (Remote host closed the connection)
[23:11] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[23:11] * hellertime (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[23:13] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[23:15] * rdias (~rdias@2001:8a0:749a:d01:503e:db9d:4f7b:d053) Quit (Ping timeout: 480 seconds)
[23:20] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[23:20] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[23:24] * praveen (~praveen@122.171.64.106) Quit (Read error: Connection reset by peer)
[23:29] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[23:34] * rotbeard (~redbeard@2a02:908:df13:bb00:a022:ecb7:cc5e:d8dd) Quit (Quit: Leaving)
[23:39] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[23:42] * swami1 (~swami@27.7.169.81) Quit (Quit: Leaving.)
[23:44] * praveen (~praveen@122.171.64.106) has joined #ceph
[23:48] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[23:49] * thansen (~thansen@17.253.sfcn.org) Quit (Quit: Ex-Chat)
[23:50] <Psi-Jack> I read about Ceph Hammer having straw2 for a new algorithm. Is it possible to change from straw to straw2 easily, online even?
[23:52] <Psi-Jack> Ahh, apparently so.
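A sketch of the usual straw-to-straw2 conversion via the decompiled CRUSH map; it can be done with the cluster online, provided every daemon and client is Hammer or newer, and some data movement should be expected:
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: change "alg straw" to "alg straw2" in each bucket definition
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new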
[23:56] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[23:57] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[23:58] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.