#ceph IRC Log


IRC Log for 2016-02-25

Timestamps are in GMT/BST.

[0:00] * dillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[0:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[0:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[0:03] * fsimonce (~simon@host85-7-dynamic.49-82-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:04] * adun153 (~ljtirazon@49.144.74.236) has joined #ceph
[0:05] * jclm (~jclm@63.117.50.130) has joined #ceph
[0:12] * LongyanG (~long@15255.s.time4vps.eu) has joined #ceph
[0:22] * ronrib (~boswortr@45.32.242.135) has joined #ceph
[0:23] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[0:23] * rapedex (~maku@46.166.188.220) Quit ()
[0:23] * elt (~Coe|work@destiny.enn.lu) has joined #ceph
[0:27] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:37] * LongyanG (~long@15255.s.time4vps.eu) Quit (Quit: Changing server)
[0:38] * simeon_ (~simeon@apu0.dhcp.meraka.csir.co.za) Quit (Ping timeout: 480 seconds)
[0:38] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[0:41] * LongyanG (~long@15255.s.time4vps.eu) has joined #ceph
[0:42] * LongyanG (~long@15255.s.time4vps.eu) Quit ()
[0:42] * LongyanG (~long@15255.s.time4vps.eu) has joined #ceph
[0:45] * rendar (~I@host18-177-dynamic.20-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[0:48] * rendar (~I@host18-177-dynamic.20-87-r.retail.telecomitalia.it) has joined #ceph
[0:49] * bene_in_mtg (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:50] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[0:51] * yanzheng1 (~zhyan@182.139.20.43) has joined #ceph
[0:52] <motk> bah
[0:52] <motk> anyone familiar with ceph-ansible?
[0:53] * mattbenjamin (~mbenjamin@sccc-66-78-236-243.smartcity.com) Quit (Ping timeout: 480 seconds)
[0:53] * elt (~Coe|work@84ZAACVD5.tor-irc.dnsbl.oftc.net) Quit ()
[0:53] * basicxman (~datagutt@chomsky.torservers.net) has joined #ceph
[0:58] * LongyanG (~long@15255.s.time4vps.eu) Quit (Quit: Changing server)
[1:00] * LongyanG (~long@15255.s.time4vps.eu) has joined #ceph
[1:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[1:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[1:02] * rendar (~I@host18-177-dynamic.20-87-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:04] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Ping timeout: 480 seconds)
[1:13] * MrFusion (~reflexer@209-207-112-193.ip.van.radiant.net) Quit (Quit: Leaving)
[1:21] * yanzheng1 (~zhyan@182.139.20.43) Quit (Quit: This computer has gone to sleep)
[1:21] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[1:21] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[1:22] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[1:23] * basicxman (~datagutt@4MJAACWHR.tor-irc.dnsbl.oftc.net) Quit ()
[1:23] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[1:23] * Rickus (~Rickus@office.protected.ca) Quit (Read error: No route to host)
[1:27] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[1:31] * adun153 (~ljtirazon@49.144.74.236) Quit (Ping timeout: 480 seconds)
[1:31] * adun153 (~ljtirazon@49.144.74.236) has joined #ceph
[1:32] * adun153 (~ljtirazon@49.144.74.236) Quit (Remote host closed the connection)
[1:34] <motk> aw man
[1:34] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[1:34] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[1:35] * bjornar__ (~bjornar@ti0099a430-0908.bb.online.no) Quit (Ping timeout: 480 seconds)
[1:35] * bjornar_ (~bjornar@ti0099a430-0908.bb.online.no) Quit (Ping timeout: 480 seconds)
[1:35] * LobsterRoll (~LobsterRo@209-6-180-200.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:38] * bpetit (~bpetit@chiron.b0rk.in) Quit (Quit: Lost terminal)
[1:57] * sudocat1 (~dibarra@192.185.1.19) has joined #ceph
[1:57] * sudocat1 (~dibarra@192.185.1.19) Quit ()
[1:58] * sudocat1 (~dibarra@192.185.1.19) has joined #ceph
[1:58] * sudocat1 (~dibarra@192.185.1.19) Quit ()
[1:58] * sudocat1 (~dibarra@192.185.1.19) has joined #ceph
[1:58] * sudocat1 (~dibarra@192.185.1.19) Quit ()
[1:59] * sudocat1 (~dibarra@192.185.1.19) has joined #ceph
[1:59] * ronrib (~boswortr@45.32.242.135) Quit (Remote host closed the connection)
[2:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[2:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[2:01] * enax (~enax@94-21-125-144.pool.digikabel.hu) has joined #ceph
[2:02] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) has joined #ceph
[2:03] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) Quit ()
[2:05] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[2:05] * vbellur (~vijay@2601:647:4f00:4960:5e51:4fff:fee8:6a5c) Quit (Ping timeout: 480 seconds)
[2:05] * oms101 (~oms101@p20030057EA053A00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:06] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[2:07] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[2:07] * sudocat1 (~dibarra@192.185.1.19) Quit (Ping timeout: 480 seconds)
[2:07] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) has joined #ceph
[2:08] * LeaChim (~LeaChim@host86-171-90-242.range86-171.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:11] * dillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[2:13] * oms101 (~oms101@p20030057EA05DA00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:20] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[2:21] * dillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[2:22] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[2:23] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[2:24] * thansen (~thansen@17.253.sfcn.org) Quit (Quit: Ex-Chat)
[2:25] * longsube (~longlq@203.162.130.241) has joined #ceph
[2:26] * longsube (~longlq@203.162.130.241) Quit ()
[2:27] * curtis864 (~airsoftgl@192.42.115.101) has joined #ceph
[2:28] * mattbenjamin (~mbenjamin@sccc-66-78-236-243.smartcity.com) has joined #ceph
[2:29] * simeon_ (~simeon@apu0.dhcp.meraka.csir.co.za) has joined #ceph
[2:31] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[2:32] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[2:33] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[2:36] * sleinen (~Adium@2001:620:1000:3:a65e:60ff:fedb:f305) Quit (Ping timeout: 480 seconds)
[2:37] * simeon_ (~simeon@apu0.dhcp.meraka.csir.co.za) Quit (Ping timeout: 480 seconds)
[2:40] * LobsterRoll (~LobsterRo@209-6-180-200.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has left #ceph
[2:41] * mattbenjamin (~mbenjamin@sccc-66-78-236-243.smartcity.com) Quit (Ping timeout: 480 seconds)
[2:44] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[2:47] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[2:49] * shyu_ (~shyu@123.116.175.103) Quit (Quit: Leaving)
[2:57] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[2:57] * curtis864 (~airsoftgl@4MJAACWKY.tor-irc.dnsbl.oftc.net) Quit ()
[2:58] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[2:59] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[3:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[3:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[3:02] * KristopherBel (~xolotl@94.242.228.108) has joined #ceph
[3:03] * aj__ (~aj@x590d8bc7.dyn.telefonica.de) has joined #ceph
[3:04] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[3:11] * LobsterRoll (~LobsterRo@209-6-180-200.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[3:11] * derjohn_mobi (~aj@x590dfb93.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:12] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[3:19] * enax (~enax@94-21-125-144.pool.digikabel.hu) Quit (Ping timeout: 480 seconds)
[3:24] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[3:24] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[3:24] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[3:24] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[3:26] * xarses (~xarses@173-164-194-206-SFBA.hfc.comcastbusiness.net) has joined #ceph
[3:26] * vbellur (~vijay@sccc-66-78-236-243.smartcity.com) has joined #ceph
[3:27] * xarses (~xarses@173-164-194-206-SFBA.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[3:27] * xarses (~xarses@173-164-194-206-SFBA.hfc.comcastbusiness.net) has joined #ceph
[3:29] * zhaochao (~zhaochao@125.39.112.15) has joined #ceph
[3:29] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:32] * KristopherBel (~xolotl@84ZAACVI3.tor-irc.dnsbl.oftc.net) Quit ()
[3:33] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[3:33] * yanzheng1 (~zhyan@182.139.20.43) has joined #ceph
[3:35] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:35] * naoto (~naotok@27.131.11.254) has joined #ceph
[3:35] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[3:37] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:39] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[3:40] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:40] * xarses (~xarses@173-164-194-206-SFBA.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[3:41] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[3:47] * branto (~branto@ip-78-102-208-28.net.upcbroadband.cz) has joined #ceph
[3:49] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[3:51] * Meths_ (~meths@95.151.244.217) has joined #ceph
[3:51] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) has joined #ceph
[3:55] * Meths (~meths@2.25.223.148) Quit (Ping timeout: 480 seconds)
[3:57] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[3:58] * MentalRay (~MentalRay@142.169.78.61) has joined #ceph
[3:58] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[3:59] * BrianA (~BrianA@fw-rw.shutterfly.com) Quit (Read error: Connection reset by peer)
[3:59] * MentalRay (~MentalRay@142.169.78.61) Quit ()
[4:00] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:00] * sudocat (~dibarra@2602:306:8bc7:4c50:4cff:bb45:565b:74a) has joined #ceph
[4:00] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[4:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[4:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[4:02] * Pirate (~yuastnav@edwardsnowden0.torservers.net) has joined #ceph
[4:02] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:04] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[4:04] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[4:04] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[4:05] * kefu (~kefu@114.92.107.250) has joined #ceph
[4:08] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:12] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[4:12] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[4:14] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[4:14] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[4:15] * rakeshgm (~rakesh@106.51.225.55) Quit (Quit: Leaving)
[4:20] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:22] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[4:22] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[4:26] * ira (~ira@121.244.87.117) has joined #ceph
[4:32] * Pirate (~yuastnav@84ZAACVK1.tor-irc.dnsbl.oftc.net) Quit ()
[4:32] * Phase (~Spessu@tesla16.play-with-crypto.space) has joined #ceph
[4:33] * Phase is now known as Guest5550
[4:33] * kefu (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:34] * vbellur (~vijay@sccc-66-78-236-243.smartcity.com) Quit (Ping timeout: 480 seconds)
[4:34] * ira (~ira@121.244.87.117) Quit (Quit: Leaving)
[4:38] * vbellur (~vijay@sccc-66-78-236-243.smartcity.com) has joined #ceph
[4:39] * vbellur (~vijay@sccc-66-78-236-243.smartcity.com) has left #ceph
[4:50] * georgem (~Adium@184.175.1.33) has joined #ceph
[4:55] * kefu (~kefu@114.92.107.250) has joined #ceph
[4:55] * kefu (~kefu@114.92.107.250) Quit (Remote host closed the connection)
[4:56] * kefu (~kefu@114.92.107.250) has joined #ceph
[5:00] * petracvv_ (~quassel@c-50-158-9-81.hsd1.il.comcast.net) Quit (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.)
[5:00] * kefu (~kefu@114.92.107.250) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[5:02] * Guest5550 (~Spessu@7V7AACVXP.tor-irc.dnsbl.oftc.net) Quit ()
[5:02] * TheDoudou_a (~homosaur@195.228.45.176) has joined #ceph
[5:02] * kefu (~kefu@114.92.107.250) has joined #ceph
[5:09] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[5:10] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:10] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[5:10] * pvh_sa (~pvh@169-0-182-212.ip.afrihost.co.za) has joined #ceph
[5:19] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:21] * Vacuum_ (~Vacuum@88.130.210.240) has joined #ceph
[5:27] * Vacuum__ (~Vacuum@88.130.221.85) Quit (Ping timeout: 480 seconds)
[5:32] * TheDoudou_a (~homosaur@76GAACP8P.tor-irc.dnsbl.oftc.net) Quit ()
[5:32] * Averad (~murmur@93.115.95.206) has joined #ceph
[5:33] * swami1 (~swami@49.44.57.235) has joined #ceph
[5:33] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[5:36] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[5:38] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[5:42] * swami2 (~swami@49.38.2.20) has joined #ceph
[5:42] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit ()
[5:47] * swami1 (~swami@49.44.57.235) Quit (Ping timeout: 480 seconds)
[5:59] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[5:59] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[6:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[6:01] * linuxkidd (~linuxkidd@52.sub-70-193-68.myvzw.com) Quit (Ping timeout: 480 seconds)
[6:01] * georgem (~Adium@184.175.1.33) Quit (Quit: Leaving.)
[6:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[6:02] * Averad (~murmur@84ZAACVOW.tor-irc.dnsbl.oftc.net) Quit ()
[6:02] * Xa (~Freddy@chulak.enn.lu) has joined #ceph
[6:05] * simeon_ (~simeon@apu0.dhcp.meraka.csir.co.za) has joined #ceph
[6:06] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:10] * linuxkidd (~linuxkidd@52.sub-70-193-68.myvzw.com) has joined #ceph
[6:11] * linuxkidd (~linuxkidd@52.sub-70-193-68.myvzw.com) Quit (Remote host closed the connection)
[6:14] * simeon_ (~simeon@apu0.dhcp.meraka.csir.co.za) Quit (Ping timeout: 480 seconds)
[6:27] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[6:27] * pvh_sa (~pvh@169-0-182-212.ip.afrihost.co.za) Quit (Ping timeout: 480 seconds)
[6:30] * simeon_ (~simeon@apu0.dhcp.meraka.csir.co.za) has joined #ceph
[6:32] * Xa (~Freddy@4MJAACWRS.tor-irc.dnsbl.oftc.net) Quit ()
[6:32] * KeeperOfTheSoul (~Tenk@tor.yrk.urgs.uk0.bigv.io) has joined #ceph
[6:36] * karnan (~karnan@121.244.87.117) has joined #ceph
[6:39] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:39] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[6:56] <motk> well this is very vexing
[6:56] * swami2 (~swami@49.38.2.20) Quit (Ping timeout: 480 seconds)
[6:56] <motk> running an ordinary ceph-ansible play and one monitor refuses to join
[6:57] * swami1 (~swami@49.38.2.20) has joined #ceph
[7:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[7:02] * KeeperOfTheSoul (~Tenk@4MJAACWSN.tor-irc.dnsbl.oftc.net) Quit ()
[7:02] * dicko (~oracular@84ZAACVRK.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:04] <motk> just hanging on ceph-create-keys --id {{ ansible_hostname }}
[7:05] * swami1 (~swami@49.38.2.20) Quit (Ping timeout: 480 seconds)
[7:11] * swami1 (~swami@49.38.2.20) has joined #ceph
[7:13] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[7:21] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:22] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[7:23] * ibravo2 (~ibravo@72.198.142.104) Quit (Quit: Leaving)
[7:23] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[7:26] * swami2 (~swami@49.38.2.20) has joined #ceph
[7:27] * swami1 (~swami@49.38.2.20) Quit (Ping timeout: 480 seconds)
[7:32] * dicko (~oracular@84ZAACVRK.tor-irc.dnsbl.oftc.net) Quit ()
[7:32] * PierreW (~pakman__@94.242.228.108) has joined #ceph
[7:34] * swami2 (~swami@49.38.2.20) Quit (Ping timeout: 480 seconds)
[7:37] * dgurtner (~dgurtner@178.197.225.39) has joined #ceph
[7:38] * kawa2014 (~kawa@94.165.24.146) has joined #ceph
[7:41] * swami1 (~swami@49.44.57.235) has joined #ceph
[7:41] * nardial (~ls@dslb-088-066-162-091.088.066.pools.vodafone-ip.de) has joined #ceph
[7:46] * kawa2014 (~kawa@94.165.24.146) Quit (Ping timeout: 480 seconds)
[7:47] * kawa2014 (~kawa@94.162.75.170) has joined #ceph
[7:48] * Geph (~Geoffrey@169-1-168-32.ip.afrihost.co.za) Quit (Ping timeout: 480 seconds)
[7:49] * overclk (~vshankar@121.244.87.117) has joined #ceph
[7:51] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[7:53] * simeon_ (~simeon@apu0.dhcp.meraka.csir.co.za) Quit (Ping timeout: 480 seconds)
[7:55] * ade (~abradshaw@80.62.20.66) has joined #ceph
[7:56] * ade (~abradshaw@80.62.20.66) Quit (Remote host closed the connection)
[7:56] * swami1 (~swami@49.44.57.235) Quit (Ping timeout: 480 seconds)
[7:58] * swami1 (~swami@49.44.57.235) has joined #ceph
[8:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[8:02] * PierreW (~pakman__@84ZAACVSG.tor-irc.dnsbl.oftc.net) Quit ()
[8:02] * SEBI (~chrisinaj@94.242.228.108) has joined #ceph
[8:03] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:09] * LobsterRoll (~LobsterRo@209-6-180-200.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: LobsterRoll)
[8:09] * LobsterRoll (~LobsterRo@209-6-180-200.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[8:09] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[8:10] * LobsterRoll (~LobsterRo@209-6-180-200.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit ()
[8:12] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[8:12] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[8:15] <Be-El> hi
[8:20] * kefu is now known as kefu|afk
[8:21] <motk> ceph-ansible seems borken atm :(
[8:21] <motk> leseb: ping?
[8:23] * sleinen (~Adium@2001:620:0:2d:a65e:60ff:fedb:f305) has joined #ceph
[8:24] * kefu|afk is now known as kefu
[8:24] * simeon_ (~simeon@2001:4200:7000:3:5d82:6e7f:9126:61df) has joined #ceph
[8:29] * pvh_sa (~pvh@41.164.8.114) has joined #ceph
[8:30] * shohn (~shohn@71.Red-88-5-214.dynamicIP.rima-tde.net) has joined #ceph
[8:32] * SEBI (~chrisinaj@76GAACQCC.tor-irc.dnsbl.oftc.net) Quit ()
[8:32] * offender (~Altitudes@193.90.12.86) has joined #ceph
[8:33] * enax (~enax@94-21-125-144.pool.digikabel.hu) has joined #ceph
[8:34] * shohn (~shohn@71.Red-88-5-214.dynamicIP.rima-tde.net) has left #ceph
[8:38] * dgurtner (~dgurtner@178.197.225.39) Quit (Ping timeout: 480 seconds)
[8:38] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Quit: Bye)
[8:43] * daviddcc (~dcasier@80.12.39.221) has joined #ceph
[8:44] <simeon_> following on from discussion with neurodrone yesterday, I've been trying to influence the source IP address selection for librbd connections (qemu): turns out that neither "public network" or "public addr" seem to work - maybe those are only used by ceph daemons?
[8:46] * shylesh__ (~shylesh@121.244.87.118) has joined #ceph
[8:49] <simeon_> I only see references to pick_addresses() in ceph_osd.cc, ceph_mds.cc and ceph_osd.cc
[8:49] <Be-El> simeon_: afaik these settings only apply to the ceph daemons
[8:50] <simeon_> Be-El: ok, that's useful to know
[8:51] <Be-El> i believe the number of use cases for binding client source address is rather limited
[8:51] * dgurtner (~dgurtner@178.197.231.69) has joined #ceph
[8:56] <simeon_> Be-El: I'm running ceph in an L3 routed network and I want the rbd client to bind to an interface that never goes down (a bridge, in my case, but equivalent to a loopback)
[8:57] <simeon_> Be-El: problem is that it binds directly to the NICs, so if one of two goes down, I lose half the TCP connections
[8:58] <simeon_> elsewhere I'm using network namespaces to force source address binding, but it won't work here
[8:58] * vikhyat is now known as vikhyat|food
[8:58] * aj__ (~aj@x590d8bc7.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[8:59] * daviddcc (~dcasier@80.12.39.221) Quit (Ping timeout: 480 seconds)
[9:00] * analbeard (~shw@support.memset.com) has joined #ceph
[9:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[9:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[9:02] <simeon_> I have a sneaky idea, but it seems a bit hackish: set the 'src' attribute of the kernel route to the destination
[9:02] * offender (~Altitudes@4MJAACWVY.tor-irc.dnsbl.oftc.net) Quit ()
[9:02] * Popz (~cryptk@37.48.80.101) has joined #ceph
[9:08] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[9:14] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[9:15] * pam (~pam@193.106.183.1) has joined #ceph
[9:16] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[9:17] * enax (~enax@94-21-125-144.pool.digikabel.hu) Quit (Ping timeout: 480 seconds)
[9:20] <simeon_> ah my sneaky plan worked
[9:20] <simeon_> BIRD to the rescue
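The trick simeon_ lands on relies on the kernel's per-route preferred-source attribute (BIRD can set it via krt_prefsrc). A minimal manual sketch of the same idea with iproute2, assuming a stable address on an always-up bridge; addresses and interface names are placeholders:

    # put the stable address on an interface that never goes down
    ip addr add 10.0.0.5/32 dev br0
    # install the route towards the Ceph public network with that address as preferred source
    ip route replace 10.1.0.0/16 via 10.0.1.1 src 10.0.0.5

With the src hint in place, librbd/qemu connections towards that network should originate from the bridge address regardless of which physical NIC carries the traffic.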
[9:25] * ade (~abradshaw@guest.dyn.rm.dk) has joined #ceph
[9:32] * hyperbaba (~hyperbaba@private.neobee.net) has joined #ceph
[9:32] * Popz (~cryptk@7V7AACV1T.tor-irc.dnsbl.oftc.net) Quit ()
[9:33] * fsimonce (~simon@host85-7-dynamic.49-82-r.retail.telecomitalia.it) has joined #ceph
[9:37] * Snowcat4 (~Neon@91.109.29.120) has joined #ceph
[9:37] * dvanders (~dvanders@dvanders-pro.cern.ch) has joined #ceph
[9:40] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:42] <Miouge> Thank you for the info IcePic
[9:42] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:44] * aj__ (~aj@2001:6f8:1337:0:3005:d440:b7b4:65a9) has joined #ceph
[9:46] * analbeard (~shw@support.memset.com) has joined #ceph
[9:46] * analbeard (~shw@support.memset.com) has left #ceph
[9:50] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Ping timeout: 480 seconds)
[9:53] * kawa2014 (~kawa@94.162.75.170) Quit (Quit: Leaving)
[9:54] <boichev> Hello, I am trying to fix a stuck pg with "ceph pg repair" and I get "Error EAGAIN: pg 6.28 primary osd.4 not up" for some reason it thinks the primary pg is on osd.4 but "ceph pg map" gives me "osdmap e1064 pg 6.28 (6.28) -> up [3] acting [3]" osd.4 is out of the cluster.... How can I force primary pg change ?
[9:57] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:4963:611d:d4e9:c457) has joined #ceph
[9:57] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:59] * vikhyat|food is now known as vikhyat
[10:00] * kefu is now known as kefu|afk
[10:00] <Be-El> boichev: ceph pg map only calculates where a PG _should_ be placed. for several reasons it might be placed on another osd
[10:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[10:03] <boichev> Be-El I get it now.... Can you just tell me, "ceph pg dump" has a column named "bytes"; this is the data inside the PG, right?
[10:04] <Be-El> boichev: i think it should be the accumulated size of all objects, yes
[10:04] <boichev> So If I force create a PG that has 0 objects, 0 mip, 0 degr, 0 unf, 0 bytes, 0 log, 0 disklog there will be no data lost because the PG is empty
[10:06] <boichev> I notice another thing: the empty PGs have only 1 "acting" osd and the full ones have 2 (because of my replication factor of 2)
[10:06] <boichev> So when I removed the osd the empty PGs that were on it got lost
[10:07] <boichev> but because they are empty no problem to recreate ....
[10:07] * Snowcat4 (~Neon@76GAACQD4.tor-irc.dnsbl.oftc.net) Quit ()
[10:07] * kefu|afk is now known as kefu
[10:07] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:07] * Schaap (~cyphase@94.242.228.108) has joined #ceph
[10:08] <Be-El> boichev: if you have a size of 2 for the pool, you should have 2 acting osds.
[10:08] <Be-El> boichev: if this is not the case, there might be a problem with your crush ruleset
[10:11] <boichev> Be-El I will try to force recreate the ones that are stale+active+clean because there is no data on it and I will see if they get recreated on 2 locations
[10:11] <Be-El> if your problems are crush related, this won't help
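For readers following boichev's issue, a few inspection commands of that era help distinguish a CRUSH/ruleset problem from a merely stale mapping (pg 6.28 is taken from the conversation; the pool name is a placeholder):

    ceph pg 6.28 query                 # shows current up/acting sets and recovery state
    ceph pg dump_stuck stale           # lists PGs whose primary OSD has gone away
    ceph osd pool get <pool> size      # confirm the pool really has size 2
    ceph osd tree                      # check that CRUSH can still pick 2 OSDs for the rule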
[10:11] * enax (~enax@hq.ezit.hu) has joined #ceph
[10:12] * enax (~enax@hq.ezit.hu) has left #ceph
[10:21] * zhaochao_ (~zhaochao@124.202.191.135) has joined #ceph
[10:25] <hyperbaba> Hi there, i have a big problem trying to delete object from radosgw integrated with openstack. The error is : "WARNING: set_req_state_err err_no=95 resorting to 500" . Can anyone help me please?
[10:26] * owasserm (~owasserm@nat-pool-ams-t.redhat.com) has joined #ceph
[10:27] * zhaochao (~zhaochao@125.39.112.15) Quit (Ping timeout: 480 seconds)
[10:27] * zhaochao_ is now known as zhaochao
[10:31] * lmb (~lmb@ip5b404b34.dynamic.kabel-deutschland.de) Quit (Remote host closed the connection)
[10:32] * lmb (~lmb@ip5b404b34.dynamic.kabel-deutschland.de) has joined #ceph
[10:33] * pam (~pam@193.106.183.1) Quit (Quit: pam)
[10:35] * pam (~pam@193.106.183.1) has joined #ceph
[10:35] * b0e (~aledermue@213.95.25.82) has joined #ceph
[10:36] <boichev> Be-El I force created one, now it's stuck in the creating state....
[10:37] * Schaap (~cyphase@4MJAACWY8.tor-irc.dnsbl.oftc.net) Quit ()
[10:37] * luigiman (~Shadow386@chomsky.torservers.net) has joined #ceph
[10:39] <boichev> Be-El from the output of ceph pg dump | grep "\[4\]" I see only empty PGs on the problematic OSD.... Should I try to mark the osd as lost? I tried ceph osd rm and ceph osd crush remove, both returning that this osd is not there......
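The log does not show the outcome, but the usual sequence for declaring a dead OSD lost and cleaning it out of the cluster looks roughly like the following (osd.4 and pg 6.28 are taken from the conversation above; double-check each step against the documentation for your release before running it):

    ceph osd lost 4 --yes-i-really-mean-it    # tell the cluster the data on osd.4 is gone
    ceph osd out 4                            # (if not already out)
    ceph osd crush remove osd.4               # remove it from the CRUSH map
    ceph auth del osd.4                       # drop its key
    ceph osd rm 4                             # remove it from the OSD map
    ceph pg force_create_pg 6.28              # recreate an empty, stale PG (pre-Luminous command)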
[10:40] * TMM (~hp@185.5.122.2) has joined #ceph
[10:43] * LeaChim (~LeaChim@host86-171-90-242.range86-171.btcentralplus.com) has joined #ceph
[10:43] <edgarsb> Good morning! Will primary-affinity be respected if OSDs are split into seperate branches?
[10:43] <edgarsb> ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
[10:43] <edgarsb> -100 17.27998 root default
[10:43] <edgarsb> -15 5.75999 chassis chassis-prox-03
[10:43] <edgarsb> -10 0.29999 host prox-03-ssd
[10:43] <edgarsb> 7 0.09999 osd.7 up 1.00000 1.00000
[10:43] <edgarsb> 12 0.09999 osd.12 up 1.00000 1.00000
[10:43] <edgarsb> 13 0.09999 osd.13 up 1.00000 1.00000
[10:43] <edgarsb> -7 0.29999 host prox-03-hdd
[10:44] <edgarsb> 5 1.81999 osd.5 up 1.00000 0
[10:44] <edgarsb> 14 1.81999 osd.14 up 1.00000 0
[10:44] <edgarsb> 17 1.81999 osd.17 up 1.00000 0
[10:44] <edgarsb> -14 5.75999 chassis chassis-prox-02
[10:44] <edgarsb> -9 0.29999 host prox-02-ssd
[10:44] <edgarsb> 6 0.09999 osd.6 up 1.00000 1.00000
[10:44] <edgarsb> 9 0.09999 osd.9 up 1.00000 1.00000
[10:44] <edgarsb> 10 0.09999 osd.10 up 1.00000 1.00000
[10:44] <edgarsb> -6 0.29999 host prox-02-hdd
[10:44] <edgarsb> 3 1.81999 osd.3 up 1.00000 0
[10:44] <IcePic> ..if only there was pastebin sites where you could put text-vomit and get an URL back...
[10:44] <edgarsb> 11 1.81999 osd.11 up 1.00000 0
[10:44] <edgarsb> 16 1.81999 osd.16 up 1.00000 0
[10:44] <edgarsb> -13 5.75999 chassis chassis-prox-01
[10:44] <edgarsb> -8 0.29999 host prox-01-ssd
[10:44] <edgarsb> 0 0.09999 osd.0 up 1.00000 1.00000
[10:44] <edgarsb> 2 0.09999 osd.2 up 1.00000 1.00000
[10:44] <edgarsb> 4 0.09999 osd.4 up 1.00000 1.00000
[10:44] <edgarsb> -5 0.29999 host prox-01-hdd
[10:44] <edgarsb> 1 1.81999 osd.1
[10:44] <edgarsb> damn, sorry
[10:46] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:47] <edgarsb> rule is set to chooseleaf type chassis
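edgarsb's question goes unanswered in the log, but for reference these are the knobs involved on releases of that era; the values below are illustrative only:

    # primary affinity must be allowed by the monitors first
    ceph tell mon.* injectargs '--mon-osd-allow-primary-affinity 1'
    ceph osd primary-affinity osd.5 0        # never prefer osd.5 as primary
    ceph osd primary-affinity osd.7 1.0      # prefer osd.7 as primary
    ceph pg dump pgs_brief | head            # the acting_primary column shows who actually serves reads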
[10:48] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[10:49] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[10:51] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[11:00] * kefu (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:01] * Geph (~Geoffrey@41.77.153.99) has joined #ceph
[11:01] * haomaiwa_ (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[11:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[11:07] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[11:07] * rendar (~I@95.238.180.245) has joined #ceph
[11:07] * luigiman (~Shadow386@4MJAACW0P.tor-irc.dnsbl.oftc.net) Quit ()
[11:07] * Miho (~bildramer@67.ip-92-222-38.eu) has joined #ceph
[11:07] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[11:08] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[11:08] * Geph (~Geoffrey@41.77.153.99) Quit (Quit: Leaving)
[11:10] * Geph (~Geoffrey@41.77.153.99) has joined #ceph
[11:12] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[11:12] * GabrielDias (~rb@srx.h1host.ru) has joined #ceph
[11:13] * Geph (~Geoffrey@41.77.153.99) has left #ceph
[11:14] * Geph (~Geoffrey@41.77.153.99) has joined #ceph
[11:15] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[11:16] <GabrielDias> Hi!
[11:16] <GabrielDias> What was the problem?
[11:16] <GabrielDias> [ceph-node-00][ERROR ] RuntimeError: command returned non-zero exit status: 1
[11:16] <GabrielDias> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init upst
[11:16] <GabrielDias> IF i manualy start it than
[11:16] <GabrielDias> ceph@ceph-node-00:~/ceph-admin$ ceph-disk -v activate --mark-init upstart --mount /dev/sdb
[11:16] <GabrielDias> Traceback (most recent call last):
[11:16] <GabrielDias> File "/usr/sbin/ceph-disk", line 2798, in <module>
[11:16] <GabrielDias> main()
[11:16] <GabrielDias> File "/usr/sbin/ceph-disk", line 2776, in main
[11:16] <GabrielDias> args.func(args)
[11:16] <GabrielDias> File "/usr/sbin/ceph-disk", line 1996, in main_activate
[11:16] <GabrielDias> activate_lock.acquire() # noqa
[11:16] <GabrielDias> File "/usr/sbin/ceph-disk", line 146, in acquire
[11:16] <GabrielDias> self.fd = file(self.fn, 'w')
[11:17] <GabrielDias> Does ls -lah /var/lib/ceph/ must be ceph permissions?
[11:17] <GabrielDias> Now it is root.
[11:20] * GabrielDias (~rb@srx.h1host.ru) Quit (Quit: BitchX-1.3-git -- just do it.)
[11:21] * GabrielDias (~GabrielDi@srx.h1host.ru) has joined #ceph
[11:25] <IcePic> at some point ceph made osds run as "ceph" instead of root, so older guides/scripts may be wrong on that account.
[11:26] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[11:27] <GabrielDias> i use ceph-deploy
[11:27] <IcePic> https://ceph.com/releases/v9-2-0-infernalis-released/
[11:29] <GabrielDias> Thank you. I read now.
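The fix implied by the Infernalis release notes IcePic links is normally a recursive ownership change of the data directories, done while the daemons are stopped; a hedged sketch:

    # stop the OSD/MON daemons on the node first, then:
    chown -R ceph:ceph /var/lib/ceph
    # (alternatively the release notes describe a 'setuser match path' option to keep running as root)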
[11:37] * Miho (~bildramer@84ZAACVZE.tor-irc.dnsbl.oftc.net) Quit ()
[11:37] * spidu_ (~Nijikokun@84ZAACVZ3.tor-irc.dnsbl.oftc.net) has joined #ceph
[11:43] * erwan_taf (~erwan@46.231.131.178) Quit (Remote host closed the connection)
[11:44] * erwan_taf (~erwan@46.231.131.178) has joined #ceph
[11:49] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[11:50] * Concubidated (~Adium@c-50-173-245-118.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[11:54] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[11:54] * arbrandes (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) Quit (Quit: WeeChat 1.2)
[11:56] * drankis (~drankis__@89.111.13.198) has joined #ceph
[12:00] * pam (~pam@193.106.183.1) Quit (Quit: pam)
[12:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[12:02] * destrudo (~destrudo@64.142.74.180) Quit (Read error: Connection reset by peer)
[12:02] * erwan_taf (~erwan@46.231.131.178) Quit (Remote host closed the connection)
[12:02] * destrudo (~destrudo@64.142.74.180) has joined #ceph
[12:03] * erwan_taf (~erwan@46.231.131.178) has joined #ceph
[12:06] * naoto (~naotok@27.131.11.254) Quit (Quit: Leaving...)
[12:07] * spidu_ (~Nijikokun@84ZAACVZ3.tor-irc.dnsbl.oftc.net) Quit ()
[12:07] * rapedex (~Peaced@4MJAACW4R.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:14] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[12:16] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[12:19] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[12:21] * sleinen (~Adium@2001:620:0:2d:a65e:60ff:fedb:f305) Quit (Ping timeout: 480 seconds)
[12:25] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[12:34] * zhaochao (~zhaochao@124.202.191.135) Quit (Quit: ChatZilla 0.9.92 [Iceweasel 44.0.2/20160214092551])
[12:37] * rapedex (~Peaced@4MJAACW4R.tor-irc.dnsbl.oftc.net) Quit ()
[12:37] * Aal (~BlS@195.228.45.176) has joined #ceph
[12:38] * arbrandes (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) has joined #ceph
[12:43] * EinstCrazy (~EinstCraz@58.39.29.31) has joined #ceph
[13:00] * simeon__ (~simeon@grannysmith.wifi.meraka.csir.co.za) has joined #ceph
[13:00] * pabluk__ is now known as pabluk_
[13:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[13:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[13:02] * simeon_ (~simeon@2001:4200:7000:3:5d82:6e7f:9126:61df) Quit (Ping timeout: 480 seconds)
[13:07] * Aal (~BlS@76GAACQIH.tor-irc.dnsbl.oftc.net) Quit ()
[13:07] * Chaos_Llama (~spate@lumumba.torservers.net) has joined #ceph
[13:11] * nardial (~ls@dslb-088-066-162-091.088.066.pools.vodafone-ip.de) Quit (Quit: Leaving)
[13:16] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[13:17] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[13:17] * pam (~pam@193.106.183.1) has joined #ceph
[13:21] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[13:23] * adun153 (~ljtirazon@49.144.74.236) has joined #ceph
[13:26] * kefu (~kefu@114.92.107.250) has joined #ceph
[13:29] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[13:31] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[13:31] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[13:33] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[13:33] * nhm (~nhm@8.25.222.2) has joined #ceph
[13:33] * ChanServ sets mode +o nhm
[13:37] * Chaos_Llama (~spate@84ZAACV2H.tor-irc.dnsbl.oftc.net) Quit ()
[13:37] * Esge (~Bj_o_rn@chulak.enn.lu) has joined #ceph
[13:39] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[13:42] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:45] * bara (~bara@213.175.37.12) has joined #ceph
[13:49] * kanagaraj (~kanagaraj@121.244.87.124) has joined #ceph
[13:50] * kanagaraj (~kanagaraj@121.244.87.124) Quit (Read error: Connection reset by peer)
[13:53] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[13:53] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[13:54] * wyang (~wyang@114.111.166.47) has joined #ceph
[13:57] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[13:58] * Racpatel (~Racpatel@2601:87:3:3601::2d54) has joined #ceph
[13:58] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) has joined #ceph
[13:59] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Quit: Leaving.)
[14:02] * Kupo (~tyler.wil@23.111.254.159) has joined #ceph
[14:05] * Kupo1 (~tyler.wil@23.111.254.159) Quit (Ping timeout: 480 seconds)
[14:06] * Racpatel (~Racpatel@2601:87:3:3601::2d54) Quit (Quit: Leaving)
[14:06] * Racpatel (~Racpatel@2601:87:3:3601::2d54) has joined #ceph
[14:07] * Esge (~Bj_o_rn@4MJAACW78.tor-irc.dnsbl.oftc.net) Quit ()
[14:09] * linuxkidd (~linuxkidd@52.sub-70-193-68.myvzw.com) has joined #ceph
[14:12] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[14:14] * madkiss (~Adium@2a00:13c8:2000:16:1496:3ff3:a3c7:77f7) has joined #ceph
[14:14] <madkiss> hi all.
[14:14] <madkiss> do I remember correctly that jewel will be the release that declares CephFS stable?
[14:16] * jclm (~jclm@63.117.50.130) Quit (Quit: Leaving.)
[14:16] * erwan_taf is now known as evelu
[14:16] * dan__ (~Daniel@office.34sp.com) has joined #ceph
[14:17] <neurodrone_> madkiss: I think that is true.
[14:17] <madkiss> I was looking for documents on the intarwebs for this, but I can't seem to find any.
[14:17] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[14:18] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[14:20] * overclk (~vshankar@121.244.87.117) Quit (Quit: Zzzzzzz...)
[14:20] * rotbeard (~redbeard@ppp-49-237-214-82.revip6.asianet.co.th) has joined #ceph
[14:21] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[14:23] * DanFoster (~Daniel@2a00:1ee0:3:1337:b8cd:ba24:7ca8:17b1) Quit (Ping timeout: 480 seconds)
[14:28] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[14:30] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[14:33] * jclm (~jclm@static-100-38-70-130.nycmny.fios.verizon.net) has joined #ceph
[14:35] <boolman> How do you guys calculate iops of your clusters, for example if I have 73 iops per sata disk, 20 osds, running journal on the same disk and replication set at 3. so, 73 * 20 / 2 / 3 = 243 iops excluding raid controller cache and rbd client cache and os caches
[14:38] * bene_in_mtg (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[14:39] <pam> Hi, when we use different networks for public and cluster for ceph. What will happen if there is a problem on the cluster network?
[14:40] * kefu_ (~kefu@114.92.107.250) has joined #ceph
[14:40] <pam> let's say it's unreachable because of a switch failure etc… but the public network works
[14:40] * simeon__ (~simeon@grannysmith.wifi.meraka.csir.co.za) Quit (Ping timeout: 480 seconds)
[14:41] * Popz (~lmg@85.93.218.204) has joined #ceph
[14:43] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[14:47] * kefu (~kefu@114.92.107.250) Quit (Ping timeout: 480 seconds)
[14:51] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) has joined #ceph
[14:54] <Miouge> boolman: That is the formula I use as well, but in reality I get 30~50% less than that
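Spelled out as shell arithmetic, boolman's estimate works out as:

    raw=$((73 * 20))        # 73 IOPS per SATA disk x 20 OSDs = 1460
    echo $((raw / 2 / 3))   # /2 for the co-located journal, /3 for replication size 3 -> 243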
[15:01] * simeon__ (~simeon@2001:4200:7000:3:94f0:9428:1f3a:a323) has joined #ceph
[15:02] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[15:02] <boichev> Be-El Hmm force-create on PG now makes it stuck in "creating" state .... How can I get out of this situation ?
[15:04] <Be-El> boichev: as i said before....your problem might be related to your crush ruleset, especially if ceph pg query/dump does not list enough osds for that pg
[15:05] <boichev> Be-El I hunted down all PGs and they belong to empty pools with indeed size of 1 ... I don't know how they got created because I never used those pools, they are the default ones
[15:06] <boichev> Be-El so I just need to recreate them or delete them and go on with the upgrade of the cluster :)
[15:06] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[15:08] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[15:08] * georgem (~Adium@184-175-1-33.dsl.teksavvy.com) has joined #ceph
[15:10] * rotbeard (~redbeard@ppp-49-237-214-82.revip6.asianet.co.th) Quit (Ping timeout: 480 seconds)
[15:11] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[15:11] * Popz (~lmg@7V7AACV87.tor-irc.dnsbl.oftc.net) Quit ()
[15:11] * Tenk (~Sun7zu@lumumba.torservers.net) has joined #ceph
[15:14] * karnan (~karnan@121.244.87.117) Quit (Quit: Leaving)
[15:16] * rakeshgm (~rakesh@121.244.87.117) Quit (Quit: Leaving)
[15:16] * pvh_sa (~pvh@41.164.8.114) Quit (Ping timeout: 480 seconds)
[15:19] * dsfgdsfg (~n.borisov@admins.1h.com) has joined #ceph
[15:19] * swami1 (~swami@49.44.57.235) Quit (Quit: Leaving.)
[15:19] * rotbeard (~redbeard@ppp-49-237-252-120.revip6.asianet.co.th) has joined #ceph
[15:19] <dsfgdsfg> hello, why is it not possible to set CFQ as the scheduler for an RBD device? my end-goal is to apply the proportional blkio cgroup limits
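Nobody answers in the log; one likely reason (not confirmed here) is that the rbd block driver had moved to blk-mq, which at the time exposed no legacy schedulers such as CFQ, so the proportional blkio.weight controller has nothing to hook into. A quick check, assuming the image is mapped as /dev/rbd0:

    cat /sys/block/rbd0/queue/scheduler   # lists the schedulers the kernel offers for this device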
[15:22] * shyu (~shyu@123.116.175.103) has joined #ceph
[15:26] * fcape (~fcape@107-220-57-73.lightspeed.austtx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[15:28] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:29] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:29] * adun153 (~ljtirazon@49.144.74.236) Quit (Quit: Leaving)
[15:29] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit ()
[15:32] * rotbeard (~redbeard@ppp-49-237-252-120.revip6.asianet.co.th) Quit (Ping timeout: 480 seconds)
[15:33] * jdohms (~jdohms@flyingmonkey.concordia.ab.ca) Quit (Remote host closed the connection)
[15:33] * jdohms (~jdohms@flyingmonkey.concordia.ab.ca) has joined #ceph
[15:39] * shyu (~shyu@123.116.175.103) Quit (Ping timeout: 480 seconds)
[15:41] * Tenk (~Sun7zu@76GAACQND.tor-irc.dnsbl.oftc.net) Quit ()
[15:41] * Kayla (~Kurimus@broadband-77-37-218-145.nationalcablenetworks.ru) has joined #ceph
[15:42] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) has joined #ceph
[15:45] * rotbeard (~redbeard@ppp-115-87-78-32.revip4.asianet.co.th) has joined #ceph
[15:47] * pam (~pam@193.106.183.1) Quit (Quit: pam)
[15:48] * LobsterRoll (~LobsterRo@65.112.8.206) has joined #ceph
[15:48] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) has joined #ceph
[15:49] * yanzheng1 (~zhyan@182.139.20.43) Quit (Quit: This computer has gone to sleep)
[15:49] * shyu (~shyu@123.123.60.179) has joined #ceph
[15:49] * shyu (~shyu@123.123.60.179) Quit ()
[15:50] * shyu (~shyu@123.123.60.179) has joined #ceph
[15:55] * neerbeer (~neerbeer@209.8.190.77) has joined #ceph
[16:05] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[16:06] * ermak (~Adium@178.218.119.168) has joined #ceph
[16:06] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:07] * fcape_ (~fcape@38.67.19.2) has joined #ceph
[16:07] * georgem (~Adium@184-175-1-33.dsl.teksavvy.com) Quit (Quit: Leaving.)
[16:08] * neerbeer (~neerbeer@209.8.190.77) Quit (Ping timeout: 480 seconds)
[16:08] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) Quit (Ping timeout: 480 seconds)
[16:09] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:11] * Kayla (~Kurimus@7V7AACWAT.tor-irc.dnsbl.oftc.net) Quit ()
[16:12] * rotbeard (~redbeard@ppp-115-87-78-32.revip4.asianet.co.th) Quit (Ping timeout: 480 seconds)
[16:13] * rotbeard (~redbeard@ppp-115-87-78-32.revip4.asianet.co.th) has joined #ceph
[16:14] * Miouge (~Miouge@host-95-199-19-144.mobileonline.telia.com) has joined #ceph
[16:19] * simeon__ (~simeon@2001:4200:7000:3:94f0:9428:1f3a:a323) Quit (Ping timeout: 480 seconds)
[16:19] <boichev> 0.80.11 is the latest firefly right ?
[16:20] * neerbeer (~neerbeer@208.79.16.11) has joined #ceph
[16:21] * dsfgdsfg is now known as nikbor
[16:22] <nils_> anyone know what kind of controller one would need for a drive like this: http://ark.intel.com/products/79625/Intel-SSD-DC-P3700-Series-400GB-2_5in-PCIe-3_0-20nm-MLC?
[16:23] <etienneme> boichev http://docs.ceph.com/docs/master/releases/ yes :)
[16:23] <sep> boichev, based on what's available on https://download.ceph.com :: yes
[16:23] <Be-El> nils_: that's a pci-express card, you don't need a controller for it
[16:23] <nils_> Be-El, so I need some sort of cable to connect it to a PCIe slot?
[16:24] <boichev> And when I upgrade Do i have to go Firefly Latest -> Giant Latest -> Hammer Latest -> Infernalis Latest
[16:24] <sep> nils_, you need a available pcie slot to put the card in
[16:25] <boichev> Or is it like ubuntu where I can go LTS to LTS
[16:25] <sep> nils_, normaly not a cable. but sometimes a risercard
[16:25] <nils_> sep, note that it's in 2.5" format, not in PCIe card format.
[16:26] * fcape_ (~fcape@38.67.19.2) Quit (Ping timeout: 480 seconds)
[16:26] <nils_> SFF-8639 compatible connector
[16:26] <sep> nils_, you do have backplanes in hotswap drive chassis where some drives are nvme
[16:27] <sep> similar to http://www.supermicro.nl/products/nfo/Ultra.cfm
[16:30] <sep> nils sounds like you want something like http://www.amazon.com/Funtin-Adapter-SFF-8639-Interface-Express/dp/B011YJ7WLK ; but i would perhaps just get a pci card format, if you do not have the hardware that eats the drive type natively
[16:30] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) has joined #ceph
[16:31] <nils_> sep, they sell a PCIe card version, I'm just looking for a way to install more than one of those in 1U
[16:32] * neerbeer (~neerbeer@208.79.16.11) Quit (Ping timeout: 480 seconds)
[16:32] <sep> something like http://www.supermicro.nl/products/system/1U/1028/SYS-1028U-TN10RT_.cfm room for 10
[16:33] <nils_> yeah
[16:36] * ade (~abradshaw@guest.dyn.rm.dk) Quit (Ping timeout: 480 seconds)
[16:38] * georgem (~Adium@184-175-1-33.dsl.teksavvy.com) has joined #ceph
[16:38] * xarses (~xarses@64.124.158.100) has joined #ceph
[16:45] <nils_> really just wanted something fast for a really, really small server (2 disks)
[16:46] <nils_> a Xeon D machine, and most boards only come with one m.2 slot, those that come with two for some reason have them connected via *ONE* PCIe lane. Sorry for off-topic
[16:47] * georgem (~Adium@184-175-1-33.dsl.teksavvy.com) Quit (Quit: Leaving.)
[16:47] * georgem (~Adium@206.108.127.16) has joined #ceph
[16:48] * rotbeard (~redbeard@ppp-115-87-78-32.revip4.asianet.co.th) Quit (Quit: Leaving)
[16:49] * ermak (~Adium@178.218.119.168) Quit (Quit: Leaving.)
[16:49] * neerbeer (~neerbeer@209.8.190.77) has joined #ceph
[16:52] * allaok (~allaok@machine107.orange-labs.com) Quit (Quit: Leaving.)
[16:53] * swami1 (~swami@27.7.172.54) has joined #ceph
[16:53] * owasserm (~owasserm@nat-pool-ams-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:53] <Geph> Sorry to interrupt nils_ we chatted the other day, what's an accepted SATA drive for ceph?
[16:54] <Geph> will be running some VM loads
[16:54] <Geph> SSD for journals
[16:54] <Gugge-47527> an enterprise drive with tler
[16:54] <Geph> and was thinking Seagate Enterprise NAS for OSDs
[16:55] <etienneme> boichev : check the version you want http://docs.ceph.com/docs/master/release-notes/ you will find upgrade doc
[16:55] <nils_> sounds good, although these days I wonder if it may be better to use SSD for a dedicated cache tier pool
[16:55] <Geph> or are the lower spec'd WD Red NAS 5400rpm ok?
[16:56] <Gugge-47527> RED != Enterprise :)
[16:56] <boichev> etienneme thanks
[16:56] <nils_> I'm actually using non enterprise disks... They do fail some times but are cheaper to replace.
[16:56] * ibravo2 (~ibravo@72.198.142.104) has joined #ceph
[16:57] <Geph> Gugge-47527: thanks thought maybe only the Red Pro was
[16:57] <nils_> how many replica do you want to run?
[16:57] <Geph> Nils_: so WD Black or Seagate Desktop are suitable
[16:58] <Geph> hoping to use erasure coded pools
[16:58] * angdraug (~angdraug@c-69-181-140-42.hsd1.ca.comcast.net) has joined #ceph
[16:59] <Geph> similar to a RAID 5 configuration to save some space
[16:59] <nils_> we are running these: Seagate ST2000NM0033
[17:00] * madkiss (~Adium@2a00:13c8:2000:16:1496:3ff3:a3c7:77f7) Quit (Ping timeout: 480 seconds)
[17:00] <Geph> this will be a backup storage location for raw video footage so it needs to be fairly efficient
[17:02] * swami1 (~swami@27.7.172.54) Quit (Quit: Leaving.)
[17:02] <Geph> Nils_: isn't the ST2000NM0033 Constellation the Enterprise NAS
[17:03] <nils_> yeah possibly, we just used what we had laying around, maybe someone with a larger cluster can chip in
[17:03] <Geph> I think the Ent NAS was the replacement
[17:03] <nils_> I'm also using the 4 TB HGST Deskstars for the backup server
[17:03] <Miouge> If I can jump in the conversation, I was thinking ST2000NX0273 (Seagate 2.5" 2TB Enterprise)?
[17:03] <nils_> (backup server snapshots the VM images, mounts them and does an rsync to zfs)
[17:04] * ibravo (~ibravo@72.198.142.104) Quit (Ping timeout: 480 seconds)
[17:04] <nils_> Miouge, way too expensive for my taste
[17:05] * Miouge (~Miouge@host-95-199-19-144.mobileonline.telia.com) Quit (Quit: Miouge)
[17:06] <Geph> thank Nils_ another question regarding the partitioning of the SSD
[17:06] <Geph> say I get 200GB SSDs
[17:08] <Geph> and they are to be used by 5 OSDs, i hear the recommended partition size is 5-6GB maybe 10GB
[17:08] * kefu_ (~kefu@114.92.107.250) Quit (Max SendQ exceeded)
[17:08] <Geph> does it make sense to just create each partition 40GB?
[17:08] * kefu (~kefu@114.92.107.250) has joined #ceph
[17:08] <Geph> 5 x 40GB to = 200GB
[17:09] <Geph> would that influence the wear of the SSD?
[17:09] <Geph> as opposed to 5 x 6GB partitions
[17:10] <nils_> depends on the controller, I use a 10 GiB journal and ceph then automatically created the partitions for me
[17:10] <Be-El> Geph: the journal partition size influences how "spiky" write throughput will be
[17:11] <nils_> since the rest of the disk should be zeroed out that will be used for wear leveling
[17:11] <Geph> just seems silly to only carve out 30 GB of a SSD
[17:11] <Be-El> Geph: if the journal is full, write to the osd is suspended until the journal has been written to hdd
[17:11] <Be-El> Geph: large journal -> longer interruption
[17:11] * Skyrider (~Heliwr@192.42.116.16) has joined #ceph
[17:12] <Geph> Be-El: thanks so a larger journal partition can't hurt
[17:12] <Be-El> Geph: it depends on yoru workload
[17:13] <Geph> and regardless of the size i give the SSD controller will take care of wear leveling
[17:13] <Be-El> Geph: there are also timeout parameters that define how long a journal entry may stay in the journal etc.
[17:14] <nils_> yeah, although I gave it a hint by halving the visible capacity via smartctl...
[17:14] <Be-El> Geph: write speed of hdd is also important etc.
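For context, a sketch of the filestore-era journal settings being discussed; the numbers are illustrative, not recommendations. The rule of thumb of the time was roughly journal size >= 2 x expected write throughput x filestore max sync interval:

    [osd]
    osd journal size = 10240            ; MB, i.e. a 10 GB journal partition
    filestore max sync interval = 5     ; seconds an entry may sit in the journal before syncing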
[17:14] <Geph> only have 2 1Gb NICs for Public network so I guess max write to SSD could be about 200MBps
[17:15] * drankis (~drankis__@89.111.13.198) Quit (Ping timeout: 480 seconds)
[17:15] <Geph> yea that's why I wondered if the slower WD red NAS drives are a good idea
[17:16] * dgurtner (~dgurtner@178.197.231.69) Quit (Ping timeout: 480 seconds)
[17:16] <nils_> yeah you'll be slowed down by the network anyways
[17:18] <Geph> if I only have 4 x 1Gb NICs is it still recommended to have two dedicated to the private network and two to the Pub?
[17:19] * sleinen1 (~Adium@macsl.switch.ch) has joined #ceph
[17:19] <Geph> My build will be 6 nodes each with 4 to 5 6TB HDDs and one SSD partitioned i suppose 6 ways
[17:20] * etienneme (~root@158.69.81.132) Quit (Quit: WeeChat 1.3)
[17:20] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[17:20] <Geph> 5 partitions for the max 5 journals and one +- 40GB partition for the OS
[17:21] * sudocat (~dibarra@2602:306:8bc7:4c50:4cff:bb45:565b:74a) Quit (Quit: Leaving.)
[17:21] <Geph> and as mentioned each node will have one PCIe NIC with 4 10/100/1000 ports
[17:21] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[17:21] <Geph> does any of that sound bad?
[17:21] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Read error: Connection reset by peer)
[17:22] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[17:22] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit ()
[17:22] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[17:23] * wyang (~wyang@114.111.166.47) Quit (Quit: This computer has gone to sleep)
[17:23] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Read error: Connection reset by peer)
[17:23] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[17:24] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[17:25] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[17:26] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Quit: leaving)
[17:26] * drankis (~drankis__@46.109.81.218) has joined #ceph
[17:26] * etienneme (~root@158.69.81.132) has joined #ceph
[17:28] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[17:31] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[17:34] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[17:37] * RMar04 (~RMar04@5.153.255.226) has joined #ceph
[17:38] * kanagaraj (~kanagaraj@27.7.11.25) has joined #ceph
[17:40] * sleinen (~Adium@macsl.switch.ch) has joined #ceph
[17:40] * sleinen1 (~Adium@macsl.switch.ch) Quit (Read error: Connection reset by peer)
[17:41] * Skyrider (~Heliwr@4MJAACXIG.tor-irc.dnsbl.oftc.net) Quit ()
[17:41] * mason1 (~totalworm@exit.tor.uwaterloo.ca) has joined #ceph
[17:43] <Geph> Anybody think using 8TB HDDs is a bad idea?
[17:44] <Be-El> Geph: without SMR...it might be ok if the performance is sufficient and you can cope with recovering 8 TB in case of a single hard disk failure
[17:44] <Be-El> Geph: with SMR...forget it
[17:45] <Geph> seems most go for 4TB because of the cost per GB but there's a cost for every chassis, PSU, CPU, NIC and switch ports too
[17:46] * EinstCrazy (~EinstCraz@58.39.29.31) Quit (Remote host closed the connection)
[17:46] <Geph> Be-El: I would only consider 7200rpm std drives like the Enterprise NAS
[17:46] <Be-El> that's why I've ordered 6TB drives for our new cluster hosts
[17:48] * etienneme (~root@158.69.81.132) Quit (Quit: WeeChat 1.3)
[17:48] <Geph> what about 7200rpm Surveillance drives, they seem to have around 200MBps throughput and the same workload as a WD Red
[17:48] * etienneme (~root@158.69.81.132) has joined #ceph
[17:48] <Geph> they are cheaper is there any reason you wouldn't use them
[17:49] * shylesh__ (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[17:50] <Geph> spec wise they look simlar to the WD Red NAS and the Seagate NAS (not the Ent NAS)
[17:50] <Be-El> i don't have any experience with that kind of drives
[17:50] <Geph> cool thanks
[17:50] <Be-El> companies like backblaze use standard consumer drives for their clusters
[17:52] <Geph> yes, i've seen that not sure if you can compare their workloads to "mine"
[17:53] <Geph> i did notice they have higher failure rates with the WD than they do with the Seagates
[17:53] * kefu (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:54] <Geph> if i'm choosing between WD and Seagate i guess the Seagates might be less prone to failure
[17:56] * neerbeer (~neerbeer@209.8.190.77) Quit (Ping timeout: 480 seconds)
[17:56] <Be-El> Geph: well, i have a stack of about 20 failed drives.....most of them are seagate 3tb drivers
[17:56] <Be-El> -r
[17:57] <Geph> the seagate barracudas?
[17:57] <Geph> me too
[18:00] <Be-El> on the other hand there's also a bunch of 4TB seagate drives which work pretty well
[18:00] * reed_ (~reed@75-101-54-18.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[18:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[18:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[18:01] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[18:05] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[18:08] * TMM (~hp@185.5.122.2) Quit (Quit: Ex-Chat)
[18:08] * neerbeer (~neerbeer@150.sub-70-208-144.myvzw.com) has joined #ceph
[18:09] <nils_> very often a problem of just getting a bad batch I guess?
[18:11] <Geph> Thanks guys for your help, gotta head now
[18:11] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[18:11] * mason1 (~totalworm@76GAACQSF.tor-irc.dnsbl.oftc.net) Quit ()
[18:11] * Yopi (~lmg@46.166.136.162) has joined #ceph
[18:14] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:16] * pabluk_ is now known as pabluk__
[18:17] * drankis (~drankis__@46.109.81.218) Quit (Quit: Leaving)
[18:18] * shyu (~shyu@123.123.60.179) Quit (Ping timeout: 480 seconds)
[18:18] * shyu (~shyu@123.123.60.179) has joined #ceph
[18:18] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[18:19] * sleinen (~Adium@macsl.switch.ch) Quit (Read error: Connection reset by peer)
[18:19] * sleinen (~Adium@macsl.switch.ch) has joined #ceph
[18:23] * Geph (~Geoffrey@41.77.153.99) Quit (Ping timeout: 480 seconds)
[18:25] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[18:27] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:28] * georgem (~Adium@184.175.1.33) has joined #ceph
[18:28] * georgem (~Adium@184.175.1.33) Quit ()
[18:28] * georgem (~Adium@206.108.127.16) has joined #ceph
[18:30] * aj__ (~aj@2001:6f8:1337:0:3005:d440:b7b4:65a9) Quit (Ping timeout: 480 seconds)
[18:33] * evelu (~erwan@46.231.131.178) Quit (Ping timeout: 480 seconds)
[18:34] * madkiss (~madkiss@2001:6f8:12c3:f00f:b86f:13d:3766:104) has joined #ceph
[18:35] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[18:41] * Yopi (~lmg@76GAACQTN.tor-irc.dnsbl.oftc.net) Quit ()
[18:41] * Aramande_ (~Guest1390@76GAACQUJ.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:43] <LobsterRoll> can I run ceph-deploy without --overwrite-conf and not have it complain about a conf existing on the remote host? I simply want to add a disk, and do not want to modify the conf
[18:45] <vasu> LobsterRoll: why dont you request that feature in tracker.ceph.com?
[18:45] <alfredodeza> LobsterRoll: it will not modify it if the configuration is the same
[18:45] <alfredodeza> you really want to have the same configuration everywhere
[18:46] * Geph (~Geoffrey@169-0-138-216.ip.afrihost.co.za) has joined #ceph
[18:46] <LobsterRoll> alfredodeza: but i dont really as I have multiple networks involved on various hosts so my .conf is different everywhere
[18:46] <LobsterRoll> vasu: thanks i will
[18:47] <vasu> it will be still looked up by alfredo :) so you might want to explain full use case
[18:47] <alfredodeza> LobsterRoll: ceph-deploy is not meant to be a fully fledged deployment system
[18:47] <alfredodeza> it seems you are looking for an advanced usage, which ceph-deploy is not really meant to work with
[18:49] * Geph (~Geoffrey@169-0-138-216.ip.afrihost.co.za) Quit ()
[18:49] <LobsterRoll> thanks alfredodeza. I think you are right in that regard
[18:50] <alfredodeza> sure thing! sorry that use case is out of ceph-deploy's hands
[18:50] * alfredodeza recommends ceph-ansible
[18:50] * herrsergio (~herrsergi@200.77.224.239) has joined #ceph
[18:50] <LobsterRoll> in my mind it would be nice to do an operation like ceph-deploy osd create ceph-osd01:/dev/sdb --ignore-differing-conf
[18:51] <LobsterRoll> so it just adds the disk, and doesnt care that a conf exists remotely and/or differs from the local ./ceph.conf
[18:51] * herrsergio is now known as Guest5646
[18:52] * pam (~pam@host246-56-dynamic.48-82-r.retail.telecomitalia.it) has joined #ceph
[18:53] <LobsterRoll> my workaround thus far is just to pull the remote conf and pair the ceph-deploy call with --overwrite-conf --ceph-conf /path/to/pulled.conf
[18:53] * Guest5646 is now known as herrsergio
[18:54] <LobsterRoll> I will check out ceph-ansible, thank you
[18:54] * shylesh (~shylesh@59.95.70.8) has joined #ceph
[18:54] <alfredodeza> LobsterRoll: that way of deploying makes it super easy for someone to get into trouble because of differing ceph.conf files
[18:55] * rmart04 (~rmart04@support.memset.com) has joined #ceph
[18:56] * reed (~reed@216.38.134.18) has joined #ceph
[18:56] * Rickus (~Rickus@office.protected.ca) has joined #ceph
[18:58] <LobsterRoll> Agreed alfredodeza. This is currently something we are grappling with. We drop our desired ceph.conf on the host with puppet, then pull it with deploy and then push it again with deploy combined with our desired operation
[18:59] <alfredodeza> I knew there was some kind of super advanced mangling there :)
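For anyone following along, LobsterRoll's pull-then-reuse workaround boils down to roughly the following (a sketch only; the hostname, device and pulled-conf path are placeholders, and flag behaviour may differ between ceph-deploy releases):

    ceph-deploy config pull ceph-osd01                      # grab the conf already on the OSD host
    ceph-deploy --overwrite-conf --ceph-conf /path/to/pulled.conf \
        osd create ceph-osd01:/dev/sdb                      # add the disk using that same conf

The point is that --overwrite-conf then pushes back a conf identical to what is already on the host, so nothing actually changes.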
[18:59] * swami1 (~swami@27.7.172.54) has joined #ceph
[19:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[19:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[19:01] * Heebie (~thebert@dub-bdtn-office-r1.net.digiweb.ie) Quit (Remote host closed the connection)
[19:02] * bjornar__ (~bjornar@ti0099a430-0908.bb.online.no) has joined #ceph
[19:02] * bjornar_ (~bjornar@ti0099a430-0908.bb.online.no) has joined #ceph
[19:02] * kanagaraj (~kanagaraj@27.7.11.25) Quit (Quit: Leaving)
[19:08] <madkiss> hi folks
[19:08] <madkiss> is there any progress on http://tracker.ceph.com/projects/ceph/wiki/Rados_qos?parent=Jewel ?
[19:11] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[19:11] * Aramande_ (~Guest1390@76GAACQUJ.tor-irc.dnsbl.oftc.net) Quit ()
[19:11] * vend3r (~Corti^car@tsn109-201-152-238.dyn.nltelcom.net) has joined #ceph
[19:12] * RMar04 (~RMar04@5.153.255.226) Quit (Quit: Leaving.)
[19:12] * rmart04 (~rmart04@support.memset.com) Quit (Quit: rmart04)
[19:14] * vbellur (~vijay@2601:647:4f00:4960:5e51:4fff:fee8:6a5c) has joined #ceph
[19:14] * Concubidated (~Adium@c-50-173-245-118.hsd1.ca.comcast.net) has joined #ceph
[19:15] * mykola (~Mikolaj@91.225.200.81) has joined #ceph
[19:15] * mgolub (~Mikolaj@91.225.200.81) has joined #ceph
[19:17] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[19:17] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit ()
[19:20] * johnavp1989 (~jpetrini@69.160.43.38) has joined #ceph
[19:21] <johnavp1989> Hi all, I'm having issues recreating a journal for existing OSD's
[19:22] <johnavp1989> After creating the partition with the journal uuid from /var/lib/ceph/osd/ceph-3/journal_uuid I am running this command
[19:22] <johnavp1989> /var/lib/ceph/osd/ceph-3
[19:22] <johnavp1989> Sorry this command: sudo ceph-osd --mkjournal -i 3
[19:22] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[19:22] <johnavp1989> It worked for the first journal and the OSD is up. but it fails with the following error error creating fresh journal /var/lib/ceph/osd/ceph-3/journal for object store /var/lib/ceph/osd/ceph-3: (22) Invalid argument
[19:25] * sleinen (~Adium@macsl.switch.ch) Quit (Ping timeout: 480 seconds)
[19:29] * neerbeer (~neerbeer@150.sub-70-208-144.myvzw.com) Quit (Ping timeout: 480 seconds)
[19:30] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[19:31] * swami1 (~swami@27.7.172.54) Quit (Quit: Leaving.)
[19:32] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[19:34] * Heebie (~thebert@dub-bdtn-office-r1.net.digiweb.ie) has joined #ceph
[19:38] * neerbeer (~neerbeer@208.79.16.11) has joined #ceph
[19:38] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) has joined #ceph
[19:39] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[19:40] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[19:40] * nhm (~nhm@8.25.222.2) Quit (Ping timeout: 480 seconds)
[19:40] * reed (~reed@216.38.134.18) Quit (Quit: Ex-Chat)
[19:41] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[19:41] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[19:41] * vend3r (~Corti^car@tsn109-201-152-238.dyn.nltelcom.net) Quit ()
[19:41] * rf`1 (~Tenk@64.ip-37-187-176.eu) has joined #ceph
[19:43] * pvh_sa (~pvh@169-0-182-212.ip.afrihost.co.za) has joined #ceph
[19:44] * BrianA (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[19:44] * LongyanG (~long@15255.s.time4vps.eu) Quit (Quit: leaving)
[19:44] * LongyanG (~long@15255.s.time4vps.eu) has joined #ceph
[19:48] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:48] <johnavp1989> Nevermind I figured it out. Just had to reboot after creating the partitions so that the kernel would pick up the new partition table
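A sketch of the whole journal-recreation sequence johnavp1989 describes, with a partition-table re-read instead of the reboot (device, size and OSD id are placeholders; the typecode shown is the GUID commonly listed as the Ceph journal partition type, worth verifying for your release):

    JUUID=$(cat /var/lib/ceph/osd/ceph-3/journal_uuid)
    sudo sgdisk --new=1:0:+5G --partition-guid=1:$JUUID \
         --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb   # create the new journal partition
    sudo partprobe /dev/sdb           # or: sudo partx -u /dev/sdb, to avoid the reboot
    sudo ceph-osd --mkjournal -i 3    # rebuild the journal for osd.3
    sudo start ceph-osd id=3          # upstart syntax; use service/systemctl on other init systems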
[19:50] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[19:50] * jtriley (~jtriley@140.247.242.54) has joined #ceph
[19:51] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[19:52] * pam (~pam@host246-56-dynamic.48-82-r.retail.telecomitalia.it) Quit (Quit: pam)
[19:52] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[19:53] * Miouge (~Miouge@89.248.140.12) has joined #ceph
[19:56] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:56] * cathode (~cathode@50-198-166-81-static.hfc.comcastbusiness.net) has joined #ceph
[20:00] * diq (~diq@2620:11c:f:2:c23f:d5ff:fe62:112c) has joined #ceph
[20:00] <diq> any idea what causes this message in ceph-deploy after a partprobe?
[20:00] <diq> Error: Error informing the kernel about modifications to partition /dev/sdf1 -- Device or resource busy.
[20:00] <diq> This means Linux won't know about any changes you made to /dev/sdf1 until you reboot -- so you shouldn't mount it or use it in
[20:00] <diq> any way before rebooting.
[20:00] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[20:00] <diq> brand new drives, brand new machine, just kickstarted. Nothing has touched those drives so they absolutely cannot be in use.
[20:02] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[20:03] <LobsterRoll> diq: see https://github.com/ceph/ceph/pull/7001
[20:03] <Miouge> nils_: I am somewhat tempted by the higher density that 2.5" drives offer (2U chassis with 24x2.5"). I have a hard time finding chassis with 2.5" bays (journal+OS) and 3.5" bays (OSD)
[20:03] <diq> I see some mailing list messages that Hammer's ceph-deploy should use partx instead of partprobe on RHEL/CentOS, but it's definitely using partprobe
[20:04] <diq> LobsterRoll, ahh the good ol sleep/wait longer fix. Thanks!
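For anyone wanting to test the partx route by hand before that change lands, it is roughly (device name is a placeholder):

    sudo partx --update /dev/sdf       # ask the kernel to re-read the partition table without partprobe
    # or simply retry after a pause, as the linked pull request does:
    sleep 5 && sudo partprobe /dev/sdf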
[20:05] <nils_> Miouge, I hear ya...
[20:06] <nils_> Miouge, but the 2.5" are usually so god damn expensive...
[20:06] <diq> ok another question and TIA. Any ideas on this? Device dev-disk-by\x2dpartlabel-ceph\x5cx20journal.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:1f.2/ata6/host6/target6:0:0/6:0:0:0/block/sdb/sdb23 and /sys/devices/pci0000:00/0000:00:1f.2/ata6/host6/target6:0:0/6:0:0:0/block/sdb/sdb18
[20:07] <Miouge> nils_: Yep :( the 2TB 2.5" is 2.5x more expensive than its 3.5" version
[20:07] * Kupo (~tyler.wil@23.111.254.159) Quit (Ping timeout: 480 seconds)
[20:08] <cathode> what about using a 1U for each node head and a couple high-density disk shelves attached via 12Gb/s SAS?
[20:08] <nils_> Miouge, I think we have a supermicro case here with 12x3.5 and 4x2.5 (laptop size)
[20:08] * Concubidated (~Adium@c-50-173-245-118.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:09] <diq> cathode, more complexity/moving parts to fail/miswire/go bad. I much prefer the self-contained route.
[20:09] <cathode> ah
[20:09] <cathode> fair enough
[20:10] <nils_> http://www.supermicro.nl/products/chassis/2U/?chs=826 this one offers at least 2 2.5" on the back side.
[20:10] <diq> you'll generally need a minimum of 1U for head node and 3U for a 3.5" disk shelf. That's 4U. You can get 70+ 3.5" drives in 4U servers these days.
[20:11] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[20:11] * rf`1 (~Tenk@84ZAACWF2.tor-irc.dnsbl.oftc.net) Quit ()
[20:11] * PeterRabbit (~nih@192.42.115.101) has joined #ceph
[20:12] <cathode> nils_ - we just bought two supermicro SC216BAC-R920LPB chassis for redundant freebsd + zfs NAS nodes
[20:12] <cathode> they are the 24 x 2.5" version
[20:12] <Miouge> nils_: Maybe the 6028R-E1CR16T? it has 16x3.5" slots in 2U. Quanta also has some interesting models (like SD1Q-1ULH and D51PH-1ULH) but hard to find a reseller
[20:12] <cathode> with the 2x2.5" cage at the back
[20:12] * angdraug (~angdraug@c-69-181-140-42.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[20:13] <diq> Quanta resellers are easy to find
[20:13] * babilen (~babilen@babilen.user.oftc.net) Quit (Quit: leaving)
[20:13] <diq> ixSystems is a good one
[20:13] <diq> we buy direct, but I've used ixSystems in the past
[20:13] <Miouge> cathode: I am exploring that option as well
[20:14] <Miouge> s/hard to find a reseller/hard to find a reseller in my region :D
[20:14] <diq> ah. where are you located?
[20:14] <cathode> supermicro sells nice trays that fit a 2.5" drive into a 3.5" hot-swap bay
[20:14] * mattbenjamin (~mbenjamin@sccc-66-78-236-243.smartcity.com) has joined #ceph
[20:14] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[20:14] <Miouge> In order of preference a reseller in Sweden or EU
[20:15] <cathode> so you if you bought, say, a supermicro 847-series chassis (24x3.5 in front, 12x3.5 in back, 2U motherboard tray area, 4U overall server size)
[20:15] <cathode> you could sacrifice a couple 3.5" bays and put your journal SSDs and OS SSDs in them
[20:17] <diq> Miouge, I can ask if you'd like
[20:18] * babilen (~babilen@babilen.user.oftc.net) has joined #ceph
[20:19] <Miouge> diq: Happy to take a company name if you have that in stock :)
[20:20] <Miouge> cathode: Yep, looks reasonable. I'll have to run some comparisons I guess
[20:20] <diq> I emailed them. I'll let you know what they say.
[20:21] <Miouge> Awesome!
[20:21] * hardwire (~oftc-webi@x3550-1.attalascom.net) has joined #ceph
[20:22] <hardwire> Aloha from Alaska!
[20:22] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[20:22] <cathode> i just got my two hp blades for my future ceph cluster (virtual lab)
[20:23] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[20:23] <hardwire> Curious if ceph daemons can be identified by multiple interfaces and load balanced with a heartbeat.
[20:23] <hardwire> I'
[20:23] <cathode> HP BL620c G7, 2x Xeon E7-2830 (8c/16t), 256GB DDR3 (32 x 8GB), 2x 300GB 10k SAS, 4-port 10GbE NIC onboard
[20:23] <hardwire> I'm trying to move away from some vendor locked LAG scenarios
[20:24] <diq> front end or back end network?
[20:24] <hardwire> diq: me or cathode?
[20:24] <diq> hardwire, you
[20:24] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[20:25] <hardwire> I'm hoping to use two active switches for backend and two active for front end. Exploring what protocol options are available for each component when it comes to .. well.. multipath style deployments.
[20:25] <diq> multiple 10gig I'm assuming?
[20:25] <hardwire> single 10 gig for each OSD host to each backend switch.
[20:25] <hardwire> I don't want the backend stacked
[20:26] <hardwire> I have some ideas on how to handle this without much of a fuss or potential flapping.. just curious what others have done.
[20:27] <Miouge> you mean that you don't want to do switch aware bonding on the backend NICs?
[20:27] <diq> there's always linux bonding ALB
[20:27] <diq> it's not perfect, but if you have enough front-end hosts things will get balanced around (roughly) on inbound
[20:27] <hardwire> I don't have enough diversity in connections to properly balance more than a single LAG channel.
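For reference, the balance-alb bonding diq mentions needs no switch-side LAG at all; a minimal sketch, assuming Debian-style ifupdown with the ifenslave helpers (interface names and address are made up):

    auto bond0
    iface bond0 inet static
        address 10.1.0.11
        netmask 255.255.255.0
        bond-slaves enp3s0f0 enp3s0f1   # the two 10GbE ports
        bond-mode balance-alb           # adaptive load balancing, no switch cooperation needed
        bond-miimon 100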
[20:28] * nth (~ryan@188.25.108.216) has joined #ceph
[20:28] * nthro (~ryan@188.25.108.216) has joined #ceph
[20:28] <diq> running a separate network per public front-end and manually pinning the OSD per network would work but could get really messy config wise
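The "separate network per front-end, pin each OSD" idea would look roughly like this in ceph.conf (a sketch; the subnets and addresses are invented, and as diq says it gets messy as the OSD count grows):

    [global]
        public network  = 10.0.0.0/24, 10.0.1.0/24
        cluster network = 10.1.0.0/24
    [osd.0]
        public addr = 10.0.0.11    # pinned to the first front-end network
    [osd.1]
        public addr = 10.0.1.11    # pinned to the second front-end network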
[20:29] <hardwire> Miouge: I'm using some IBM Blade 10gigE switches for my two cores.. I'd like to keep them separate since VLAGs can become a bottleneck if a switch goes down or needs replaced.
[20:29] * pam (~pam@host246-56-dynamic.48-82-r.retail.telecomitalia.it) has joined #ceph
[20:29] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[20:29] <hardwire> It'd be better to run half capacity on independent ethernet segments.
[20:29] <diq> even then, you're going to have soft interrupt issues
[20:29] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[20:29] <hardwire> diq: gotcha.
[20:30] * nthro (~ryan@188.25.108.216) Quit ()
[20:30] <hardwire> I'm thinking it may be easier to use OSPF a bit and manually balance my managed clients.
[20:30] <hardwire> or simply VRRP
[20:31] <diq> you're going to have flow issues with ECMP (which was my first thought)
[20:31] <hardwire> I'm less hesitant to use failover on the backend than on the front end.
[20:31] <hardwire> hmmmm.
[20:32] <hardwire> I.. may want to dig up my previous work with CLUSTERIP and give that a test.
[20:32] <diq> eeeek
[20:32] <hardwire> hahaha
[20:32] <hardwire> okok
[20:32] * diq messed with it
[20:32] <diq> a long time ago
[20:33] <hardwire> well. if I'm gonna have 3 OSD servers (6 OSD per host + 2 cache disks) and some extra 1u servers for frontend handling then I'm gonna need to figure out a good setup that allows 3 failover zones.
[20:33] <hardwire> failure
[20:33] * Meths_ is now known as Meths
[20:33] <hardwire> 3 failure zones seem a bit more difficult than simply 2... and I feel like that train of thought means I don't get something I need to be getting that makes both equally straightforward.
[20:34] * sudocat (~dibarra@192.185.1.20) Quit (Read error: Connection reset by peer)
[20:34] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[20:35] * ct16k (~ryan@188.25.108.216) Quit (Ping timeout: 480 seconds)
[20:36] * nth (~ryan@188.25.108.216) Quit (Ping timeout: 480 seconds)
[20:39] * Miouge (~Miouge@89.248.140.12) Quit (Quit: Miouge)
[20:39] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[20:39] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[20:40] * stpierre (~oftc-webi@72.163.2.247) has joined #ceph
[20:41] * PeterRabbit (~nih@76GAACQXX.tor-irc.dnsbl.oftc.net) Quit ()
[20:42] * yuastnav (~blank@91.250.241.241) has joined #ceph
[20:42] * Miouge (~Miouge@89.248.140.12) has joined #ceph
[20:42] * herrsergio (~herrsergi@00021432.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:45] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[20:46] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[20:47] <stpierre> when i enable ceph logging to syslog on rhel 7, the update_stats reports end up getting logged to emerg, which sends them to console. is this intended? or even known? or am i doing something egregiously wrong?
[20:47] <stpierre> i assume the (0) in https://github.com/ceph/ceph/blob/master/src/mon/DataHealthService.cc#L162 is a log level of some sort, but i'm not sure what exactly
[20:52] * Miouge (~Miouge@89.248.140.12) Quit (Quit: Miouge)
[20:57] <hardwire> well. if I'm gonna have 3 OSD servers (6 OSD per host + 2 cache disks) and some extra 1u servers for frontend handling then I'm gonna need to figure out a good setup that allows 3 failover zones.
[20:57] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[20:58] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) Quit (Quit: valeech)
[20:59] * nhm (~nhm@156.39.10.45) has joined #ceph
[20:59] * ChanServ sets mode +o nhm
[20:59] * evelu (~erwan@37.161.42.55) has joined #ceph
[21:01] * wCPO (~Kristian@188.228.31.139) has joined #ceph
[21:03] * neerbeer (~neerbeer@208.79.16.11) Quit (Ping timeout: 480 seconds)
[21:03] * pam (~pam@host246-56-dynamic.48-82-r.retail.telecomitalia.it) has left #ceph
[21:04] <portante> stpierre: I believe that has been fixed upstream
[21:04] <portante> I ran into that a while ago
[21:06] * mattbenjamin (~mbenjamin@sccc-66-78-236-243.smartcity.com) Quit (Ping timeout: 480 seconds)
[21:06] <portante> See http://tracker.ceph.com/issues/13993
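For anyone hitting the same thing, the syslog knobs in question are typically switched on with something like this in ceph.conf (a sketch; until the fix in the linked tracker reaches your release, the monitor's update_stats lines can still map to the wrong syslog priority):

    [global]
        log to syslog  = true
        err to syslog  = true
        clog to syslog = true
    [mon]
        mon cluster log to syslog = true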
[21:08] * evelu (~erwan@37.161.42.55) Quit (Ping timeout: 480 seconds)
[21:09] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[21:11] * yuastnav (~blank@7V7AACWKB.tor-irc.dnsbl.oftc.net) Quit ()
[21:11] * OODavo (~Crisco@7V7AACWKY.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:12] * dgurtner (~dgurtner@178.197.232.108) has joined #ceph
[21:15] * dgurtner (~dgurtner@178.197.232.108) Quit ()
[21:16] * dgurtner (~dgurtner@178.197.232.108) has joined #ceph
[21:17] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) Quit (Ping timeout: 480 seconds)
[21:21] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) has joined #ceph
[21:24] * nhm (~nhm@156.39.10.45) Quit (Ping timeout: 480 seconds)
[21:30] <neurodrone_> stpierre: It's also backported to hammer as of yesterday.
[21:32] <stpierre> thanks
[21:32] <lurbs> Are the Ceph mirrors ({au,eu}.ceph.com) actually being updated? 0.94.6 still hasn't arrived there.
[21:33] <alfredodeza> lurbs: that is odd, I would've expected that they would
[21:34] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[21:35] * shylesh (~shylesh@59.95.70.8) Quit (Remote host closed the connection)
[21:37] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[21:39] * angdraug (~angdraug@64.124.158.100) has joined #ceph
[21:40] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[21:40] <lurbs> download.ceph.com isn't allowing me to rsync either, but that could be intended.
[21:41] * OODavo (~Crisco@7V7AACWKY.tor-irc.dnsbl.oftc.net) Quit ()
[21:41] * mykola (~Mikolaj@91.225.200.81) Quit (Quit: away)
[21:41] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[21:42] * mgolub (~Mikolaj@91.225.200.81) Quit (Quit: away)
[21:42] * Uniju1 (~skrblr@destiny.enn.lu) has joined #ceph
[21:46] * haplo37 (~haplo37@199.91.185.156) Quit (Ping timeout: 480 seconds)
[21:46] * dgurtner (~dgurtner@178.197.232.108) Quit (Read error: Connection reset by peer)
[21:47] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[21:50] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[21:50] * mattbenjamin (~mbenjamin@8.25.222.2) has joined #ceph
[21:50] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[22:05] * sleinen (~Adium@2001:620:1000:3:a65e:60ff:fedb:f305) has joined #ceph
[22:06] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[22:06] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[22:07] * pvh_sa (~pvh@169-0-182-212.ip.afrihost.co.za) Quit (Ping timeout: 480 seconds)
[22:09] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) Quit (Quit: leaving)
[22:09] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) has joined #ceph
[22:11] * stpierre (~oftc-webi@72.163.2.247) Quit (Quit: Page closed)
[22:11] * Uniju1 (~skrblr@84ZAACWKX.tor-irc.dnsbl.oftc.net) Quit ()
[22:12] * mrapple (~Pirate@76GAACQ1U.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:13] * herrsergio (~herrsergi@200.77.224.239) has joined #ceph
[22:14] * herrsergio is now known as Guest5666
[22:16] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[22:18] * aj__ (~aj@x590d8bc7.dyn.telefonica.de) has joined #ceph
[22:19] * jclm (~jclm@static-100-38-70-130.nycmny.fios.verizon.net) Quit (Quit: Leaving.)
[22:21] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Quit: Leaving.)
[22:21] * squizzi (~squizzi@107.13.31.195) Quit (Quit: to the moon!)
[22:22] * evelu (~erwan@37.161.42.55) has joined #ceph
[22:32] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[22:32] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) Quit (Ping timeout: 480 seconds)
[22:33] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit ()
[22:35] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[22:36] * rendar (~I@95.238.180.245) Quit (Ping timeout: 480 seconds)
[22:37] * johnavp1989 (~jpetrini@69.160.43.38) Quit (Ping timeout: 480 seconds)
[22:39] * rendar (~I@95.238.180.245) has joined #ceph
[22:40] * jclm (~jclm@63.117.50.130) has joined #ceph
[22:41] * mrapple (~Pirate@76GAACQ1U.tor-irc.dnsbl.oftc.net) Quit ()
[22:41] * Teddybareman (~KapiteinK@us2x.mullvad.net) has joined #ceph
[22:42] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) has joined #ceph
[22:43] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[22:47] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:48] <wCPO> Reading the docs, it looks like ceph requires "tons" of memory. Is 512MB unrealistic for 20-40GB of data?
[22:51] <m0zes> it is probably very tight. especially when in backfill/recovery mode.
[22:52] * LobsterRoll (~LobsterRo@65.112.8.206) Quit (Quit: LobsterRoll)
[22:54] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[22:55] * mhack|afk (~mhack@66-168-117-78.dhcp.oxfr.ma.charter.com) Quit (Ping timeout: 480 seconds)
[22:56] * georgem (~Adium@184-175-1-33.dsl.teksavvy.com) has joined #ceph
[22:57] * georgem (~Adium@184-175-1-33.dsl.teksavvy.com) Quit ()
[22:57] * georgem (~Adium@206.108.127.16) has joined #ceph
[22:58] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[23:00] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[23:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[23:02] * fcape (~fcape@107-220-57-73.lightspeed.austtx.sbcglobal.net) has joined #ceph
[23:03] <wCPO> m0zes, I only plan to use it for object storage, so no block storage. Does that make any difference?
[23:08] * Discovery (~Discovery@178.239.49.68) has joined #ceph
[23:09] * bene_in_mtg (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[23:11] * Teddybareman (~KapiteinK@4MJAACXWZ.tor-irc.dnsbl.oftc.net) Quit ()
[23:11] * RaidSoft (~Zyn@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[23:12] <m0zes> even my smallest osd daemons are using more than 512MB of memory while the cluster is happy.
[23:14] <wCPO> m0zes, how much storage?
[23:14] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[23:14] <m0zes> the overall amount of data isn't a big factor in memory utilization. I would suggest the greater of 2GB/osd or 1GB/TB of storage on the host.
[23:15] <wCPO> m0zes, i'm not at that range. I'm max 50GB
[23:15] * johnavp1989 (~jpetrini@wsip-174-79-34-199.ph.ph.cox.net) has joined #ceph
[23:15] <m0zes> right, but the bottom end I would recommend is 2GB/osd then.
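Putting numbers on that rule of thumb: a host with 6 OSDs on 4 TB disks would want roughly max(6 × 2 GB, 24 TB × 1 GB/TB) = 24 GB of RAM, and even a single tiny OSD like wCPO's ~50 GB one would still want about 2 GB, so a 512 MB box sits several times below the suggested floor.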
[23:15] * evelu (~erwan@37.161.42.55) Quit (Ping timeout: 480 seconds)
[23:15] <m0zes> 2.1PB is the raw capacity of my cluster. this small osd was a 200GB partition using 800MB of memory.
[23:16] <cathode> wow
[23:16] <cathode> m0zes - that's for a business right?
[23:16] <cathode> 2.1PB for a home or lab environment seems ... expensive
[23:16] <m0zes> hpc cluster.
[23:17] <m0zes> academic and research at higher ed.
[23:18] <cathode> ah cool
[23:18] <m0zes> https://support.beocat.ksu.edu/BeocatDocs/index.php/Main_Page
[23:19] <wCPO> m0zes, do numbers of files affect memory usage?
[23:19] * Geph (~Geoffrey@169-0-138-216.ip.afrihost.co.za) has joined #ceph
[23:19] <m0zes> number of pgs per osd can. not so much the number of files.
[23:21] <m0zes> If you want to *try* on a system that memory constrained, no one is going to stop you. but you may run into cascading "out of memory" situations.
[23:23] <wCPO> I don't want to try, if it is nearly 100% sure it won't work. :) Thanks for the help!
[23:25] * nhm (~nhm@216.3.171.24) has joined #ceph
[23:25] * ChanServ sets mode +o nhm
[23:28] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[23:34] * Long_yanG (~long@15255.s.time4vps.eu) has joined #ceph
[23:35] * jtriley (~jtriley@140.247.242.54) Quit (Ping timeout: 480 seconds)
[23:38] * davidz (~davidz@2605:e000:1313:8003:94f6:8dfe:5ec3:63c1) has joined #ceph
[23:41] * RaidSoft (~Zyn@76GAACQ3T.tor-irc.dnsbl.oftc.net) Quit ()
[23:42] * LongyanG (~long@15255.s.time4vps.eu) Quit (Ping timeout: 480 seconds)
[23:48] <georgem> m0zes: my use case is very similar to yours (academic research) and I'm using radosgw with civetweb behind apache with ssl termination
[23:48] <georgem> m0zes: I'm curious how much download throughput you get from your cluster
[23:49] <m0zes> I've just started playing with radosgw (in the last week)
[23:49] <m0zes> and certainly not done enough to know what kind of performance I should be getting out of radosgw... there are a lot of moving pieces just in the radosgw->civetweb->apache space.
[23:50] <m0zes> and at the moment I'm playing with https://bitbucket.org/nikratio/s3ql/ on top of radosgw/civetweb. for mirroring distros
[23:53] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[23:53] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:55] <georgem> m0zes: ok, then I guess I'm ahead on that front; I've reached 12.5 Gbps while doing 40 consecutive downloads but haproxy is 100% on all 22 cores I gave it, because of the SSL termination; the four rgw/civetweb are also running hot
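For anyone building the same stack, the Ceph side of a civetweb-behind-a-proxy gateway is usually just a small ceph.conf section along these lines (a sketch; the section name depends on how the gateway was deployed, and the host, port and domain are placeholders, with SSL termination living entirely in the proxy):

    [client.rgw.gw1]
        host          = gw1
        rgw frontends = civetweb port=7480
        rgw dns name  = s3.example.com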
[23:59] * aarontc (~aarontc@2001:470:e893::1:1) Quit (Quit: Bye!)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.