#ceph IRC Log

Index

IRC Log for 2015-07-10

Timestamps are in GMT/BST.

[0:03] <mtanski> if cost is your primary motivator then you should do erasure coding
[0:03] * Destreyf_ (~quassel@host-74-211-21-38.beyondbb.com) Quit (Remote host closed the connection)
[0:03] <mtanski> where your overhead is like 1.4x to 1.6x but you have a lot more resilience to a single disk loss
[0:03] <snakamoto> disk dying per day - now I'm panicking again =D
[0:03] * visbits (~textual@8.29.138.28) Quit (Quit: Textual IRC Client: www.textualapp.com)
[0:04] <mtanski> you have to figure out which 2 out of the 3 you want to focus on: fast, cheap, resilient
[0:05] * TheSov2 (~TheSov@cip-248.trustwave.com) Quit (Read error: Connection reset by peer)
[0:05] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[0:08] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[0:09] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[0:09] <SpaceDump> Bah, why no calamari package for ubuntu 14.04 ? :>
[0:09] <SpaceDump> Or is it just bad google skills?
[0:10] <snakamoto> mtanski: everyone always wants #2…
[0:12] <doppelgrau> snakamoto: use EC => cheap (and slow) system :)
[0:13] <snakamoto> The work they are doing around the caching layer definitely makes it more attractive, but I'm going to keep fighting for 3 replicas
[0:14] * bitserker (~toni@188.87.126.67) Quit (Quit: Leaving.)
[0:14] <mtanski> depends on your case
[0:14] * branto (~branto@ip-213-220-232-132.net.upcbroadband.cz) has left #ceph
[0:14] <mtanski> write little, read mostly case works well
[0:16] <mtanski> some other systems get write benefits from using EC (like qfs) but that's because they use it to do large linear write out of hadoop and they want to avoid the network and disk write amplification
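A minimal sketch (not from the log) of the space math behind those numbers, assuming hypothetical pool parameters: k=4 data chunks and m=2 coding chunks give a 1.5x raw-space overhead, inside the 1.4x-1.6x range mentioned above, versus 3x for the usual size=3 replicated pool.

    # hedged sketch: raw-space overhead of replication vs. erasure coding
    def replicated_overhead(size):
        # a replicated pool stores `size` full copies of every object
        return float(size)

    def ec_overhead(k, m):
        # an EC pool splits each object into k data chunks plus m coding
        # chunks, so raw usage per byte stored is (k + m) / k
        return (k + m) / float(k)

    print(replicated_overhead(3))  # 3.0 -> 3x for a size=3 pool
    print(ec_overhead(4, 2))       # 1.5 -> the kind of 1.4x-1.6x figure above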
[0:17] * Snowcat4 (~cmrn@9S0AAB2DZ.tor-irc.dnsbl.oftc.net) Quit ()
[0:17] * ade (~abradshaw@p4FF82196.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[0:18] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) Quit (Quit: Leaving)
[0:19] * bgleb (~bgleb@84.201.164.233-vpn.dhcp.yndx.net) has joined #ceph
[0:21] * CoZmicShReddeR (~Spikey@exit1.torproxy.org) has joined #ceph
[0:24] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[0:25] * bgleb (~bgleb@84.201.164.233-vpn.dhcp.yndx.net) Quit (Remote host closed the connection)
[0:26] * rendar (~I@host95-176-dynamic.47-79-r.retail.telecomitalia.it) Quit ()
[0:27] * Nacer (~Nacer@alf94-4-82-224-79-209.fbx.proxad.net) Quit (Remote host closed the connection)
[0:32] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:42] * LeaChim (~LeaChim@host81-157-90-38.range81-157.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[0:43] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[0:44] * jwilkins (~jwilkins@2601:644:4100:bfef:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[0:45] * bgleb (~bgleb@84.201.151.57-vpn.dhcp.yndx.net) has joined #ceph
[0:46] * rlrevell (~leer@184.52.129.221) has joined #ceph
[0:51] * CoZmicShReddeR (~Spikey@7R2AACHDF.tor-irc.dnsbl.oftc.net) Quit ()
[0:51] * `Jin (~mog_@176.10.99.206) has joined #ceph
[0:51] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[0:52] * al (quassel@niel.cx) Quit (Remote host closed the connection)
[0:53] * jwilkins (~jwilkins@2600:1010:b002:c7d0:ea2a:eaff:fe08:3f1d) has joined #ceph
[0:56] * al (quassel@niel.cx) has joined #ceph
[0:56] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[0:58] * sleinen1 (~Adium@2001:620:0:69::100) Quit (Ping timeout: 480 seconds)
[1:02] * bgleb (~bgleb@84.201.151.57-vpn.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[1:05] * jwilkins (~jwilkins@2600:1010:b002:c7d0:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[1:05] * davidbitton (~davidbitt@ool-44c4506f.dyn.optonline.net) has joined #ceph
[1:07] * lcurtis (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[1:10] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[1:11] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[1:11] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[1:11] * rlrevell (~leer@184.52.129.221) has joined #ceph
[1:12] * kevinperks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[1:15] * jwilkins (~jwilkins@2601:644:4100:bfef:ea2a:eaff:fe08:3f1d) has joined #ceph
[1:19] * jbautista- (~wushudoin@transit-86-181-132-209.redhat.com) has joined #ceph
[1:21] * `Jin (~mog_@9S0AAB2F7.tor-irc.dnsbl.oftc.net) Quit ()
[1:23] * davidbitton (~davidbitt@ool-44c4506f.dyn.optonline.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[1:27] * wushudoin_ (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[1:27] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[1:27] * moore (~moore@64.202.160.88) has joined #ceph
[1:28] * jbautista- (~wushudoin@transit-86-181-132-209.redhat.com) Quit (Ping timeout: 480 seconds)
[1:28] * jbautista- (~wushudoin@38.140.108.2) has joined #ceph
[1:29] * dlan_ (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[1:30] <rkeene> ~
[1:30] <rkeene> i
[1:30] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[1:30] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:30] * Grum (~Shadow386@tor-exit1.arbitrary.ch) has joined #ceph
[1:30] * oms101 (~oms101@p20030057EA612000EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:32] * dyasny (~dyasny@104.158.33.70) Quit (Ping timeout: 480 seconds)
[1:33] * ingslovak (~peto@cloud.vps.websupport.sk) has joined #ceph
[1:34] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[1:35] * moore (~moore@64.202.160.88) Quit (Ping timeout: 480 seconds)
[1:38] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[1:39] * oms101 (~oms101@p20030057EA456C00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:44] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[1:44] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[1:50] * rlrevell (~leer@184.52.129.221) has joined #ceph
[1:54] * jwilkins (~jwilkins@2601:644:4100:bfef:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[2:00] * Grum (~Shadow386@7R2AACHFO.tor-irc.dnsbl.oftc.net) Quit ()
[2:02] * dopesong (~dopesong@lb1.mailer.data.lt) Quit (Remote host closed the connection)
[2:05] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[2:06] * xarses (~xarses@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:12] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[2:20] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:27] * Debesis (Debesis@233.128.140.82.mobile.mezon.lt) Quit (Ping timeout: 480 seconds)
[2:27] * arsenaali (~aleksag@9S0AAB2JG.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:28] * dustinm` (~dustinm`@105.ip-167-114-152.net) Quit (Ping timeout: 480 seconds)
[2:30] * Debesis (0x@233.128.140.82.mobile.mezon.lt) has joined #ceph
[2:32] * fam_away is now known as fam
[2:34] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:41] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) has joined #ceph
[2:41] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[2:41] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[2:43] * primechu_ (~primechuc@173-17-128-216.client.mchsi.com) has joined #ceph
[2:43] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) Quit (Read error: Connection reset by peer)
[2:44] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) has joined #ceph
[2:44] * primechu_ (~primechuc@173-17-128-216.client.mchsi.com) Quit (Read error: Connection reset by peer)
[2:45] * kfox1111 (bob@leary.csoft.net) has joined #ceph
[2:45] <kfox1111> upgrading from openstack juno->kilo. with a firefly ceph. it's breaking when it tries to send a 'df' command to the mon.
[2:45] <kfox1111> the string in the mon debug logs has what looks to be spaces in it.
[2:45] <kfox1111> a potential unicode issue?
[2:46] <kfox1111> whats the minimum version of ceph supported by the openstack kilo ceph driver?
[2:47] * tdb (~tdb@willow.kent.ac.uk) has joined #ceph
[2:48] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[2:49] * flisky (~Thunderbi@118.186.156.6) has joined #ceph
[2:52] <joshd1> kfox1111: responded in the cinder channel - versions before giant don't have the 'max_avail' field used there
[2:52] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) Quit (Remote host closed the connection)
[2:52] <joshd1> but it's a bug - it should work with firefly at least too
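A minimal sketch (not from the log) of the kind of check joshd1 describes, assuming the driver shells out to `ceph df --format json`; the per-pool 'max_avail' stat only exists from giant on, so code that wants to keep working against firefly has to tolerate its absence:

    import json
    import subprocess

    def pool_max_avail(pool_name):
        # 'ceph df --format json' works on firefly too, but the per-pool
        # 'max_avail' field was only added in giant; return None if missing
        out = subprocess.check_output(["ceph", "df", "--format", "json"])
        stats = json.loads(out.decode("utf-8"))
        for pool in stats.get("pools", []):
            if pool.get("name") == pool_name:
                return pool.get("stats", {}).get("max_avail")
        return None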
[2:54] * ibravo (~ibravo@72.198.142.104) Quit (Quit: Leaving)
[2:57] * arsenaali (~aleksag@9S0AAB2JG.tor-irc.dnsbl.oftc.net) Quit ()
[2:57] * dusti (~cryptk@relay-d.tor-exit.network) has joined #ceph
[2:59] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[3:03] * flisky (~Thunderbi@118.186.156.6) Quit (Ping timeout: 480 seconds)
[3:07] * tdb (~tdb@willow.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[3:08] * tdb (~tdb@willow.kent.ac.uk) has joined #ceph
[3:08] * sjusthm (~sam@96-39-232-68.dhcp.mtpk.ca.charter.com) Quit (Quit: Leaving.)
[3:10] * Debesis (0x@233.128.140.82.mobile.mezon.lt) Quit (Ping timeout: 480 seconds)
[3:10] * derjohn_mobi (~aj@x4db0e1a3.dyn.telefonica.de) has joined #ceph
[3:12] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Ping timeout: 480 seconds)
[3:14] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[3:14] * georgem (~Adium@69-196-163-65.dsl.teksavvy.com) has joined #ceph
[3:15] * georgem (~Adium@69-196-163-65.dsl.teksavvy.com) Quit ()
[3:15] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[3:17] * derjohn_mob (~aj@x4db0d73e.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:18] * chentian (~oftc-webi@116.228.88.99) has joined #ceph
[3:19] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) has joined #ceph
[3:20] * MentalRay (~MRay@107.171.161.165) has joined #ceph
[3:21] * Vacuum__ (~Vacuum@i59F79CEA.versanet.de) Quit (Ping timeout: 480 seconds)
[3:24] * Vacuum_ (~Vacuum@i59F79382.versanet.de) has joined #ceph
[3:26] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[3:26] * chentian (~oftc-webi@116.228.88.99) Quit (Ping timeout: 480 seconds)
[3:27] * dusti (~cryptk@9S0AAB2KL.tor-irc.dnsbl.oftc.net) Quit ()
[3:28] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[3:28] * vbellur (~vijay@122.171.181.56) has joined #ceph
[3:30] * ingslovak (~peto@cloud.vps.websupport.sk) Quit (Quit: Leaving.)
[3:32] * eXeler0n (~xul@9S0AAB2LI.tor-irc.dnsbl.oftc.net) has joined #ceph
[3:39] * calvinx (~calvin@101.100.172.246) has joined #ceph
[3:40] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[3:41] * calvinx (~calvin@101.100.172.246) Quit ()
[3:45] * yanzheng (~zhyan@182.139.21.13) has joined #ceph
[3:47] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[3:52] * davidz1 (~davidz@2605:e000:1313:8003:480d:9e3a:99f9:1bb1) Quit (Quit: Leaving.)
[3:52] * davidz (~davidz@2605:e000:1313:8003:4dae:e6ec:b271:eb61) has joined #ceph
[3:55] * snakamoto (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[3:56] * haomaiwa_ (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[3:56] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[4:02] * eXeler0n (~xul@9S0AAB2LI.tor-irc.dnsbl.oftc.net) Quit ()
[4:04] * zhaochao (~zhaochao@111.161.77.233) has joined #ceph
[4:04] * tacticus (~tacticus@v6.kca.id.au) Quit (Quit: WeeChat 1.1.1)
[4:05] * tacticus (~tacticus@v6.kca.id.au) has joined #ceph
[4:07] * fmanana (~fdmanana@bl5-246-109.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[4:11] * verbalins (~Corti^car@spftor1e1.privacyfoundation.ch) has joined #ceph
[4:16] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[4:19] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[4:41] * verbalins (~Corti^car@9S0AAB2M8.tor-irc.dnsbl.oftc.net) Quit ()
[4:41] * Crisco (~Deiz@spftor1e1.privacyfoundation.ch) has joined #ceph
[4:46] * flisky1 (~Thunderbi@106.39.60.34) has joined #ceph
[4:51] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:51] * haomaiwa_ (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[4:52] * flisky (~Thunderbi@106.39.60.34) Quit (Read error: Connection reset by peer)
[5:01] * ketor (~ketor@li218-88.members.linode.com) has joined #ceph
[5:03] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[5:08] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[5:11] * Crisco (~Deiz@7R2AACHL5.tor-irc.dnsbl.oftc.net) Quit ()
[5:14] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[5:15] * kefu (~kefu@114.92.106.47) has joined #ceph
[5:15] * ylmson (~nicatronT@tor2e1.privacyfoundation.ch) has joined #ceph
[5:18] * vbellur (~vijay@122.171.181.56) Quit (Ping timeout: 480 seconds)
[5:19] * davidz1 (~davidz@2605:e000:1313:8003:791e:1ea2:ce22:df81) has joined #ceph
[5:20] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[5:20] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[5:22] * davidz (~davidz@2605:e000:1313:8003:4dae:e6ec:b271:eb61) Quit (Ping timeout: 480 seconds)
[5:28] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) Quit (Quit: ZNC - http://znc.in)
[5:30] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) has joined #ceph
[5:40] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) Quit (Remote host closed the connection)
[5:43] * dustinm` (~dustinm`@105.ip-167-114-152.net) has joined #ceph
[5:43] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) has joined #ceph
[5:43] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[5:45] * ylmson (~nicatronT@5NZAAEWUV.tor-irc.dnsbl.oftc.net) Quit ()
[5:45] * Averad (~tallest_r@marcuse-2.nos-oignons.net) has joined #ceph
[5:58] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[5:58] * Vacuum__ (~Vacuum@i59F79A78.versanet.de) has joined #ceph
[6:01] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[6:05] * Vacuum_ (~Vacuum@i59F79382.versanet.de) Quit (Ping timeout: 480 seconds)
[6:12] * kefu (~kefu@114.92.106.47) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:15] * Averad (~tallest_r@5NZAAEWV1.tor-irc.dnsbl.oftc.net) Quit ()
[6:16] * kevinperks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[6:16] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:16] * kevinperks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit ()
[6:17] * kevinperks (~Adium@2606:a000:80ad:1300:5018:697:8d15:cd9b) has joined #ceph
[6:17] * fsimonce (~simon@host30-242-dynamic.23-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[6:20] * biGGer (~BillyBobJ@luxemburg.gtor.org) has joined #ceph
[6:20] * ketor (~ketor@li218-88.members.linode.com) Quit (Remote host closed the connection)
[6:24] * haomaiwa_ (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[6:33] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[6:34] * MentalRay (~MRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[6:34] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[6:35] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[6:39] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[6:40] * kefu (~kefu@114.92.106.47) has joined #ceph
[6:41] * tahder (~tahder@130.123.179.59) has joined #ceph
[6:41] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:42] <tahder> hi, anybody around?
[6:44] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[6:46] * kevinperks (~Adium@2606:a000:80ad:1300:5018:697:8d15:cd9b) Quit (Quit: Leaving.)
[6:48] * derjohn_mobi (~aj@x4db0e1a3.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[6:49] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[6:49] <m0zes> channel isn't particularly active at this time, but I might be able to help if it is fairly quick, tahder.
[6:50] * biGGer (~BillyBobJ@7R2AACHN7.tor-irc.dnsbl.oftc.net) Quit ()
[6:50] * spate (~GuntherDW@relay-a.tor-exit.network) has joined #ceph
[6:51] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[6:51] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[6:51] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[6:55] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Read error: Connection reset by peer)
[6:59] * kefu (~kefu@114.92.106.47) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:05] * yguang11_ (~yguang11@nat-dip15.fw.corp.yahoo.com) has joined #ceph
[7:08] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[7:08] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[7:09] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[7:09] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[7:10] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:11] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:12] * treenerd (~treenerd@178.165.133.216.wireless.dyn.drei.com) has joined #ceph
[7:12] * treenerd (~treenerd@178.165.133.216.wireless.dyn.drei.com) Quit ()
[7:17] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[7:19] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[7:20] * spate (~GuntherDW@9S0AAB2R3.tor-irc.dnsbl.oftc.net) Quit ()
[7:24] * Kalado (~Kalado@relay-h.tor-exit.network) has joined #ceph
[7:25] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:28] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) Quit (Quit: Bye)
[7:29] * branto (~branto@ip-213-220-232-132.net.upcbroadband.cz) has joined #ceph
[7:44] * vbellur (~vijay@121.244.87.124) has joined #ceph
[7:46] * rakesh (~rakesh@121.244.87.117) has joined #ceph
[7:52] * overclk (~overclk@121.244.87.117) has joined #ceph
[7:52] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[7:54] * Kalado (~Kalado@7R2AACHPC.tor-irc.dnsbl.oftc.net) Quit ()
[7:54] * KungFuHamster (~Kakeru@tor-exit1.arbitrary.ch) has joined #ceph
[7:59] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[8:06] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[8:06] * rlrevell (~leer@184.52.129.221) has joined #ceph
[8:07] * derjohn_mob (~aj@x4db0e1a3.dyn.telefonica.de) has joined #ceph
[8:07] * haomaiwa_ (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[8:07] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[8:07] * yguang11_ (~yguang11@nat-dip15.fw.corp.yahoo.com) Quit ()
[8:09] * jks (~jks@178.155.151.121) Quit (Ping timeout: 480 seconds)
[8:12] * rakesh (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:14] * kefu (~kefu@183.192.12.152) has joined #ceph
[8:16] * rakesh (~rakesh@121.244.87.124) has joined #ceph
[8:16] * spinoshi (~spinoshi@131.114.226.78) has joined #ceph
[8:17] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[8:17] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[8:19] * derjohn_mobi (~aj@x590e4201.dyn.telefonica.de) has joined #ceph
[8:22] * dopesong (~dopesong@lb1.mailer.data.lt) has joined #ceph
[8:22] * dopesong (~dopesong@lb1.mailer.data.lt) Quit (Remote host closed the connection)
[8:24] * derjohn_mob (~aj@x4db0e1a3.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[8:24] * KungFuHamster (~Kakeru@9S0AAB2TG.tor-irc.dnsbl.oftc.net) Quit ()
[8:24] * geegeegee1 (~ricin@brownhatsecurity.com) has joined #ceph
[8:25] * kefu (~kefu@183.192.12.152) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[8:25] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[8:31] * kefu (~kefu@183.192.12.152) has joined #ceph
[8:37] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:44] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:46] * Sysadmin88 (~IceChat77@2.124.164.69) Quit (Quit: Do fish get thirsty?)
[8:48] * kefu_ (~kefu@183.192.12.152) has joined #ceph
[8:51] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:53] * kefu (~kefu@183.192.12.152) Quit (Ping timeout: 480 seconds)
[8:53] * kefu (~kefu@114.92.106.47) has joined #ceph
[8:54] * geegeegee1 (~ricin@7R2AACHQX.tor-irc.dnsbl.oftc.net) Quit ()
[8:57] * ade (~abradshaw@tmo-111-113.customers.d1-online.com) has joined #ceph
[8:58] * kefu_ (~kefu@183.192.12.152) Quit (Ping timeout: 480 seconds)
[8:59] * delcake (~Throlkim@hessel2.torservers.net) has joined #ceph
[8:59] * spinoshi (~spinoshi@131.114.226.78) Quit (Remote host closed the connection)
[9:09] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:10] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[9:15] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:16] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[9:17] * yanzheng1 (~zhyan@182.139.207.212) has joined #ceph
[9:18] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[9:21] * yanzheng (~zhyan@182.139.21.13) Quit (Ping timeout: 480 seconds)
[9:23] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:26] * fmanana (~fdmanana@bl13-158-174.dsl.telepac.pt) has joined #ceph
[9:27] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[9:27] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[9:29] * delcake (~Throlkim@5NZAAEW12.tor-irc.dnsbl.oftc.net) Quit ()
[9:29] * Sophie1 (~rikai@relay-a.tor-exit.network) has joined #ceph
[9:30] * oblu (~o@62.109.134.112) has joined #ceph
[9:30] * haomaiwa_ (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[9:32] * SamYaple (~SamYaple@162.209.126.134) Quit (Ping timeout: 480 seconds)
[9:33] * madkiss (~madkiss@2001:6f8:12c3:f00f:408c:71c8:49b2:db5d) Quit (Quit: Leaving.)
[9:37] * rdas (~rdas@121.244.87.116) has joined #ceph
[9:37] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[9:43] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[9:43] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[9:44] * spinoshi (~spinoshi@131.114.236.107) has joined #ceph
[9:45] * jordanP (~jordan@213.215.2.194) has joined #ceph
[9:46] * tdb (~tdb@willow.kent.ac.uk) Quit (Quit: leaving)
[9:46] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[9:46] * tdb (~tdb@myrtle.kent.ac.uk) Quit ()
[9:46] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[9:58] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) has joined #ceph
[9:58] * bitserker (~toni@88.87.194.130) has joined #ceph
[9:58] * flisky1 (~Thunderbi@106.39.60.34) Quit (Read error: Connection reset by peer)
[9:58] * bitserker (~toni@88.87.194.130) Quit ()
[9:58] * bitserker (~toni@88.87.194.130) has joined #ceph
[9:59] * spinoshi (~spinoshi@131.114.236.107) Quit (Ping timeout: 480 seconds)
[9:59] * Sophie1 (~rikai@7R2AACHS3.tor-irc.dnsbl.oftc.net) Quit ()
[10:00] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[10:01] * spinoshi (~spinoshi@131.114.236.107) has joined #ceph
[10:01] * flisky (~Thunderbi@106.39.60.34) Quit ()
[10:06] * Nacer (~Nacer@80.12.59.248) has joined #ceph
[10:07] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:07] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:07] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:10] * bgleb (~bgleb@94.19.146.224) has joined #ceph
[10:11] * leseb- (~leseb@81-64-215-19.rev.numericable.fr) has joined #ceph
[10:12] * bgleb_ (~bgleb@84.201.166.98-vpn.dhcp.yndx.net) has joined #ceph
[10:13] * An_T_oine (~Antoine@192.93.37.4) has joined #ceph
[10:14] * spinoshi (~spinoshi@131.114.236.107) Quit (Ping timeout: 480 seconds)
[10:14] * dgurtner (~dgurtner@178.197.231.188) has joined #ceph
[10:17] * Nacer (~Nacer@80.12.59.248) Quit (Remote host closed the connection)
[10:18] * bgleb (~bgleb@94.19.146.224) Quit (Ping timeout: 480 seconds)
[10:18] * Nacer (~Nacer@80.12.59.248) has joined #ceph
[10:18] * bgleb_ (~bgleb@84.201.166.98-vpn.dhcp.yndx.net) Quit (Remote host closed the connection)
[10:19] * derjohn_mobi (~aj@x590e4201.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[10:19] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:28] <kefu> erikh: i don't think you are able to read the "mon addr" using rados_conf_get .
[10:29] * Teddybareman (~Lite@166.70.181.109) has joined #ceph
[10:29] * rendar (~I@host182-180-dynamic.7-87-r.retail.telecomitalia.it) has joined #ceph
[10:30] * ade (~abradshaw@tmo-111-113.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[10:31] * sankarsh_ (~sankarsha@106.206.156.30) has joined #ceph
[10:32] * kostikas (~kostikas@2001:648:2320:1:d63d:7eff:febc:c3e1) has joined #ceph
[10:33] <kostikas> hi all, is there an official repo for debian jessie ?
[10:33] <erikh> kefu: hmm. how would I go about it then? I'm working with C, fwiw.
[10:34] * linjan (~linjan@176.195.10.145) has joined #ceph
[10:34] * Nacer (~Nacer@80.12.59.248) Quit (Ping timeout: 480 seconds)
[10:34] <erikh> I'm able to pull out globals with it, just not subsection configuration
[10:35] * brutuscat (~brutuscat@151.Red-83-50-63.dynamicIP.rima-tde.net) has joined #ceph
[10:35] <kefu> erikh: i am not sure if i am misleading you, =)
[10:35] <kefu> erikh: you can get_val_from_conf_file()
[10:35] <erikh> one sec, let me look that one up
[10:36] <erikh> ah, that's C++; I don't know if I can use it.
[10:36] * Nacer (~Nacer@80.12.59.248) has joined #ceph
[10:36] <erikh> rather, seeing as this is cgo, probably not
[10:36] * anorak (~anorak@62.27.88.230) Quit (Remote host closed the connection)
[10:37] * ade (~abradshaw@tmo-100-84.customers.d1-online.com) has joined #ceph
[10:37] <kefu> erikh: yeah, it's C++, which is not exposed as part of rados API.
[10:37] <erikh> yeah. I'm also looking at it and wondering if the key syntax is the same there
[10:37] <kefu> erikh: you mean the the go binding of rados ?
[10:38] <erikh> I'm writing a librbd binding
[10:38] <kefu> you might need to reference ceph_mon.c:626
[10:38] <erikh> peeking
[10:38] <kefu> actually, i found it by reading the ceph-mon code =)
[10:38] <erikh> yeah, I have been digging all day for something
[10:39] <erikh> I'm new to the source tree, obvs :)
[10:39] * tahder (~tahder@130.123.179.59) Quit (Ping timeout: 480 seconds)
[10:39] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[10:39] <erikh> .cc or .c?
[10:39] <erikh> I can only find .cc
[10:40] <erikh> anyhow, I'll dig in here and find something :)
[10:40] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[10:40] <kefu> oh .cc
[10:41] <kefu> erikh: sorry
[10:42] <erikh> nbd
[10:42] <erikh> it's giving me places to look :)
[10:42] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[10:42] * ade (~abradshaw@tmo-100-84.customers.d1-online.com) Quit (Quit: Too sexy for his shirt)
[10:42] * ade (~abradshaw@tmo-100-84.customers.d1-online.com) has joined #ceph
[10:42] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:43] * bvivek (~bvivek@idp01webcache6-z.apj.hpecore.net) has joined #ceph
[10:45] <kefu> erikh: yeah. i believe cgo should enable us to link with C++ , but config.cc is considered the innards of librados (and ceph!),
[10:45] <erikh> yeah.
[10:45] <kefu> erikh: so i am not sure if you are able to find the symbols you are looking for in librados.
[10:45] <erikh> yeah.
[10:45] <erikh> :)
[10:46] <erikh> oh well, I'll see if I can find another way to work
[10:46] <kefu> that forces you to change the Makefile ...
[10:46] <kefu> for librados.
[10:46] <kefu> okay. =)
[10:46] <kefu> good luck
[10:46] <erikh> thanks :)
[10:47] <erikh> fwiw, I'm trying to write rbd map, which requires a lot of this data.
[10:47] * tahder (~tahder@130.123.179.59) has joined #ceph
[10:47] <kefu> but why would you need to find the address of monitor?
[10:48] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[10:48] <kefu> okay. though librados would suffice.
[10:48] <kefu> s/though/thought/
[10:49] <erikh> wow I had the link here
[10:49] <kefu> or librbd
[10:49] <erikh> you have to write to /sys/bus/rbd/add with a specific syntax to map
[10:49] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[10:49] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:49] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:50] <erikh> <ip of monitor> name=uname,secret=cephx rbd imagename
[10:50] <kefu> ahh. i c.
[10:50] <kefu> but the monitor could be down ?
[10:50] <erikh> there's a good doc on it that I seem to have lost, heh
[10:50] <kefu> okay.
[10:50] <erikh> yeah, that's handled with write errno
[10:53] <erikh> ok, it's getting late here, I think I'm heading to bed
[10:53] <erikh> thank you very much for your help kefu
[10:56] <kefu> erikh: good night. yw =)
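A minimal sketch (not from the log) of the sysfs interface discussed above, assuming a single monitor address and the hypothetical argument names shown; the real 'rbd map' also deals with multiple monitors, extra options and cleanup:

    def rbd_map(mon_addr, user, secret, pool, image):
        # the kernel rbd driver maps an image when a line of the form
        #   "<mon addr> name=<user>,secret=<cephx key> <pool> <image>"
        # is written to /sys/bus/rbd/add; a failed write surfaces as an
        # OSError carrying the errno mentioned above
        line = "{0} name={1},secret={2} {3} {4}".format(
            mon_addr, user, secret, pool, image)
        with open("/sys/bus/rbd/add", "w") as f:
            f.write(line)

    # e.g. rbd_map("10.0.0.1:6789", "admin", "AQAx...", "rbd", "imagename")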
[10:56] * derjohn_mobi (~aj@2001:6f8:1337:0:7c59:7582:95a7:1863) has joined #ceph
[10:57] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:57] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[10:57] * root________ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[10:58] * Nacer (~Nacer@80.12.59.248) Quit (Ping timeout: 480 seconds)
[10:59] * Teddybareman (~Lite@5NZAAEW41.tor-irc.dnsbl.oftc.net) Quit ()
[10:59] * pepzi (~pepzi@166.70.181.109) has joined #ceph
[11:02] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[11:04] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[11:06] * jks (~jks@178.155.151.121) has joined #ceph
[11:11] * ingslovak (~peto@cloud.vps.websupport.sk) has joined #ceph
[11:12] * ingslovak (~peto@cloud.vps.websupport.sk) Quit ()
[11:12] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[11:14] * jaakko (~jaakko@host-56-147.lasipalatsi.fi) has joined #ceph
[11:16] * derjohn_mobi (~aj@2001:6f8:1337:0:7c59:7582:95a7:1863) Quit (Ping timeout: 480 seconds)
[11:18] * derjohn_mobi (~aj@2001:6f8:1337:0:7c59:7582:95a7:1863) has joined #ceph
[11:22] <jaakko> Hello, we ran into some trouble when trying to upgrade our cluster. It seems that ceph.com/git is no longer valid and git.ceph.com seems to lack an ipv6 address. Is there any other way of getting that gpg key when using ceph-deploy?
[11:22] * Nacer (~Nacer@80.12.59.248) has joined #ceph
[11:28] * kefu (~kefu@114.92.106.47) Quit (Max SendQ exceeded)
[11:28] <jcsp> jaako: hmm, dunno. probably worth dropping a message to the ceph-users list as most of the US-based people who deal with those servers are still asleep
[11:28] * kefu (~kefu@li750-169.members.linode.com) has joined #ceph
[11:28] <jcsp> *jaakko
[11:28] * kefu is now known as kefu|afk
[11:29] * pepzi (~pepzi@7R2AACHVG.tor-irc.dnsbl.oftc.net) Quit ()
[11:29] * Arfed (~LRWerewol@relay-a.tor-exit.network) has joined #ceph
[11:33] * kefu|afk (~kefu@li750-169.members.linode.com) Quit (Max SendQ exceeded)
[11:34] * kefu (~kefu@li750-169.members.linode.com) has joined #ceph
[11:35] * kefu (~kefu@li750-169.members.linode.com) Quit ()
[11:35] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[11:36] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[11:40] * Nacer (~Nacer@80.12.59.248) Quit (Ping timeout: 480 seconds)
[11:44] * Nacer (~Nacer@80.12.59.248) has joined #ceph
[11:49] * derjohn_mobi (~aj@2001:6f8:1337:0:7c59:7582:95a7:1863) Quit (Ping timeout: 480 seconds)
[11:58] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[11:59] * Arfed (~LRWerewol@5NZAAEW6X.tor-irc.dnsbl.oftc.net) Quit ()
[11:59] * Silentkillzr (~phyphor@185.36.100.145) has joined #ceph
[12:00] * derjohn_mobi (~aj@b2b-94-79-172-98.unitymedia.biz) has joined #ceph
[12:00] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[12:02] * Nacer (~Nacer@80.12.59.248) Quit (Read error: Connection reset by peer)
[12:02] * Nacer (~Nacer@80.12.59.248) has joined #ceph
[12:08] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[12:09] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Read error: Connection reset by peer)
[12:10] * jcsp (~Adium@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[12:12] * Nacer (~Nacer@80.12.59.248) Quit (Ping timeout: 480 seconds)
[12:14] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[12:17] * Nacer (~Nacer@80.12.59.248) has joined #ceph
[12:17] * fam is now known as fam_away
[12:19] * derjohn_mobi (~aj@b2b-94-79-172-98.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[12:21] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[12:21] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[12:26] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Quit: Ex-Chat)
[12:29] * Silentkillzr (~phyphor@7R2AACHWK.tor-irc.dnsbl.oftc.net) Quit ()
[12:29] * QuantumBeep (~TehZomB@tor2e1.privacyfoundation.ch) has joined #ceph
[12:29] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[12:30] * Miouge (~Miouge@94.136.92.20) Quit ()
[12:32] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[12:32] * derjohn_mobi (~aj@2001:6f8:1337:0:7c59:7582:95a7:1863) has joined #ceph
[12:32] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[12:36] * zhaochao (~zhaochao@111.161.77.233) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.1.0/20150703062643])
[12:40] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[12:40] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[12:40] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[12:40] * bgleb (~bgleb@77.88.2.37-spb.dhcp.yndx.net) has joined #ceph
[12:41] * LeaChim (~LeaChim@host81-157-90-38.range81-157.btcentralplus.com) has joined #ceph
[12:43] * rakesh (~rakesh@121.244.87.124) Quit (Remote host closed the connection)
[12:45] * Debesis (~0x@233.128.140.82.mobile.mezon.lt) has joined #ceph
[12:45] * davidz (~davidz@cpe-23-242-27-128.socal.res.rr.com) has joined #ceph
[12:45] * smerz1 (~ircircirc@37.74.194.90) Quit (Remote host closed the connection)
[12:46] * derjohn_mobi (~aj@2001:6f8:1337:0:7c59:7582:95a7:1863) Quit (Ping timeout: 480 seconds)
[12:48] * bgleb (~bgleb@77.88.2.37-spb.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[12:48] * davidz1 (~davidz@2605:e000:1313:8003:791e:1ea2:ce22:df81) Quit (Ping timeout: 480 seconds)
[12:54] * bgleb (~bgleb@77.88.2.37-spb.dhcp.yndx.net) has joined #ceph
[12:55] * bgleb_ (~bgleb@130.193.40.17-vpna.dhcp.yndx.net) has joined #ceph
[12:59] * QuantumBeep (~TehZomB@9S0AAB20X.tor-irc.dnsbl.oftc.net) Quit ()
[12:59] * smerz (~ircircirc@37.74.194.90) has joined #ceph
[13:02] * bgleb (~bgleb@77.88.2.37-spb.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[13:03] * derjohn_mob (~aj@2001:6f8:1337:0:7c59:7582:95a7:1863) has joined #ceph
[13:03] * Roy (~Plesioth@edwardsnowden1.torservers.net) has joined #ceph
[13:05] * jmunhoz (~jmunhoz@149.pool85-61-146.dynamic.orange.es) has joined #ceph
[13:07] * Nacer (~Nacer@80.12.59.248) Quit (Ping timeout: 480 seconds)
[13:13] * flisky (~Thunderbi@106.39.60.34) Quit (Quit: flisky)
[13:22] * derjohn_mob (~aj@2001:6f8:1337:0:7c59:7582:95a7:1863) Quit (Ping timeout: 480 seconds)
[13:25] * brutuscat (~brutuscat@151.Red-83-50-63.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:26] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[13:27] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[13:31] * boichev (~boichev@213.169.56.130) has joined #ceph
[13:32] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[13:33] * Roy (~Plesioth@5NZAAEW90.tor-irc.dnsbl.oftc.net) Quit ()
[13:34] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) has joined #ceph
[13:34] * sage__ (~quassel@pool-100-0-197-246.bstnma.fios.verizon.net) Quit (Read error: Connection reset by peer)
[13:35] * sage__ (~quassel@pool-100-0-197-246.bstnma.fios.verizon.net) has joined #ceph
[13:35] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[13:38] * ingslovak (~peto@office.websupport.sk) has joined #ceph
[13:38] * MKoR (~DoDzy@h-213.61.149.100.host.de.colt.net) has joined #ceph
[13:39] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:40] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:43] * treenerd (~treenerd@85.193.140.98) Quit (Quit: Verlassend)
[13:43] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[13:49] * ira (~ira@nat-pool-rdu-u.redhat.com) has joined #ceph
[13:55] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[13:57] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) has joined #ceph
[14:00] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) Quit (Quit: Leaving)
[14:01] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[14:01] * elder_ (~elder@50.250.6.142) Quit (Quit: Leaving)
[14:04] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) Quit (Quit: Miouge)
[14:04] * sankarsh_ (~sankarsha@106.206.156.30) Quit (Quit: Leaving...)
[14:05] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) has joined #ceph
[14:06] * Nacer (~Nacer@80.12.43.138) has joined #ceph
[14:08] * MKoR (~DoDzy@7R2AACHYK.tor-irc.dnsbl.oftc.net) Quit ()
[14:08] * ggg (~kalmisto@relay-d.tor-exit.network) has joined #ceph
[14:08] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[14:10] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[14:12] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) Quit (Quit: Miouge)
[14:13] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) has joined #ceph
[14:19] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:19] * kostikas (~kostikas@2001:648:2320:1:d63d:7eff:febc:c3e1) has left #ceph
[14:19] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) Quit (Quit: Miouge)
[14:21] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) has joined #ceph
[14:23] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[14:29] * MikePar (~mparson@neener.bl.org) has joined #ceph
[14:30] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) Quit (Quit: Miouge)
[14:30] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[14:31] * bvivek (~bvivek@idp01webcache6-z.apj.hpecore.net) Quit (Quit: Leaving)
[14:31] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) has joined #ceph
[14:33] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[14:33] * derjohn_mob (~aj@2001:6f8:1337:0:7c59:7582:95a7:1863) has joined #ceph
[14:36] * nsoffer (~nsoffer@bzq-109-65-255-114.red.bezeqint.net) has joined #ceph
[14:37] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[14:38] * ggg (~kalmisto@9S0AAB23H.tor-irc.dnsbl.oftc.net) Quit ()
[14:38] * SaneSmith (~demonspor@7R2AACH0N.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:38] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[14:39] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[14:39] * brutuscat (~brutuscat@151.Red-83-50-63.dynamicIP.rima-tde.net) has joined #ceph
[14:43] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) Quit (Quit: Miouge)
[14:44] * derjohn_mob (~aj@2001:6f8:1337:0:7c59:7582:95a7:1863) Quit (Remote host closed the connection)
[14:44] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) has joined #ceph
[14:44] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) Quit ()
[14:45] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[14:46] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[14:46] * rendar (~I@host182-180-dynamic.7-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[14:46] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) has joined #ceph
[14:48] * ira (~ira@nat-pool-rdu-u.redhat.com) Quit (Ping timeout: 480 seconds)
[14:48] * kevinperks (~Adium@2606:a000:80ad:1300:895c:68ef:c283:be8a) has joined #ceph
[14:50] * rendar (~I@host182-180-dynamic.7-87-r.retail.telecomitalia.it) has joined #ceph
[14:50] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[14:51] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[14:52] * dennis_ (~dennis@2a00:801:7:1:1a03:73ff:fed6:ffec) Quit (Quit: Lämnar)
[14:52] * dennis_ (~dennis@2a00:801:7:1:1a03:73ff:fed6:ffec) has joined #ceph
[14:52] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:54] * georgem (~Adium@fwnat.oicr.on.ca) Quit ()
[14:54] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[14:54] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[14:54] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) Quit (Ping timeout: 480 seconds)
[14:56] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[14:58] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[14:58] * dennis_ (~dennis@2a00:801:7:1:1a03:73ff:fed6:ffec) Quit (Quit: Lämnar)
[15:00] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:01] * ira (~ira@nat-pool-bos-t.redhat.com) has joined #ceph
[15:02] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) has joined #ceph
[15:03] * linjan_ (~linjan@176.195.189.85) has joined #ceph
[15:08] * SaneSmith (~demonspor@7R2AACH0N.tor-irc.dnsbl.oftc.net) Quit ()
[15:09] * linjan (~linjan@176.195.10.145) Quit (Ping timeout: 480 seconds)
[15:09] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[15:10] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[15:11] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[15:11] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[15:12] * Freddy (~Sliker@relay-h.tor-exit.network) has joined #ceph
[15:14] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[15:15] * elder_ (~elder@50.250.6.142) has joined #ceph
[15:16] * squizzi (~squizzi@nat-pool-rdu-t.redhat.com) Quit (Remote host closed the connection)
[15:18] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:19] * treenerd (~treenerd@77.119.133.32.wireless.dyn.drei.com) has joined #ceph
[15:23] * kefu (~kefu@114.92.106.47) has joined #ceph
[15:23] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:24] * bgleb_ (~bgleb@130.193.40.17-vpna.dhcp.yndx.net) Quit (Remote host closed the connection)
[15:25] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:26] <Anticimex> anyone familiar with https://github.com/stackforge/puppet-ceph here?
[15:26] <Anticimex> if i want to run multiple clusters, it seems i have to do that in different puppet environments
[15:26] * Nacer (~Nacer@80.12.43.138) Quit (Ping timeout: 480 seconds)
[15:26] <Anticimex> simply based on https://github.com/stackforge/puppet-ceph/blob/master/examples/common.yaml
[15:28] * Nacer (~Nacer@176.31.89.99) has joined #ceph
[15:28] * yanzheng1 (~zhyan@182.139.207.212) Quit (Quit: This computer has gone to sleep)
[15:29] * yanzheng1 (~zhyan@182.139.207.212) has joined #ceph
[15:31] * sleinen1 (~Adium@2001:620:0:69::101) has joined #ceph
[15:33] * danieagle (~Daniel@189-47-91-152.dsl.telesp.net.br) has joined #ceph
[15:35] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Read error: Connection reset by peer)
[15:38] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[15:41] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) has joined #ceph
[15:42] * Freddy (~Sliker@7R2AACH2H.tor-irc.dnsbl.oftc.net) Quit ()
[15:42] * Azru (~FierceFor@176.10.99.202) has joined #ceph
[15:45] * yanzheng1 (~zhyan@182.139.207.212) Quit (Quit: This computer has gone to sleep)
[15:47] * treenerd (~treenerd@77.119.133.32.wireless.dyn.drei.com) Quit (Ping timeout: 480 seconds)
[15:50] * vbellur (~vijay@122.172.223.76) has joined #ceph
[15:54] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[15:55] * treenerd (~treenerd@91.141.4.86.wireless.dyn.drei.com) has joined #ceph
[15:59] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[16:00] * dyasny (~dyasny@104.158.33.70) has joined #ceph
[16:06] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) Quit (Quit: Miouge)
[16:06] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) has joined #ceph
[16:10] * ade (~abradshaw@tmo-100-84.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[16:11] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) Quit ()
[16:12] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) has joined #ceph
[16:12] * Azru (~FierceFor@9S0AAB264.tor-irc.dnsbl.oftc.net) Quit ()
[16:12] * rikai1 (~mps@216.218.134.12) has joined #ceph
[16:12] * kefu (~kefu@114.92.106.47) Quit (Max SendQ exceeded)
[16:12] * Miouge (~Miouge@static-213-115-57-18.sme.bredbandsbolaget.se) Quit ()
[16:13] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) Quit (Quit: bye!)
[16:13] * kefu (~kefu@li750-169.members.linode.com) has joined #ceph
[16:13] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) has joined #ceph
[16:14] <ska> Is it possible to access the metric data from the Calamari server without access to Graphite? Is it stored in the Pgsql?
[16:16] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[16:18] * treenerd (~treenerd@91.141.4.86.wireless.dyn.drei.com) Quit (Ping timeout: 480 seconds)
[16:19] * sjm (~sjm@pool-100-1-115-73.nwrknj.fios.verizon.net) has joined #ceph
[16:19] * trociny (~mgolub@93.183.239.2) Quit (Read error: No route to host)
[16:20] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[16:20] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[16:21] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[16:21] * bgleb (~bgleb@130.193.35.11-vpna.dhcp.yndx.net) has joined #ceph
[16:26] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[16:28] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[16:28] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[16:29] <Anticimex> guess i can just use multiple guilds or similar to manage multiple clusters
[16:30] * derjohn_mob (~aj@2001:6f8:1337:0:5ccf:5ea1:6da6:b0b2) has joined #ceph
[16:30] * kefu_ (~kefu@114.92.106.47) has joined #ceph
[16:30] * vbellur (~vijay@122.172.223.76) Quit (Ping timeout: 480 seconds)
[16:31] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[16:34] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[16:34] * kefu (~kefu@li750-169.members.linode.com) Quit (Ping timeout: 480 seconds)
[16:36] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) has joined #ceph
[16:36] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[16:36] <ska> What is the "path" in cephfs command? How do I find it? Its not clear in the man page.
[16:39] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[16:40] * rlrevell (~leer@vbo1.inmotionhosting.com) has left #ceph
[16:40] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[16:42] * rikai1 (~mps@7R2AACH45.tor-irc.dnsbl.oftc.net) Quit ()
[16:42] * jacoo (~KrimZon@tor-exit.eecs.umich.edu) has joined #ceph
[16:45] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[16:48] * derjohn_mob (~aj@2001:6f8:1337:0:5ccf:5ea1:6da6:b0b2) Quit (Ping timeout: 480 seconds)
[16:49] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[16:50] <championofcyrodi> http://i.imgur.com/SaaE7BP.png (On the right side is my monitor with tcpdump of 3 bonded NICs. On the left side is my slave with a single NIC. Both show arp requests encapsulated in vlan id 201. Any idea why they are not able to resolve MAC addresses from each other? I'm starting to think it has to do with my switch's handling of the link aggregate group (LAG) for the monitor's bonded NICs.)
[16:50] <championofcyrodi> they are physically connected to the same switch
[16:51] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[16:52] * vata (~vata@208.88.110.46) has joined #ceph
[16:53] <m0zes> championofcyrodi: so what does the configuration for the nics look like.
[16:53] <m0zes> ?
[16:53] <championofcyrodi> the bonded, the single slave, or all 4?
[16:54] <m0zes> the bonded one is the harder one to configure, but all 4 would be nice to see.
[16:55] <championofcyrodi> 3 bonded: http://paste.openstack.org/show/362134/
[16:56] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[16:56] * bgleb (~bgleb@130.193.35.11-vpna.dhcp.yndx.net) Quit (Remote host closed the connection)
[16:57] <championofcyrodi> Single Slave NIC: http://paste.openstack.org/show/362153/
[16:57] <championofcyrodi> m0zes^
[16:57] * bgleb (~bgleb@130.193.35.11-vpna.dhcp.yndx.net) has joined #ceph
[16:58] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[16:58] <m0zes> did you define the vlan interfaces?
[16:58] * sjusthm (~sam@96-39-232-68.dhcp.mtpk.ca.charter.com) has joined #ceph
[17:00] <Anticimex> championofcyrodi: can you pastebin an "ip a s"?
[17:02] <championofcyrodi> Anticimex: sure, but it's cluttered: http://pastebin.com/jMxmsiSp (slave)
[17:04] <Anticimex> when you are calling the left hand node a "slave", what do you really mean?
[17:04] <Anticimex> because slave is a bonding/lacp term also
[17:05] <Anticimex> and i was more interested in the monitor
[17:05] <Anticimex> because it's the monitor's config that's wrong from what i can tell
[17:05] <championofcyrodi> sorry, by slave i mean it is where the osd is running and i have 3 other 'nodes' that are the same config.
[17:06] <Anticimex> roger
[17:06] <championofcyrodi> the 4 nodes can actually see each other just fine.
[17:06] <championofcyrodi> its just monitor<->osds that is failing.
[17:06] <championofcyrodi> where my monitor has bonded 802.3ad
[17:06] * bgleb (~bgleb@130.193.35.11-vpna.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[17:07] <championofcyrodi> this switch is not allowing me to assign VLANs to the LAG itself, but instead marks the LAG 'excluded', saying they can be allowed via GVRP, which i enabled at one point... maybe i need to try fiddling w/ that again?
[17:07] <championofcyrodi> http://screenshots.portforward.com/routers/Linksys/SRW2048/Ports_to_VLAN.htm
[17:07] <championofcyrodi> it's a POS switch btw..
[17:08] <kfox1111> joshd1: thanks.
[17:08] <Anticimex> championofcyrodi: still waiting to see the ip a s output
[17:09] * Nacer (~Nacer@176.31.89.99) Quit (Read error: Connection reset by peer)
[17:09] <Anticimex> also, you can show the /proc/net/bonding/$bondname
[17:09] * TheSov (~TheSov@cip-248.trustwave.com) has joined #ceph
[17:09] <championofcyrodi> Anticimex: http://pastebin.com/jMxmsiSp (slave)
[17:09] <Anticimex> the bond should have 1 mac address visible in "ip a s" for all interfaces
[17:09] <Anticimex> championofcyrodi: not the slave
[17:09] <Anticimex> the monitor
[17:09] <championofcyrodi> http://pastebin.com/wXNgW9HG (monitor)
[17:09] <Anticimex> thx
[17:10] <TheSov> is ALB a valid way to bond against multiple switch stacks?
[17:10] <Anticimex> championofcyrodi: ok that looks ok, now let's see /proc/net/bonding
[17:11] <Anticimex> ALB?
[17:12] * jacoo (~KrimZon@5NZAAEXI2.tor-irc.dnsbl.oftc.net) Quit ()
[17:13] <championofcyrodi> fyi, i just determined the MAC addresses in the screen shot are the MACs for the br-mgmt
[17:14] <championofcyrodi> fetching bond output...
[17:14] <championofcyrodi> http://pastebin.com/eb2JcRiJ.
[17:15] * alram (~alram@206.169.83.146) has joined #ceph
[17:15] <Anticimex> looks ok
[17:15] <Anticimex> afaict
[17:16] <championofcyrodi> yea... the fact that the ARP packets are just disappearing, leads me to believe the switch is doing something weird
[17:17] <championofcyrodi> since i had to assign the 3 ports as a 'lag'
[17:18] <TheSov> ALB is mode 6 on the linux network bonding driver
[17:18] <TheSov> Algorithmic Load Balance
[17:19] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:20] <championofcyrodi> ah yea...
[17:21] <championofcyrodi> no, i'm using 802.3ad (mode 4?
[17:21] <Anticimex> as long as the lacp is properly operational, iirc, the sender's can use whatever hashing algo they please
[17:21] <TheSov> ALB is not lacp
[17:21] <TheSov> lacp is mode 4 and requires switch configuration
[17:21] * Mraedis (~AluAlu@relay-h.tor-exit.network) has joined #ceph
[17:22] <TheSov> well i think new cisco's can autodetect that now actually
[17:22] <Anticimex> oh, ok, according to the centos wiki balance-alb does arp rewriting
[17:22] <Anticimex> so it's switch independent
[17:23] <TheSov> right
[17:23] <Anticimex> we have some juniper's that can autodetect lacp also, in theory
[17:23] <TheSov> wow so you can just plug in
[17:23] <TheSov> that is awesome
[17:23] <TheSov> i avoid lacp for that reason
[17:23] <TheSov> it would be amazing to just plug in and go
[17:25] * vbellur (~vijay@122.171.181.56) has joined #ceph
[17:26] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:26] * ingslovak (~peto@office.websupport.sk) Quit (Quit: Leaving.)
[17:27] <championofcyrodi> only mode 4 is switch dependent.
[17:28] <TheSov> right, im trying to build my production ceph cluster so i can use multiple switches
[17:28] <championofcyrodi> my switch will 'auto detect' lacp, but it gives each link a separate aggregate ID.
[17:28] * linjan_ (~linjan@176.195.189.85) Quit (Ping timeout: 480 seconds)
[17:28] <championofcyrodi> which makes them not actually part of a group
[17:28] <TheSov> but something is not quite right when i add 4 nics to a alb bond
[17:28] <TheSov> trying to figure out why
[17:29] <championofcyrodi> so then i have to define a LAG (link aggregate group) and check that the group is LACP enabled
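A minimal sketch (not from the log) of a host-side sanity check for the discussion above, assuming the bond is named bond0; /proc/net/bonding/<name> (the file pasted earlier) reports the negotiated mode, which confirms whether the host really came up in 802.3ad before blaming the switch:

    def bonding_mode(bond="bond0"):
        # the kernel bonding driver reports a line such as
        #   "Bonding Mode: IEEE 802.3ad Dynamic link aggregation"  (mode 4)
        #   "Bonding Mode: adaptive load balancing"                (mode 6, balance-alb)
        with open("/proc/net/bonding/%s" % bond) as f:
            for line in f:
                if line.startswith("Bonding Mode:"):
                    return line.split(":", 1)[1].strip()
        return None

    print(bonding_mode())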
[17:32] * An_T_oine (~Antoine@192.93.37.4) Quit (Quit: Leaving)
[17:33] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[17:33] * moore (~moore@64.202.160.88) has joined #ceph
[17:33] * dgurtner (~dgurtner@178.197.231.188) Quit (Ping timeout: 480 seconds)
[17:33] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[17:35] <championofcyrodi> anyone familiar with GARP and whether or not i should enable it in this case? http://screenshots.portforward.com/routers/Linksys/SRW2048/GVRP.htm
[17:37] <championofcyrodi> for now i'm going to try GRE tunneling instead of VLANs... I'm sure it will be worse.
[17:38] * madkiss (~madkiss@2001:6f8:12c3:f00f:f897:3fb4:a79c:c28f) has joined #ceph
[17:38] * Nacer (~Nacer@176.31.89.99) has joined #ceph
[17:41] * bitserker (~toni@88.87.194.130) Quit (Quit: Leaving.)
[17:42] * derjohn_mob (~aj@x590cae02.dyn.telefonica.de) has joined #ceph
[17:44] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:46] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[17:46] <TheSov> gre? wow
[17:47] <TheSov> you can only have 1 gre tunnel from 1 point to 1 destination
[17:47] <TheSov> huge limitation
[17:48] <TheSov> so if you have an ip 10.0.0.1 and destination is 10.5.0.1 you can only have 1 tunnel between those
[17:49] <kfox1111> question... Package 1:python-ceph-0.87.2-0.el7.centos.x86_64 is obsoleted by 1:python-rados-0.80.7-2.el7.x86_64 which is already installed
[17:49] <kfox1111> giant seems to be blocked by the firefly provided by epel.
[17:49] <kfox1111> whats the best way to resolve that?
[17:51] * Mraedis (~AluAlu@5NZAAEXK2.tor-irc.dnsbl.oftc.net) Quit ()
[17:51] * Pettis (~SEBI@relay-h.tor-exit.network) has joined #ceph
[17:53] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) Quit (Quit: Leaving)
[17:57] <darkfader> championofcyrodi: if you want your switch to learn vlans your linux boxes or other switches announce
[17:58] <darkfader> then you need to turn it on, once globally and once for each port
[17:58] <darkfader> and you also need to turn on the announcment on linux side (using ip link set something)
[17:59] <darkfader> i can't swear it's going to work, last time i tried it was in a sad and shitty state, especially learning vlans on linux was bad back then, but also it didn't send the announces when configured to
[17:59] <darkfader> there's some boneheads who didn't like open networking standards, that's why people end up with GRE ;)
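On the Linux side, the announcement darkfader mentions is the GVRP flag on the VLAN interface; a sketch (eth0 and VLAN 100 are placeholders, and the switch still needs GVRP enabled both globally and on the port for the announcement to do anything):

    ip link add link eth0 name eth0.100 type vlan id 100 gvrp on
    ip link set eth0.100 up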
[17:59] <kfox1111> wow. not even a yum transaction will resolve the obsolete. :/
[18:00] <championofcyrodi> heh, thanks darkfader.
[18:00] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[18:00] <championofcyrodi> my switch is old and i don't want to muck w/ it anymore, so i'm going to try gre. also, it will help me grok GRE w/ docker a bit, which i need to be schooled up on.
[18:01] * Nacer (~Nacer@176.31.89.99) Quit (Remote host closed the connection)
[18:02] <darkfader> sounds like a good plan
[18:03] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[18:05] * branto (~branto@ip-213-220-232-132.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[18:09] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:10] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Read error: Connection reset by peer)
[18:11] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[18:11] * jordanP (~jordan@213.215.2.194) Quit (Quit: Leaving)
[18:11] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[18:13] * Nacer (~Nacer@176.31.89.99) has joined #ceph
[18:13] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Read error: Connection reset by peer)
[18:14] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[18:14] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[18:17] * elder_ (~elder@50.250.6.142) Quit (Quit: Leaving)
[18:19] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) Quit (Quit: burley)
[18:21] * Pettis (~SEBI@5NZAAEXNG.tor-irc.dnsbl.oftc.net) Quit ()
[18:21] * Pommesgabel (~Swompie`@129.41.159.22) has joined #ceph
[18:29] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[18:33] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[18:35] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[18:36] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[18:37] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[18:39] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Remote host closed the connection)
[18:50] * Nacer (~Nacer@176.31.89.99) Quit (Remote host closed the connection)
[18:51] * Pommesgabel (~Swompie`@9S0AAB3FA.tor-irc.dnsbl.oftc.net) Quit ()
[18:52] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Quit: Miouge)
[18:53] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[18:53] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[18:57] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[19:00] <ska> Is there one osd daemon per OSD?
[19:00] <ska> If I have 3 osd's on a server, I'll have 3 OSD daemons?
[19:01] <gleam> yes
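Put differently, each OSD id is one ceph-osd process with its own data directory, so a host carrying three OSDs runs three daemons. A quick way to see it (output is illustrative):

    ps -ef | grep ceph-osd
    # ceph-osd -i 0 ...   -> /var/lib/ceph/osd/ceph-0
    # ceph-osd -i 1 ...   -> /var/lib/ceph/osd/ceph-1
    # ceph-osd -i 2 ...   -> /var/lib/ceph/osd/ceph-2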
[19:04] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[19:05] * trociny (~Mikolaj@91.225.202.178) has joined #ceph
[19:08] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[19:08] * pressureman (~pressurem@62.217.45.26) Quit (Quit: Ex-Chat)
[19:10] * brutuscat (~brutuscat@151.Red-83-50-63.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:10] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) has joined #ceph
[19:11] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[19:12] * ira (~ira@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:12] * mjevans (~mjevans@li984-246.members.linode.com) Quit (Quit: Reconnecting)
[19:12] * mjevans (~mjevans@li984-246.members.linode.com) has joined #ceph
[19:14] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[19:16] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[19:26] * Ralth (~isaxi@192.42.115.101) has joined #ceph
[19:30] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) has joined #ceph
[19:30] * Lyncos (~lyncos@208.71.184.41) has joined #ceph
[19:30] <Lyncos> Hi
[19:30] <Lyncos> Just a quick question... where can I look to figure out why ceph is not putting my OSD out when it's down? is there any sort of delay?
[19:31] * sleinen1 (~Adium@2001:620:0:69::101) Quit (Ping timeout: 480 seconds)
[19:39] <Lyncos> I guess I should set that parameter: mon osd down out interval
[19:42] <ska> gleam: thanks.. I was thinking of an OSD as either a daemon or a storage object, but it's really both.
[19:43] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:43] <snakamoto> Lyncos: I believe default delay is 5 minutes
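For what it's worth, that interval only controls when a down OSD is marked out (which is what triggers rebalancing); I/O should fail over to the surviving replicas as soon as the OSD is marked down, before the out interval expires. If it still needs shortening, a sketch (120 seconds is just an example value):

    # ceph.conf on the monitors
    [mon]
        mon osd down out interval = 120

    # or at runtime
    ceph tell mon.* injectargs '--mon-osd-down-out-interval 120'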
[19:44] <Lyncos> Isn't that a bit long? let's say I lose a node... my cluster will hang for 5 minutes?
[19:45] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[19:47] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[19:47] <snakamoto> It won't hang, the secondary OSD should respond if the primary is down
[19:48] <Lyncos> In my case it is hanging
[19:48] <snakamoto> what's your replica count?
[19:48] <Lyncos> 2
[19:48] <Lyncos> and I have 2 rack
[19:48] <Lyncos> and 6 nodes
[19:48] <Lyncos> 3 in each rack
[19:49] <Lyncos> min_size is set to 1
[19:49] <Lyncos> and I'm testing losing 1 full rack
[19:51] <Lyncos> as soon as I lose a 2nd node in the same rack it starts hanging
[19:51] <snakamoto> "as soon as I loose a 2nd node in the same rack"
[19:51] <snakamoto> so you're taking down 1 rack, a node at a time?
[19:51] <Lyncos> yeah
[19:52] <Lyncos> but
[19:52] <Lyncos> no
[19:52] <snakamoto> ?
[19:52] <Lyncos> I want to be able to lose 1 full rack at once
[19:52] <Lyncos> and troubleshooting why it's not working... I shut down node 1... still works... when I shut down the 2nd node it stops working
[19:52] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[19:52] <Lyncos> I guess the distribution is not right
[19:52] <snakamoto> do you have failure zones configured?
[19:53] <Lyncos> http://pastebin.com/VipBn6NX
[19:53] <Lyncos> this is the rule I'm using... I'm not quite sure it is correct
[19:54] <snakamoto> can you put a ceph osd tree in pastebin?
[19:54] <Lyncos> sure
[19:54] <Lyncos> http://pastebin.com/9cppxept
[19:56] * Ralth (~isaxi@5NZAAEXRV.tor-irc.dnsbl.oftc.net) Quit ()
[19:56] * Arfed (~Xerati@hessel0.torservers.net) has joined #ceph
[19:57] <Lyncos> I'm really not sure with the rule...
[20:00] * wenjunhuang (~wenjunhua@111.161.63.110) Quit (Remote host closed the connection)
[20:00] * wenjunhuang (~wenjunhua@111.161.63.110) has joined #ceph
[20:01] <snakamoto> It looks right to me, but I haven't worked with different pool types in the crush map
[20:02] <snakamoto> I only have experience with simple configurations
[20:02] <Lyncos> hmmm
[20:02] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[20:03] * vbellur (~vijay@122.171.181.56) Quit (Ping timeout: 480 seconds)
[20:03] <snakamoto> Maybe spot check a couple PGs and verify that their primary and secondary OSDs don't exist within the same rack?
[20:03] <Lyncos> yeah
[20:03] <Lyncos> I'll do a dump
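A quick way to do that spot check (the pgid below is just an example):

    ceph pg dump pgs_brief | head      # pgid plus up/acting OSD sets
    ceph pg map 2.1f                   # up/acting sets for a single pg
    ceph osd tree                      # map those osd ids back to host and rack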
[20:03] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit ()
[20:04] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Quit: Miouge)
[20:05] * alram (~alram@206.169.83.146) Quit (Ping timeout: 480 seconds)
[20:05] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[20:05] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[20:05] <ska> Can a Pool's PG's be migrated out of a Pool?
[20:05] * xarses (~xarses@12.164.168.117) has joined #ceph
[20:11] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[20:11] * alram (~alram@206.169.83.146) has joined #ceph
[20:12] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[20:16] <Lyncos> My rule was bad
[20:16] <Lyncos> it's working now
[20:16] <Lyncos> I just changed it to only 1 step choose: step chooseleaf firstn 2 type rack
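For anyone following along, a sketch of a complete rule built around that step (the rule name, ruleset number and root name are placeholders; firstn 0 would track the pool size automatically, while firstn 2 as used above pins it at two racks):

    rule replicated_across_racks {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 2 type rack
        step emit
    }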
[20:17] * root________ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[20:18] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[20:19] <via> can an osd still function after getting its journal wiped? i know the docs say it's best to just rewipe it, but will it still contain data that is just behind and could be caught up?
[20:19] * ibravo (~ibravo@72.198.142.104) Quit (Quit: This computer has gone to sleep)
[20:20] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[20:21] * linjan_ (~linjan@broadband-178-140-21-64.nationalcablenetworks.ru) has joined #ceph
[20:22] * jmunhoz (~jmunhoz@149.pool85-61-146.dynamic.orange.es) Quit (Ping timeout: 480 seconds)
[20:22] * alram (~alram@206.169.83.146) Quit (Quit: Lost terminal)
[20:22] * alram (~alram@206.169.83.146) has joined #ceph
[20:24] * snakamoto (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[20:26] * Arfed (~Xerati@7R2AACIEE.tor-irc.dnsbl.oftc.net) Quit ()
[20:26] * kalmisto (~Yopi@h-213.61.149.100.host.de.colt.net) has joined #ceph
[20:26] * ibravo (~ibravo@72.198.142.104) Quit (Quit: This computer has gone to sleep)
[20:28] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[20:29] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[20:31] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) has joined #ceph
[20:32] * ibravo (~ibravo@72.198.142.104) Quit ()
[20:34] * ibravo (~ibravo@75-148-30-105-WashingtonDC.hfc.comcastbusiness.net) has joined #ceph
[20:38] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[20:38] <snakamoto> lyncos: awesome!
[20:40] * ibravo (~ibravo@75-148-30-105-WashingtonDC.hfc.comcastbusiness.net) Quit (Quit: This computer has gone to sleep)
[20:41] * kefu_ (~kefu@114.92.106.47) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:42] * ibravo (~ibravo@75-148-30-105-WashingtonDC.hfc.comcastbusiness.net) has joined #ceph
[20:45] * ibravo (~ibravo@75-148-30-105-WashingtonDC.hfc.comcastbusiness.net) Quit ()
[20:47] * ibravo (~ibravo@75-148-30-105-WashingtonDC.hfc.comcastbusiness.net) has joined #ceph
[20:49] * TheSov2 (~TheSov@cip-248.trustwave.com) has joined #ceph
[20:50] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) Quit (Remote host closed the connection)
[20:50] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[20:51] * TheSov (~TheSov@cip-248.trustwave.com) Quit (Ping timeout: 480 seconds)
[20:53] * shaunm (~shaunm@74.215.76.114) Quit (Ping timeout: 480 seconds)
[20:54] * davidzlap (~Adium@2605:e000:1313:8003:ace6:6ab9:8390:1fe7) has joined #ceph
[20:56] * kalmisto (~Yopi@9S0AAB3KK.tor-irc.dnsbl.oftc.net) Quit ()
[20:56] * Lattyware (~Tenk@7R2AACIGF.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:59] * snakamoto (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[21:00] * Lyncos (~lyncos@208.71.184.41) has left #ceph
[21:01] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[21:05] * kevinperks (~Adium@2606:a000:80ad:1300:895c:68ef:c283:be8a) Quit (Quit: Leaving.)
[21:11] * linjan_ (~linjan@broadband-178-140-21-64.nationalcablenetworks.ru) Quit (Ping timeout: 480 seconds)
[21:13] <ska> Can OSDs be migrated?
[21:14] <snakamoto> I don't know but I'm curious what you mean
[21:17] <ska> It may be possible but from a configuration point of view it seems much easier to just remove an OSD, create a new one, then allow Ceph to do its magic on the PGs.
[21:18] <m0zes> it is possible to migrate osds between hosts.
[21:18] * ibravo (~ibravo@75-148-30-105-WashingtonDC.hfc.comcastbusiness.net) Quit (Quit: This computer has gone to sleep)
[21:18] <m0zes> i.e. physical disks.
[21:18] <ska> Also, we're modeling the Ceph system for our monitoring system, and we're wondering if it makes sense to think of the OSD data and daemon as two separate objects or a single one..
[21:19] <ska> The first question is related to our second one..
[21:19] <m0zes> if the disk was prepared with ceph-disk, simply moving the disk from the old host to the new one (with all the ceph pre-reqs installed) should have udev trigger the mounting and starting of the moved osd.
[21:20] <m0zes> *if* the journal is moved with the osd. i.e. if the journal is on the osd disk, great. if it isn't, the journal device needs to move too.
[21:20] * bgleb (~bgleb@94.19.146.224) has joined #ceph
[21:21] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[21:21] <ska> Sure.. We prep our disks with Journal on-disk.
[21:21] <m0zes> then it should be no problem to simply stop an osd, pop out the disk and put it in somewhere else.
[21:22] <m0zes> ceph should also move it in the crushmap to the new host.
[21:22] <ska> Can an OSD ever have 2 daemons that service the same OSD data?
[21:22] <m0zes> if you haven't disabled that "feature"
[21:22] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) has joined #ceph
[21:23] <ska> m0zes: what feature allows CM rebuild?
[21:23] <m0zes> osd crush update on start
[21:24] <m0zes> we set it false, as we maintain our own maps. it would probably be better for us to create a 'trigger' script to put them in the right place.
[21:24] <m0zes> there should be 1 osd daemon accessing the "data" on the osd disk.
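The relevant ceph.conf knobs for that, as a sketch (the hook path is a placeholder):

    [osd]
        # don't let a restarted/moved OSD re-home itself under its new host
        osd crush update on start = false

        # or keep the automatic update but have a script report the location
        # osd crush location hook = /usr/local/bin/crush-location.sh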
[21:26] * Lattyware (~Tenk@7R2AACIGF.tor-irc.dnsbl.oftc.net) Quit ()
[21:26] * Aramande_ (~Dinnerbon@9S0AAB3MS.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:26] * rendar (~I@host182-180-dynamic.7-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:28] <TheSov2> can i have multiple public network interfaces on ceph?
[21:28] <TheSov2> for each osd?
[21:29] * rendar (~I@host182-180-dynamic.7-87-r.retail.telecomitalia.it) has joined #ceph
[21:31] <m0zes> I don't believe so.
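The usual model is one public network (plus an optional cluster network) per cluster rather than per-OSD public interfaces; each daemon binds to whichever local interface has an address in the configured range. A ceph.conf sketch with placeholder subnets:

    [global]
        public network  = 10.10.0.0/24
        cluster network = 10.20.0.0/24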
[21:32] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[21:34] * snakamoto (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[21:38] * davidzlap (~Adium@2605:e000:1313:8003:ace6:6ab9:8390:1fe7) Quit (Quit: Leaving.)
[21:46] * davidzlap (~Adium@2605:e000:1313:8003:ace6:6ab9:8390:1fe7) has joined #ceph
[21:47] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[21:48] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Remote host closed the connection)
[21:50] <ska> Can Calamari handle more than one cluster?
[21:51] * bgleb (~bgleb@94.19.146.224) Quit (Remote host closed the connection)
[21:51] <ska> I can have 3 clusters being monitored on Calamari?
[21:51] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit (Ping timeout: 480 seconds)
[21:54] * davidzlap (~Adium@2605:e000:1313:8003:ace6:6ab9:8390:1fe7) Quit (Quit: Leaving.)
[21:55] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:56] * Aramande_ (~Dinnerbon@9S0AAB3MS.tor-irc.dnsbl.oftc.net) Quit ()
[21:56] * jwandborg (~AG_Scott@tor.het.net) has joined #ceph
[21:56] * rlrevell (~leer@vbo1.inmotionhosting.com) has left #ceph
[21:58] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[22:04] <ska> I guess since Ceph supports multiple clusters, Calamari is just interpreting that information..
[22:05] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[22:05] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) has joined #ceph
[22:08] * davidzlap (~Adium@cpe-23-242-27-128.socal.res.rr.com) has joined #ceph
[22:10] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[22:11] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[22:13] * Hemanth (~Hemanth@117.192.237.130) has joined #ceph
[22:14] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[22:15] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[22:15] * davidzlap (~Adium@cpe-23-242-27-128.socal.res.rr.com) Quit (Quit: Leaving.)
[22:26] * jwandborg (~AG_Scott@9S0AAB3NX.tor-irc.dnsbl.oftc.net) Quit ()
[22:26] * phyphor (~elt@relay-a.tor-exit.network) has joined #ceph
[22:26] <MikePar> Anyone have any expertise in getting Ceph & Openstack playing together? I have functional ceph, I *think* I have glance backing into Ceph, but I can't seem to get Cinder working.
[22:29] * TheSov2 (~TheSov@cip-248.trustwave.com) Quit (Quit: Leaving)
[22:32] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:32] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:32] * bgleb (~bgleb@94.19.146.224) has joined #ceph
[22:33] * bgleb (~bgleb@94.19.146.224) Quit (Remote host closed the connection)
[22:33] * bgleb (~bgleb@94.19.146.224) has joined #ceph
[22:34] * kevinperks (~Adium@2606:a000:80ad:1300:c086:3d26:8c47:23cb) has joined #ceph
[22:35] * bitserker (~toni@188.87.126.67) has joined #ceph
[22:39] * alram (~alram@206.169.83.146) Quit (Quit: leaving)
[22:42] * rlrevell1 (~leer@vbo1.inmotionhosting.com) has joined #ceph
[22:44] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:45] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[22:46] <ska> MikePar: Have you seen: http://ceph.com/docs/master/rbd/rbd-openstack/
[22:46] * sjm (~sjm@pool-100-1-115-73.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[22:47] * bgleb_ (~bgleb@130.193.35.49-vpna.dhcp.yndx.net) has joined #ceph
[22:48] * ibravo (~ibravo@72.83.69.64) has joined #ceph
[22:52] * sjm (~sjm@pool-100-1-115-73.nwrknj.fios.verizon.net) has joined #ceph
[22:53] * Hemanth (~Hemanth@117.192.237.130) Quit (Quit: Leaving)
[22:54] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[22:54] * bgleb (~bgleb@94.19.146.224) Quit (Ping timeout: 480 seconds)
[22:54] <MikePar> yeah
[22:54] <qhartman> Anyone have any suggestions for discovering which rbd volumes in a pool are the busiest?
[22:54] <MikePar> those are the docs I've been working from
[22:54] <qhartman> I'm trying to track down which VMs might be using more IO than I'd prefer
[22:56] * phyphor (~elt@9S0AAB3O7.tor-irc.dnsbl.oftc.net) Quit ()
[22:57] * davidzlap (~Adium@cpe-23-242-27-128.socal.res.rr.com) has joined #ceph
[22:58] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) has joined #ceph
[22:58] * ibravo (~ibravo@72.83.69.64) Quit (Quit: Leaving)
[23:00] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[23:00] * AG_Scott (~Inuyasha@89.105.194.70) has joined #ceph
[23:01] <m0zes> is there a significant benefit for monitor storage being fast SSD?
[23:02] * ingslovak (~peto@cloud.vps.websupport.sk) has joined #ceph
[23:03] <qhartman> m0zes, Mon dumps a lot of logs. If that's all the machine is doing a spinner should be fine, but if you have other things running as well, an SSD would probably be good
[23:03] <qhartman> Since you don't need a lot of storage volume and SSDs are getting so cheap, it would be hard to recommend anything else unless you have other requirements or constraints that contradict that.
[23:06] * davidzlap (~Adium@cpe-23-242-27-128.socal.res.rr.com) Quit (Quit: Leaving.)
[23:06] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[23:06] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Ping timeout: 480 seconds)
[23:07] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[23:07] * rlrevell1 (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[23:09] <m0zes> well, here is the situation. we purchased a bunch of machines (24x dell r730xds) for ceph storage (all serve mons and 18x osds). unbeknownst to me, we purchased "read-optimized" SSDs for the journals/OS. We're trying to find/fix potential bottlenecks in the system without having to purchase 48 new s3710 ssds to replace all of the "read-optimized" ones. I was thinking of splitting monitors and MDS
[23:09] <m0zes> out to separate boxes, but my boss is wanting to just buy faster SSDs for a subset of the nodes and restrict mon/mds functions to them.
[23:10] <qhartman> There's really zero reason to be running mons on all of them
[23:10] <qhartman> What are these "read optimized" SSDs?
[23:10] <m0zes> true, coworker set that up. I didn't realize he was going to until it was done. I haven't changed it yet...
[23:11] <qhartman> but yeah, running mons on separate boxes from your OSDs is almost certainly a good idea
[23:11] <m0zes> Lite-on ECT-480N9S
[23:11] <qhartman> and they can be cheap
[23:12] <qhartman> I assume these are small for OS and journal?
[23:12] <m0zes> partitioned up for os and journal. os is a software raid-1.
[23:13] * trociny (~Mikolaj@91.225.202.178) Quit (Quit: away)
[23:13] * snakamoto (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[23:14] <qhartman> so I assume you have the 24-bay model, with 18 disks for OSD, leaving 6 for journals / OS?
[23:14] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:14] <m0zes> nope, the 16+2 model.
[23:15] <m0zes> 12x hot-swap 3.5" disks, 4x internal 3.5" disks, 2x SSDs
[23:15] <qhartman> ok, and you're running single OSDs on each spinner plus the two SSDs?
[23:16] <qhartman> so, that's 16 OSDs, is the 18 above a typo?
[23:16] <m0zes> the SSD are serving an osd themselves, but it is unused at the moment
[23:16] <qhartman> what size are the SSDs?
[23:16] <m0zes> the original thought was to use them as a cachepool too, but I wasn't holding my breath on that.
[23:17] <m0zes> 480GB
[23:18] <qhartman> So looking at those SSD specs, I would expect them to be fine
[23:18] <qhartman> My biggest concern would be having that many OSDs journal to only two SSDs
[23:18] <m0zes> 50G for os, 50G for ssd osd, 64G swap, 8x 35G parts for journals.
[23:19] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[23:19] <m0zes> when I ran the ssd benchmarking, the only way I could get the numbers claimed in the spec sheet was to run it with 12x writers.
[23:19] <qhartman> Yeah, I'm not concerned with size, I'm more concerned w/ bandwidth on the bus and with write endurance
[23:19] <m0zes> s/numbers/iops/
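A rough way to reproduce that kind of multi-writer journal load with fio (the device path and runtime are placeholders; this writes directly to the device, so only point it at something expendable):

    fio --name=journal-sim --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=12 --iodepth=1 \
        --runtime=60 --time_based --group_reporting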
[23:20] <qhartman> I have single SSDs for journals in my OSD boxes and after having them up for about 4 months in their current config, I'm going to burn out the SSDs in only a couple years at the current rate
[23:20] <qhartman> and I'm only running 3 OSDs to each of them
[23:20] <m0zes> they are so oversubscribed, and slow, that I had to turn off trim because it was causing the spinners to suicide timeout.
[23:21] <qhartman> I'm running samsung 850 Pros
[23:21] * davidzlap (~Adium@cpe-23-242-27-128.socal.res.rr.com) has joined #ceph
[23:21] <qhartman> The liteon specs read a bit weird, so I'm not sure how they compare
[23:21] <m0zes> unfortunately it wasn't really my choice on the already purchased ssds.
[23:21] <qhartman> right
[23:22] * davidzlap (~Adium@cpe-23-242-27-128.socal.res.rr.com) Quit ()
[23:22] <qhartman> but back to your question regarding mons, the very first thing I'd do is pare them back to at most 7, and more practically 3
[23:23] <m0zes> okay. will do.
[23:23] <qhartman> and then see how it rides if you don't have hosts to dedicate to them
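Paring the monitors back is mostly a matter of retiring them one at a time while keeping an odd count; a sketch with a placeholder mon name:

    service ceph stop mon.node24       # or: systemctl stop ceph-mon@node24
    ceph mon remove node24
    # then drop its [mon.node24] section / "mon initial members" entry from
    # ceph.conf and repeat until only the 3 (or 5) you want remain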
[23:23] * bitserker (~toni@188.87.126.67) Quit (Quit: Leaving.)
[23:23] <qhartman> I'm concerned about how heavily over subbed those SSDs are, and I don't think getting faster ones will help all that much
[23:24] <qhartman> these ones aren't "great", but they aren't that slow
[23:27] <qhartman> Assuming you've got 10Gb or faster interconnect, the journal traffic to those things could get pretty intense, enough to saturate the bus I would think
[23:27] * sjm (~sjm@pool-100-1-115-73.nwrknj.fios.verizon.net) has left #ceph
[23:28] <m0zes> 40Gb. wanted it for latency...
[23:28] <qhartman> sure
[23:28] <qhartman> then you can easily soak those SSDs running that many OSDs to them
[23:29] <m0zes> yes.
[23:29] * kevinperks (~Adium@2606:a000:80ad:1300:c086:3d26:8c47:23cb) Quit (Quit: Leaving.)
[23:29] <qhartman> and the only real answer to that (imho) is either fewer OSDs per box or more SSDs.
[23:29] <qhartman> I don't think replacing the existing ones will buy you much
[23:30] <qhartman> but I'd be happy to be proven wrong
[23:30] * AG_Scott (~Inuyasha@5NZAAEXZJ.tor-irc.dnsbl.oftc.net) Quit ()
[23:30] * Joppe4899 (~SEBI@exit1.ipredator.se) has joined #ceph
[23:31] * kutija (~kutija@95.180.90.38) has joined #ceph
[23:31] <qhartman> if you do get dedicated hardware for your mons, SSDs for them probably would be a good idea. Your cluster is going to be large enough that it's going to be generating a lot of log traffic on the mons
[23:34] * kevinperks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[23:35] * bgleb_ (~bgleb@130.193.35.49-vpna.dhcp.yndx.net) Quit (Remote host closed the connection)
[23:39] <kutija> if I'm stuck with spinning disks, 250MB/s each, is using RAID1/10 a good idea in order to get better performance?
[23:39] <kutija> let's say I will have 2 storage servers
[23:39] <kutija> I know that RAID-1 and data resiliency is a bad mix
[23:40] * erikh (~erikh@c-73-223-105-145.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[23:40] <m0zes> s/1/0/
[23:40] <kutija> oh yeah
[23:40] <kutija> I always mix them
[23:40] <m0zes> I doubt it would have better performance than ceph's native striping...
[23:40] <kutija> hm
[23:40] * m0zes remembers it by the % chance you have of keeping your data ;)
[23:40] <kutija> so getting better storage in terms of spinnings disks or ssd's is the only way to get more IO performance
[23:41] <kutija> for example I have OpenStack cloud with 2 CEPH servers, 8 OSD
[23:41] <kutija> and each disk is around 250MB/s
[23:41] <kutija> and my cloud asks for more
[23:42] <kutija> and I'm kinda stuck with disks for now
[23:42] <kutija> and wonder what should I do to get more IO
[23:44] <kutija> should I expand my storage by adding 2 more servers to get a replication factor of 4, or make them independent
[23:45] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[23:45] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[23:48] <qhartman> kutija, it kinda depends on whether or not you actually need more throughput or more IO from the groups you are serving
[23:48] <qhartman> assuming you are stuck on Gb links, you are going to saturate those long before you run out of throughput from your disks.
[23:48] <qhartman> IO is almost always a bigger concern
[23:48] <kutija> actually I have 10G in a private network
[23:49] <kutija> so that is not an issue for now :)
[23:49] <qhartman> is that 8 OSD total?
[23:49] <kutija> currently yes
[23:49] <kutija> 2 servers, 8 OSD
[23:50] <kutija> and the disks are almost 100% utilized
[23:50] <qhartman> in terms of IO?
[23:50] <qhartman> like, you have very deep request queues and the like?
[23:50] <kutija> yes, I have a high IOwait, between 35 and 55%
[23:50] <qhartman> yeah, sounds like my situation, but worse
[23:51] <qhartman> the only answer is to either add more spindles to spread the requests around, or to switch to SSDs
[23:51] <qhartman> you might be able to buy some headroom by moving "unimportant" data to a pool with size=2
[23:51] <qhartman> that will reduce some of the overhead, since each request will only generate 2 writes instead of three
[23:52] <qhartman> but that's a bandaid at best
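The setting being referred to, shown for a hypothetical pool named scratch:

    ceph osd pool set scratch size 2
    ceph osd pool set scratch min_size 1
    ceph osd pool get scratch size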
[23:52] <qhartman> historically of course more spindles was far more affordable
[23:53] <qhartman> but with SSD prices doing what they are, that may not be true anymore if your storage capacity needs aren't huge
[23:53] <kutija> so for example I add 2 more servers with the same number of disks and extend the current CEPH cluster to get size=4, or create a new one with size=2
[23:53] <kutija> well...
[23:53] <kutija> for example I have 27TB of data now
[23:53] <qhartman> I wouldn't go to size=4
[23:54] <qhartman> I assume you're on 3 now, correct?
[23:54] <qhartman> I'm talking about the setting that determines how many copies of data that ceph stores
[23:54] <qhartman> it's not related to the number of servers or OSDs you have
[23:54] <kutija> if you mean replication factor
[23:54] <kutija> it's 2
[23:54] <qhartman> yeah
[23:55] <qhartman> ok, so my other suggestion above won't help
[23:56] <kutija> so either i get faster spindles or I need to get a budget for SSD
[23:56] <qhartman> to be overly specific, I'm referring to the pool "size" key referred to on this page: http://ceph.com/docs/master/rados/operations/pools/
[23:56] <qhartman> not necessarily faster, but more of them
[23:57] <qhartman> as you add more spindles the IO load gets spread around
[23:57] <qhartman> so if you were to double your current config like you suggest, you'd have twice the iops resources available
[23:57] <kutija> and I can stick with the same replication factor
[23:58] <kutija> of two or three if I want
[23:58] <qhartman> at 27TB of data, SSDs would be very costly, so that's probably a better approach
[23:58] <qhartman> right
[23:58] <kutija> hmmm
[23:58] <qhartman> I personally don't trust anything that matters to replication of 2
[23:58] <kutija> me neither but baby steps
[23:59] <kutija> and limited budget makes my life hard
[23:59] <qhartman> if anything goes sideways, ceph has no way of knowing which copy is right, and you get into a pretty bad state
[23:59] <kutija> and the nights pretty long

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.