#ceph IRC Log

IRC Log for 2014-07-22

Timestamps are in GMT/BST.

[0:02] * wrencsok1 (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[0:02] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[0:02] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[0:07] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[0:08] * sarob (~sarob@67.23.204.226) has joined #ceph
[0:09] * jtaguinerd (~jtaguiner@203.215.116.134) Quit (Quit: Leaving.)
[0:09] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving.)
[0:16] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:10] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[0:11] * benner (~benner@162.243.49.163) Quit (Remote host closed the connection)
[0:11] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) has joined #ceph
[0:13] * gleam (gleam@dolph.debacle.org) Quit (Ping timeout: 480 seconds)
[0:14] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) Quit ()
[0:16] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: leaving)
[0:20] * rendar (~I@host105-109-dynamic.49-79-r.retail.telecomitalia.it) Quit ()
[0:22] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:22] * gleam (gleam@dolph.debacle.org) has joined #ceph
[0:25] * sarob (~sarob@67.23.204.226) Quit (Quit: Leaving...)
[0:25] * sarob (~sarob@67.23.204.226) has joined #ceph
[0:29] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[0:33] * benner (~benner@162.243.49.163) has joined #ceph
[0:37] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:38] * jameseck (~james@cpc68609-stav15-2-0-cust582.17-3.cable.virginm.net) has joined #ceph
[0:41] * vmx (~vmx@dslb-084-056-047-201.pools.arcor-ip.net) Quit (Quit: Leaving)
[0:42] * KindOne (~KindOne@107.170.17.75) has joined #ceph
[0:44] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:46] * sigsegv (~sigsegv@188.25.123.201) Quit (Quit: sigsegv)
[0:49] * baylight (~tbayly@204.15.85.169) has joined #ceph
[0:51] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (Read error: Operation timed out)
[0:51] * sarob (~sarob@67.23.204.226) Quit (Quit: Leaving...)
[0:51] * sarob (~sarob@67.23.204.226) has joined #ceph
[0:52] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[0:56] * baylight (~tbayly@204.15.85.169) has left #ceph
[0:57] * dmsimard is now known as dmsimard_away
[0:58] * zack_dol_ (~textual@p843a5a.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:58] * bandrus (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[1:00] * LeaChim (~LeaChim@host86-162-79-167.range86-162.btcentralplus.com) Quit (Read error: Operation timed out)
[1:09] * bandrus (~Adium@216.57.72.205) has joined #ceph
[1:14] * sverrest (~sverrest@cm-84.208.166.184.getinternet.no) Quit (Ping timeout: 480 seconds)
[1:15] * dmick (~dmick@2607:f298:a:607:a0ca:6924:741d:640e) has left #ceph
[1:15] * tristanz (~tzajonc@64.7.84.114) Quit (Quit: tristanz)
[1:19] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[1:22] * jtaguinerd (~jtaguiner@203.215.116.153) has joined #ceph
[1:25] * tristanz (~tzajonc@64.7.84.114) has joined #ceph
[1:28] * oms101 (~oms101@p20030057EA010E00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:30] * Steki (~steki@cable-94-189-160-74.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:30] * Steki (~steki@cable-94-189-160-74.dynamic.sbb.rs) has joined #ceph
[1:37] * oms101 (~oms101@p20030057EA011100EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:40] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:41] * sarob (~sarob@67.23.204.226) Quit (Remote host closed the connection)
[1:42] * sarob (~sarob@67.23.204.226) has joined #ceph
[1:50] * sarob (~sarob@67.23.204.226) Quit (Ping timeout: 480 seconds)
[2:03] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[2:03] * tristanz (~tzajonc@64.7.84.114) Quit (Quit: tristanz)
[2:09] * dmsimard_away is now known as dmsimard
[2:10] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[2:13] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[2:14] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[2:14] * lupu (~lupu@86.107.101.214) has left #ceph
[2:17] * zack_dolby (~textual@em111-188-60-170.pool.e-mobile.ne.jp) has joined #ceph
[2:18] * adamcrume (~quassel@2601:9:6680:47:dd76:987a:3f0f:9203) Quit (Remote host closed the connection)
[2:20] * dmsimard is now known as dmsimard_away
[2:21] * zack_dol_ (~textual@e0109-49-132-45-244.uqwimax.jp) has joined #ceph
[2:23] * zack_do__ (~textual@e0109-49-132-45-244.uqwimax.jp) has joined #ceph
[2:23] * zack_dol_ (~textual@e0109-49-132-45-244.uqwimax.jp) Quit (Read error: Connection reset by peer)
[2:23] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[2:24] * tdasilva (~quassel@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) has joined #ceph
[2:25] * dmsimard_away is now known as dmsimard
[2:25] * zack_dolby (~textual@em111-188-60-170.pool.e-mobile.ne.jp) Quit (Ping timeout: 480 seconds)
[2:27] * jameseck (~james@cpc68609-stav15-2-0-cust582.17-3.cable.virginm.net) Quit (Quit: Leaving)
[2:28] * tristanz (~tzajonc@c-24-5-38-61.hsd1.ca.comcast.net) has joined #ceph
[2:28] * lupu (~lupu@86.107.101.214) has joined #ceph
[2:28] * alram (~alram@38.122.20.226) Quit (Read error: Operation timed out)
[2:28] * alram (~alram@38.122.20.226) has joined #ceph
[2:33] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:34] * Meths (~meths@2.25.191.94) Quit (Ping timeout: 480 seconds)
[2:37] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:39] * Meths (~meths@2.27.83.136) has joined #ceph
[2:42] * sarob (~sarob@67.23.204.226) has joined #ceph
[2:49] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:49] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:49] * dmsimard is now known as dmsimard_away
[2:50] * sarob (~sarob@67.23.204.226) Quit (Ping timeout: 480 seconds)
[2:59] * shang (~ShangWu@220.135.203.169) has joined #ceph
[3:04] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:04] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:06] * Pedras (~Adium@216.207.42.129) Quit (Read error: Operation timed out)
[3:07] * vbellur (~vijay@c-76-19-134-77.hsd1.ma.comcast.net) has joined #ceph
[3:10] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:34] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:35] * dmsimard_away is now known as dmsimard
[3:35] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[3:42] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[3:42] * dmsimard is now known as dmsimard_away
[3:42] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[3:44] * bkopilov (~bkopilov@213.57.17.87) Quit (Read error: Operation timed out)
[3:45] * shang (~ShangWu@220.135.203.169) Quit (Read error: Operation timed out)
[3:50] * haomaiwang (~haomaiwan@118.186.129.94) has joined #ceph
[3:52] * haomaiwang (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[3:53] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[3:53] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[3:55] * bkopilov (~bkopilov@213.57.16.15) has joined #ceph
[3:56] * tdasilva (~quassel@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) Quit (Remote host closed the connection)
[3:56] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[4:04] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[4:04] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[4:07] * joshwambua (~joshwambu@154.72.0.90) Quit (Remote host closed the connection)
[4:08] * haomaiwa_ (~haomaiwan@118.186.129.94) has joined #ceph
[4:08] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[4:09] * joshwambua (~joshwambu@154.72.0.90) has joined #ceph
[4:12] * haomaiwa_ (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[4:12] * haomaiwa_ (~haomaiwan@203.69.59.199) has joined #ceph
[4:13] * haomaiw__ (~haomaiwan@203.69.59.199) has joined #ceph
[4:13] * haomaiwa_ (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[4:15] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[4:24] * The_Bishop (~bishop@2001:470:50b6:0:1c99:69da:ae5d:cce0) Quit (Ping timeout: 480 seconds)
[4:28] * zhaochao (~zhaochao@211.157.180.33) has joined #ceph
[4:29] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[4:29] * haomaiwang (~haomaiwan@118.186.129.94) has joined #ceph
[4:35] * haomaiw__ (~haomaiwan@203.69.59.199) Quit (Read error: Operation timed out)
[4:35] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[4:35] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[4:44] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[4:46] * dlan (~dennis@116.228.88.131) Quit (Ping timeout: 480 seconds)
[4:46] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:49] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) has joined #ceph
[4:54] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) Quit (Quit: Leaving.)
[4:58] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[5:03] * jtaguinerd (~jtaguiner@203.215.116.153) Quit (Quit: Leaving.)
[5:04] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[5:10] * The_Bishop (~bishop@e180207178.adsl.alicedsl.de) has joined #ceph
[5:10] * yguang11 (~yguang11@2406:2000:ef96:e:c0dc:de32:fed4:82be) has joined #ceph
[5:11] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[5:11] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) has joined #ceph
[5:28] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[5:38] * Vacum (~vovo@88.130.220.152) has joined #ceph
[5:45] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:45] * glzhao_ (~glzhao@123.125.124.17) has joined #ceph
[5:45] * Vacum_ (~vovo@i59F7A762.versanet.de) Quit (Ping timeout: 480 seconds)
[5:45] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[5:46] * glzhao (~glzhao@123.125.124.17) Quit (Quit: leaving)
[5:48] * yguang11 (~yguang11@2406:2000:ef96:e:c0dc:de32:fed4:82be) Quit (Remote host closed the connection)
[5:48] * yguang11 (~yguang11@2406:2000:ef96:e:c0dc:de32:fed4:82be) has joined #ceph
[5:54] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[5:58] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) Quit (Quit: Leaving.)
[6:08] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[6:15] * dlan (~dennis@116.228.88.131) has joined #ceph
[6:16] * lucas1 (~Thunderbi@222.240.148.130) Quit (Ping timeout: 480 seconds)
[6:16] * dlan_ (~dennis@116.228.88.131) Quit (Ping timeout: 480 seconds)
[6:20] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:22] * Pedras1 (~Adium@50.185.218.255) has joined #ceph
[6:30] * haomaiwang (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[6:31] * haomaiwang (~haomaiwan@118.186.129.94) has joined #ceph
[6:40] * bandrus (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[6:43] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[6:53] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[6:58] * jtaguinerd (~jtaguiner@112.198.77.97) has joined #ceph
[7:00] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[7:00] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[7:00] * Pedras1 (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[7:00] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[7:02] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[7:05] * rdas (~rdas@121.244.87.115) has joined #ceph
[7:17] * michalefty (~micha@p20030071CE5753034894062EBF7EE935.dip0.t-ipconnect.de) has joined #ceph
[7:27] * jobewan (~jobewan@c-75-65-191-17.hsd1.la.comcast.net) has joined #ceph
[7:29] * jobewan (~jobewan@c-75-65-191-17.hsd1.la.comcast.net) Quit (Quit: Leaving)
[7:30] * haomaiwang (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[7:30] * haomaiwang (~haomaiwan@124.248.205.19) has joined #ceph
[7:32] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) has joined #ceph
[7:35] * thb (~me@2a02:2028:2bb:8a91:fcd8:7721:a19c:2bed) has joined #ceph
[7:54] * haomaiwa_ (~haomaiwan@118.186.129.94) has joined #ceph
[7:54] * jtaguinerd (~jtaguiner@112.198.77.97) Quit (Quit: Leaving.)
[7:58] * bkopilov (~bkopilov@213.57.16.15) Quit (Ping timeout: 480 seconds)
[8:00] * haomaiwang (~haomaiwan@124.248.205.19) Quit (Read error: Operation timed out)
[8:00] * haomaiwa_ (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@124.248.205.19) has joined #ceph
[8:02] * haomaiwa_ (~haomaiwan@118.186.129.94) has joined #ceph
[8:03] * jtaguinerd (~jtaguiner@112.198.77.97) has joined #ceph
[8:05] * jtaguinerd1 (~jtaguiner@203.115.183.18) has joined #ceph
[8:07] * jtaguinerd (~jtaguiner@112.198.77.97) Quit (Read error: Connection reset by peer)
[8:08] * haomaiwang (~haomaiwan@124.248.205.19) Quit (Read error: Operation timed out)
[8:14] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[8:22] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) Quit (Quit: Leaving.)
[8:32] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[8:37] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) Quit ()
[8:39] * aldavud (~aldavud@213.55.176.148) has joined #ceph
[8:41] * oomkiller (oomkiller@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[8:42] * fretb (~fretb@drip.frederik.pw) Quit (Ping timeout: 480 seconds)
[8:48] * thomnico (~thomnico@2a01:e35:8b41:120:d168:491d:a98e:805) has joined #ceph
[8:48] * Sysadmin88 (~IceChat77@94.4.20.0) Quit (Quit: It's a dud! It's a dud! It's a du...)
[8:49] * rendar (~I@host50-179-dynamic.56-79-r.retail.telecomitalia.it) has joined #ceph
[8:50] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[8:51] * hyperbaba (~hyperbaba@private.neobee.net) has joined #ceph
[8:51] * cok (~chk@2a02:2350:18:1012:9cbb:88f4:86ab:222d) has joined #ceph
[8:54] * thomnico (~thomnico@2a01:e35:8b41:120:d168:491d:a98e:805) Quit (Remote host closed the connection)
[8:58] * thomnico (~thomnico@2a01:e35:8b41:120:4087:1220:7475:7154) has joined #ceph
[9:05] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[9:08] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[9:10] * AbyssOne is now known as a1-away
[9:17] * andreask (~andreask@gw2.cgn3.hosteurope.de) has joined #ceph
[9:17] * ChanServ sets mode +v andreask
[9:17] * a1-away is now known as AbyssOne
[9:18] * fretb (~fretb@drip.frederik.pw) has joined #ceph
[9:20] * andreask (~andreask@gw2.cgn3.hosteurope.de) has left #ceph
[9:26] * cok (~chk@2a02:2350:18:1012:9cbb:88f4:86ab:222d) Quit (Quit: Leaving.)
[9:26] * cok (~chk@2a02:2350:18:1012:9cbb:88f4:86ab:222d) has joined #ceph
[9:28] * ikrstic (~ikrstic@93-86-38-56.dynamic.isp.telekom.rs) has joined #ceph
[9:30] * analbeard (~shw@support.memset.com) has joined #ceph
[9:33] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[9:33] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[9:34] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[9:36] * aldavud (~aldavud@213.55.176.148) Quit (Ping timeout: 480 seconds)
[9:37] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit (Quit: Leaving.)
[9:39] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:41] * andreask (~andreask@gw2.cgn3.hosteurope.de) has joined #ceph
[9:41] * ChanServ sets mode +v andreask
[9:43] * jtaguinerd1 (~jtaguiner@203.115.183.18) Quit (Quit: Leaving.)
[9:44] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[9:45] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[9:56] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[10:04] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:04] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[10:05] * jordanP (~jordan@185.23.92.11) has joined #ceph
[10:07] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[10:10] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[10:12] * zack_do__ (~textual@e0109-49-132-45-244.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[10:13] * cok (~chk@2a02:2350:18:1012:9cbb:88f4:86ab:222d) Quit (Quit: Leaving.)
[10:13] * circ-user-dh10h (~circuser-@mail2.hofburg.com) has joined #ceph
[10:16] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[10:16] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[10:18] <circ-user-dh10h> good morning, unfortunately I have a problem with "ceph-deploy mon create-initial". I always get the warning "unable to find /var/lib/ceph/bootstrap-mds/ceph.keyring", the same with bootstrap-osd and ceph.client.admin.keyring. I googled and found this bug report: http://tracker.ceph.com/issues/4924 That was version 0.67.3 and it has the status "solved". I am running the latest version, firefly. Could it be that the bug crept back in? Runnin
[10:25] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:26] * fretb (~fretb@drip.frederik.pw) Quit (Remote host closed the connection)
[10:40] * Xiol (~Xiol@shrike.daneelwell.eu) Quit (Ping timeout: 480 seconds)
[10:43] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:43] * thomnico (~thomnico@2a01:e35:8b41:120:4087:1220:7475:7154) Quit (Ping timeout: 480 seconds)
[10:45] * jtaguinerd (~jtaguiner@203.115.183.18) has joined #ceph
[10:52] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[10:53] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:53] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[10:53] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:59] * kapil (~ksharma@2620:113:80c0:5::2222) Quit (Read error: No route to host)
[10:59] * jtaguinerd1 (~jtaguiner@112.198.77.97) has joined #ceph
[11:02] * jtaguinerd2 (~jtaguiner@203.115.183.18) has joined #ceph
[11:02] * jtaguinerd1 (~jtaguiner@112.198.77.97) Quit (Read error: Connection reset by peer)
[11:03] * jtaguinerd (~jtaguiner@203.115.183.18) Quit (Read error: Connection reset by peer)
[11:04] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: Lost terminal)
[11:04] * fretb (~fretb@drip.frederik.pw) has joined #ceph
[11:04] * fretb_ (~fretb@drip.frederik.pw) has joined #ceph
[11:04] * fretb_ (~fretb@drip.frederik.pw) Quit ()
[11:16] * andreask (~andreask@gw2.cgn3.hosteurope.de) has left #ceph
[11:20] * zack_dolby (~textual@e0109-49-132-45-244.uqwimax.jp) has joined #ceph
[11:27] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (Quit: No Ping reply in 180 seconds.)
[11:28] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[11:31] * sverrest (~sverrest@cm-84.208.166.184.getinternet.no) has joined #ceph
[11:31] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit ()
[11:31] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[11:33] * haomaiwa_ (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[11:34] * haomaiwang (~haomaiwan@124.248.205.19) has joined #ceph
[11:34] * tristanz (~tzajonc@c-24-5-38-61.hsd1.ca.comcast.net) Quit (Quit: tristanz)
[11:35] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit ()
[11:35] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[11:39] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit ()
[11:40] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[11:43] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit ()
[11:44] * thb (~me@0001bd58.user.oftc.net) Quit (Remote host closed the connection)
[11:44] * thb (~me@0001bd58.user.oftc.net) has joined #ceph
[11:46] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[11:48] * stephan (~stephan@62.217.45.26) has joined #ceph
[11:51] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (Quit: No Ping reply in 180 seconds.)
[11:51] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[11:51] * vmx (~vmx@dslb-084-056-021-195.pools.arcor-ip.net) has joined #ceph
[11:55] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit ()
[11:55] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[11:58] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit ()
[11:58] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[12:01] * pat (~pat@staff.0x50.net) has joined #ceph
[12:07] <pat> hello, I'd like to set up a public-read bucket on my radosgw (version 0.80.2-1~bpo70+1), so I used the python boto lib to set the ACL to 'public-read' on my bucket; the bucket can be listed put I get access denied when I try to download an object. Any idea why?
[12:08] <pat> but*
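
A likely cause of pat's symptom: an S3 'public-read' ACL on a bucket only makes the bucket listing public, while objects carry their own ACLs, so anonymous downloads keep failing until each object is made public as well. A minimal boto sketch under that assumption; the endpoint, credentials, and bucket name are placeholders, not values from this log:

    import boto
    import boto.s3.connection

    # placeholders: substitute the real radosgw endpoint and keys
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',
        is_secure=False,  # assumption: plain-http endpoint
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    bucket = conn.get_bucket('mybucket')
    bucket.set_acl('public-read')      # makes the bucket *listing* public
    # bucket ACLs do not cascade: each object needs its own ACL
    # before anonymous GETs succeed
    for key in bucket.list():
        key.set_acl('public-read')
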
[12:10] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (Quit: No Ping reply in 180 seconds.)
[12:11] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[12:14] <ghartz> is there a cluster above 3PB?
[12:14] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:14] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit ()
[12:15] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[12:15] * lupu (~lupu@86.107.101.214) has joined #ceph
[12:18] <svg> ghartz: the largest I heard about is 5PB
[12:18] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit ()
[12:18] <ghartz> who ?
[12:18] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[12:19] <svg> I believe it was Cisco
[12:19] * haomaiwa_ (~haomaiwan@118.186.129.94) has joined #ceph
[12:19] <svg> someone told me ebay has some big stuff too, or plans to, but I have no further details
[12:20] <ghartz> ok :)
[12:21] <ghartz> i know about CERN and Dreamhost (~3PB)
[12:22] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[12:23] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (Quit: No Ping reply in 180 seconds.)
[12:24] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[12:27] * haomaiwang (~haomaiwan@124.248.205.19) Quit (Ping timeout: 480 seconds)
[12:27] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit ()
[12:27] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[12:30] * Xiol (~Xiol@shrike.daneelwell.eu) has joined #ceph
[12:31] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit ()
[12:34] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[12:37] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (Quit: No Ping reply in 180 seconds.)
[12:39] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[12:43] * zack_dolby (~textual@e0109-49-132-45-244.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[12:56] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Quit: No Ping reply in 180 seconds.)
[12:57] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[12:59] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:00] * flaxy (~afx@78.130.171.69) Quit (Quit: boom)
[13:00] <circ-user-dh10h> ceph-deploy mon create-initial -> mon-node1 monitor has reached quorum! The problem is the node isn't creating keys?!?
[13:03] <circ-user-dh10h> When i enter the command manually: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cephnode1.asok mon_status i get a "getcwd() failed: No such file or directory" error
[13:08] * nljmo_ (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (Quit: Textual IRC Client: www.textualapp.com)
[13:15] * thomnico (~thomnico@15.203.178.35) has joined #ceph
[13:15] * dmsimard_away is now known as dmsimard
[13:21] * theanalyst (~abhi@49.32.0.51) has joined #ceph
[13:25] * flaxy (~afx@78.130.171.69) has joined #ceph
[13:25] * jtaguinerd2 (~jtaguiner@203.115.183.18) Quit (Quit: Leaving.)
[13:33] * circ-user-dh10h_ (~circuser-@mail2.hofburg.com) has joined #ceph
[13:36] * garphy`aw is now known as garphy
[13:36] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[13:37] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[13:38] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[13:40] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[13:40] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[13:40] * circ-user-dh10h (~circuser-@mail2.hofburg.com) Quit (Ping timeout: 480 seconds)
[13:44] * thomnico (~thomnico@15.203.178.35) Quit (Quit: Ex-Chat)
[13:50] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[13:51] * vbellur (~vijay@c-76-19-134-77.hsd1.ma.comcast.net) Quit (Quit: Leaving.)
[13:57] * shang (~ShangWu@111-248-179-241.dynamic.hinet.net) has joined #ceph
[14:05] * thomnico (~thomnico@15.203.178.35) has joined #ceph
[14:10] * dlan (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[14:10] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: No route to host)
[14:10] * dlan (~dennis@116.228.88.131) has joined #ceph
[14:10] * hyperbaba (~hyperbaba@private.neobee.net) Quit (Ping timeout: 480 seconds)
[14:10] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[14:11] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[14:11] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:12] <alfredodeza> circ-user-dh10h_: `create-initial` does create the keys for you
[14:12] <alfredodeza> you should be able to see it in the output
[14:12] <alfredodeza> you could try to run `ceph-deploy gatherkeys`
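
For reference, `gatherkeys` is pointed at a monitor host and, on success, drops the admin and bootstrap keyrings into the current working directory. A minimal invocation, assuming the mon host is named cephnode1 as elsewhere in this log:

    ceph-deploy gatherkeys cephnode1
    # on success the working directory gains ceph.client.admin.keyring,
    # ceph.bootstrap-osd.keyring and ceph.bootstrap-mds.keyring
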
[14:13] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[14:14] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:26] * thb (~me@2a02:2028:2bb:8a90:fcd8:7721:a19c:2bed) has joined #ceph
[14:27] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[14:39] * shang (~ShangWu@111-248-179-241.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[14:41] <circ-user-dh10h_> same error
[14:41] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[14:41] <alfredodeza> circ-user-dh10h_: can you share some output?
[14:41] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[14:45] * sigsegv (~sigsegv@188.25.123.201) has joined #ceph
[14:52] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[14:52] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:55] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:58] <circ-user-dh10h_> @alfredodeza yes, in 30 minutes, thanks
[15:03] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[15:03] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[15:05] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[15:14] * tdasilva (~quassel@nat-pool-bos-u.redhat.com) has joined #ceph
[15:16] * yuriw1 (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[15:16] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:17] * analbeard (~shw@support.memset.com) has joined #ceph
[15:17] * zhaochao (~zhaochao@211.157.180.33) has left #ceph
[15:18] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[15:21] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:23] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[15:23] * sjm (~sjm@143.115.158.238) has joined #ceph
[15:26] * markbby (~Adium@168.94.245.4) has joined #ceph
[15:31] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:33] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[15:41] * michalefty (~micha@p20030071CE5753034894062EBF7EE935.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[15:48] <tnt_> wido_: it would be even better if from the 'ceph' command you could tell an osd to restart itself so you can do a cluster-wide sequential restart and wait for HEALTH_OK between each restart :p
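
In the meantime, such a rolling restart can be scripted by hand. A rough sketch, assuming upstart-managed OSDs (Ubuntu of this era) and that it runs on each OSD host in turn against that host's local OSDs:

    # derive local OSD ids from /var/lib/ceph/osd/ceph-<id> directories
    for id in $(ls /var/lib/ceph/osd | sed 's/.*-//'); do
        sudo restart ceph-osd id=$id          # upstart job syntax
        until ceph health | grep -q HEALTH_OK; do
            sleep 10                          # wait for recovery to settle
        done
    done
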
[15:52] * michalefty (~micha@p20030071CE5753934894062EBF7EE935.dip0.t-ipconnect.de) has joined #ceph
[15:56] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[16:01] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[16:01] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:02] <ganders> during the install of Calamari on Ubuntu 12.04 i got the following error message while trying to run the "dpkg-buildpackage" on calamari-clients script:
[16:02] <ganders> npm WARN package.json manage@0.0.0 No repository field.
[16:02] <ganders> then it failed at the phantomjs@1.9.7-14 install script
[16:02] <ganders> anybody know
[16:03] <ganders> about this kind of error?
[16:05] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[16:05] * analbeard (~shw@support.memset.com) has joined #ceph
[16:05] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:10] * drankis (~drankis__@89.111.13.198) Quit (Ping timeout: 480 seconds)
[16:10] * rdas (~rdas@110.227.47.126) has joined #ceph
[16:14] * michalefty (~micha@p20030071CE5753934894062EBF7EE935.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[16:15] <ganders> or anybody had a procedure to install calamari in ubuntu 12.04 or 14.04 besides the one that is provided on the ceph website?
[16:15] * b0e1 (~aledermue@juniper1.netways.de) has joined #ceph
[16:17] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:21] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[16:26] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Ping timeout: 480 seconds)
[16:28] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[16:30] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[16:31] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[16:32] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[16:35] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[16:38] <seapasulli> ganders: no, not so far. I have tried to install it for 20 minutes and hit the same error. Once I get my cluster back up I plan on giving it another go
[16:38] <seapasulli> Did you try to build it inside a virtual environment or are you installing it directly?
[16:38] <seapasulli> I tried installing it and assumed that is why I was having weird issues
[16:39] * rdas (~rdas@110.227.47.126) Quit (Quit: Leaving)
[16:39] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Ping timeout: 480 seconds)
[16:39] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[16:39] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[16:42] * ikrstic (~ikrstic@93-86-38-56.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[16:42] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[16:44] <ganders> i followed this procedure and installed it on a VM, the same one i used for ceph-deploy: wordpress.com/2014/07/11/build-package-and-install-calamari-on-debian-weezy/ and also tried to follow this other one: http://calamari.readthedocs.org/en/latest/development/building_packages.html
[16:44] <circ-user-dh10h_> so back in action:
[16:45] <circ-user-dh10h_> [cephnode1][DEBUG ] connected to host: cephnode1 [cephnode1][DEBUG ] detect platform information from remote host [cephnode1][DEBUG ] detect machine type [ceph_deploy.mon][INFO ] distro info: Ubuntu 14.04 trusty [cephnode1][DEBUG ] determining if provided host has same hostname in remote [cephnode1][DEBUG ] get remote short hostname [cephnode1][DEBUG ] deploying mon to cephnode1 [cephnode1][DEBUG ] get remote short hostname [cephnode1][DEBUG
[16:46] <circ-user-dh10h_> THIS CODE ABOVE IS FOR alfredodeza
[16:46] <circ-user-dh10h_> sry for confusion
[16:47] <alfredodeza> circ-user-dh10h_: can you paste the whole thing in a paste site (e.g. fpaste.org)
[16:47] <alfredodeza> ceph-deploy can be very verbose :)
[16:47] <circ-user-dh10h_> of course, one moment pls
[16:49] <circ-user-dh10h_> never used this before: http://ur1.ca/ht265
[16:49] * joef (~Adium@2620:79:0:131:a4e3:c6db:adb6:c4f5) Quit (Remote host closed the connection)
[16:52] <circ-user-dh10h_> there is one point: the status for monitor mon.cephnode1 shows "addr" 10.0.0.155. I am not sure if that ip address is correct for me, because the node has two interfaces: the interface on the 10.0.0.0 network is the public one, and the storage-only network address is 192.168.1.2. So should that be the public IP of the node or the IP from the storage network of the node?
[16:53] * theanalyst (~abhi@49.32.0.51) Quit (Ping timeout: 480 seconds)
[16:55] * thomnico (~thomnico@15.203.178.35) Quit (Quit: Ex-Chat)
[16:56] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:56] * Kioob`Taff (~plug-oliv@89-156-97-235.rev.numericable.fr) has joined #ceph
[16:59] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[16:59] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[17:00] * cok (~chk@2a02:2350:18:1012:6570:a3d:6c0d:723d) has joined #ceph
[17:01] <alfredodeza> circ-user-dh10h_: having two interfaces is going to get you in trouble if they are not defined correctly in your ceph.conf
[17:02] * thomnico (~thomnico@15.203.178.35) has joined #ceph
[17:02] <alfredodeza> the part where the logs complain about this: "Unable to find /etc/ceph/ceph.client.admin.keyring" means that the daemon has not been able to create one (part of the init start process)
[17:04] <alfredodeza> circ-user-dh10h_: you can read about how to properly define the networks in this section http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-daemons
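
The referenced docs boil down to declaring both networks in the [global] section of ceph.conf. A sketch matching the addresses mentioned in this log; the /24 prefix lengths are assumptions:

    [global]
    public network  = 10.0.0.0/24      # client-facing traffic; monitors bind here
    cluster network = 192.168.1.0/24   # OSD replication/heartbeat traffic
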
[17:05] <circ-user-dh10h_> i think my ceph.conf is configured correctly. I followed the instructions from here: http://ceph.com/docs/master/start/quick-ceph-deploy/ and here is my config: http://fpaste.org/119844/6041497/raw/
[17:05] * flaxy (~afx@78.130.171.69) Quit (Quit: WeeChat 0.4.2)
[17:06] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[17:06] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[17:07] <alfredodeza> circ-user-dh10h_: have you looked at the log files in /var/log/ceph/*
[17:07] <alfredodeza> I am running out of ideas :(
[17:08] * jsfrerot (~jsfrerot@216.98.59.192) has joined #ceph
[17:10] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:10] <jsfrerot> hi, anyone know if the i386 packages for ubuntu have been discontinued on http://ceph.com/debian/ ?
[17:10] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[17:10] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[17:11] <jsfrerot> as I can't find i386 packages for saucy or trusty...
[17:11] <jsfrerot> 13.xx/14.xx
[17:11] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[17:12] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:12] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:13] * allsystemsarego (~allsystem@5-12-240-175.residential.rdsnet.ro) has joined #ceph
[17:13] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[17:13] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[17:14] * sarob (~sarob@66.193.45.14) has joined #ceph
[17:14] <circ-user-dh10h_> alfredodeza can you confirm that the config file is correct? and the log files in /var/log/ceph/* seem to show no errors: http://ur1.ca/ht2d2
[17:15] * sarob (~sarob@66.193.45.14) Quit (Read error: Connection reset by peer)
[17:15] * sarob (~sarob@66.193.45.14) has joined #ceph
[17:15] <alfredodeza> circ-user-dh10h_: you need to raise the verbosity for the mon daemons and restart them
[17:15] <jsfrerot> humm, packages are compiled for i386, but only for precise (12.xx). So did someone forget to include i386 packages for the newer releases of ubuntu? or has it been decided not to compile them at all :S
[17:16] <alfredodeza> circ-user-dh10h_: in your [mon] section you would need to add this line:
[17:16] <alfredodeza> debug mon = 10
[17:16] <alfredodeza> and then restart the daemons while checking the logs
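
That is, something like the following stanza in ceph.conf:

    [mon]
    debug mon = 10

followed by a daemon restart, which on Ubuntu 14.04 of this era used upstart; the mon id cephnode1 is taken from this log:

    sudo restart ceph-mon id=cephnode1
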
[17:17] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[17:17] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[17:18] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Read error: Connection reset by peer)
[17:19] <alfredodeza> jsfrerot: it looks like that is an omission/mistake on our end
[17:19] <alfredodeza> for Saucy and Trusty that is
[17:19] <alfredodeza> precise does have i386
[17:19] <jsfrerot> ok, it's kind of a good news for me :)
[17:20] <jsfrerot> ceph_0.80.4-1precise_i386.deb
[17:20] <jsfrerot> yep, ok, cause i'm using one old 32-bit server... any idea when this could be fixed?
[17:21] <jsfrerot> and i'm using trusty !
[17:21] <alfredodeza> issue 8896
[17:21] <kraken> alfredodeza might be talking about http://tracker.ceph.com/issues/8896 [missing i386 packages for Trusty and Saucy]
[17:21] <jsfrerot> so my cluster is kind of in a bad state, as my 3rd mon can only start with the old version
[17:21] <alfredodeza> jsfrerot: created an issue in the tracker to fix it ^ ^
[17:22] <alfredodeza> I think there is a point release coming soon (in the next few days probably) so once this is configured, packages will get there
[17:22] <jsfrerot> great ! Thank you :)
[17:24] * dmlb2000 (~dmlb2000@71-95-124-105.dhcp.mdfd.or.charter.com) Quit (Remote host closed the connection)
[17:26] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[17:29] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:31] * b0e1 (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[17:33] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:37] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:38] * rweeks (~rweeks@pat.hitachigst.com) has joined #ceph
[17:40] <circ-user-dh10h_> is there a sample config file anywhere? I think my config file is a little bit wrong: http://ur1.ca/ht2k0
[17:44] * ircolle (~Adium@107.107.190.125) has joined #ceph
[17:45] * gillesMo (~gillesMo@00012912.user.oftc.net) has joined #ceph
[17:46] * rweeks (~rweeks@pat.hitachigst.com) Quit (Remote host closed the connection)
[17:46] * andreask (~andreask@212.117.76.229) has joined #ceph
[17:46] * ChanServ sets mode +v andreask
[17:49] * longguang_home (~chatzilla@98.126.23.10) has joined #ceph
[17:58] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:59] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:02] * rweeks (~rweeks@pat.hitachigst.com) has joined #ceph
[18:07] * ircolle (~Adium@107.107.190.125) Quit (Quit: Leaving.)
[18:10] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:11] * jlondon (~chatzilla@wsip-98-174-141-149.sd.sd.cox.net) has joined #ceph
[18:11] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving.)
[18:12] * andreask (~andreask@212.117.76.229) has left #ceph
[18:13] <jlondon> Not sure if this would be better to ask on the mailing list, but does anyone know, if you have a federated gateway configuration, whether 'is_master' is supposed to be set to false or true on the slave side? If I set it to false, I get an error from radosgw-agent that the master region can't be found.
[18:13] <kraken> ಠ_ಠ
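
For what it's worth, in the federated-gateway docs of this era the region map is defined once and loaded identically on both clusters, so 'is_master' describes the region itself and stays "true" for the master region even on the slave cluster; the slave side is distinguished by its zone, not by flipping that flag. A trimmed region-json sketch with placeholder names and endpoints:

    { "name": "us",
      "api_name": "us",
      "is_master": "true",
      "endpoints": ["http://rgw-master.example.com:80/"],
      "master_zone": "us-master",
      "zones": [
        { "name": "us-master", "endpoints": ["http://rgw-master.example.com:80/"] },
        { "name": "us-slave",  "endpoints": ["http://rgw-slave.example.com:80/"] }
      ],
      "default_placement": "default-placement" }
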
[18:15] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[18:16] * tdasilva (~quassel@nat-pool-bos-u.redhat.com) Quit (Read error: Operation timed out)
[18:17] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:17] * cok (~chk@2a02:2350:18:1012:6570:a3d:6c0d:723d) has left #ceph
[18:21] * The_Bishop (~bishop@e180207178.adsl.alicedsl.de) Quit (Read error: Operation timed out)
[18:22] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[18:24] * madkiss (~madkiss@chello084112124211.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[18:30] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[18:31] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) has joined #ceph
[18:34] * gillesMo (~gillesMo@00012912.user.oftc.net) Quit (Quit: Konversation terminated!)
[18:34] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:35] <longguang_home> what function responds to Objecter's op_submit?
[18:37] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[18:39] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[18:39] <longguang_home> how is the request from op_submit processed? thanks
[18:40] * ikrstic (~ikrstic@93-86-38-56.dynamic.isp.telekom.rs) has joined #ceph
[18:41] * bandrus1 (~Adium@216.57.72.205) has joined #ceph
[18:41] * bandrus1 (~Adium@216.57.72.205) Quit ()
[18:42] * ikrstic (~ikrstic@93-86-38-56.dynamic.isp.telekom.rs) Quit ()
[18:43] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[18:44] * bkopilov (~bkopilov@213.57.17.40) has joined #ceph
[18:44] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:45] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[18:48] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[18:48] <circ-user-dh10h_> thanks all for your help, wish a good evening
[18:48] * circ-user-dh10h_ (~circuser-@mail2.hofburg.com) Quit (Remote host closed the connection)
[18:50] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:50] * thomnico (~thomnico@15.203.178.35) Quit (Ping timeout: 480 seconds)
[18:50] * ikrstic (~ikrstic@93-86-38-56.dynamic.isp.telekom.rs) has joined #ceph
[18:52] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[18:52] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[18:52] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:52] * markbby (~Adium@168.94.245.4) has joined #ceph
[18:53] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[18:55] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[18:59] * Gamekiller77 (~Gamekille@128-107-239-234.cisco.com) has joined #ceph
[18:59] <Gamekiller77> anyone given RHEL 7 with BTRFS and ceph a go yet?
[19:04] * flaxy (~afx@78.130.171.69) has joined #ceph
[19:10] * Cube (~Cube@66-87-66-149.pools.spcsdns.net) has joined #ceph
[19:10] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:12] * tdasilva (~quassel@nat-pool-bos-u.redhat.com) has joined #ceph
[19:20] * adamcrume (~quassel@2601:9:6680:47:4d88:69d7:f4e1:888b) has joined #ceph
[19:21] * longguang_home (~chatzilla@98.126.23.10) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 29.0.1/20140506152807])
[19:22] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[19:22] * ircolle (~Adium@107.107.190.125) has joined #ceph
[19:23] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Read error: Operation timed out)
[19:23] * tristanz (~tzajonc@64.7.84.114) has joined #ceph
[19:27] * vmx (~vmx@dslb-084-056-021-195.pools.arcor-ip.net) Quit (Quit: Leaving)
[19:28] * jtang_ (~jtang@80.111.83.231) Quit (Ping timeout: 480 seconds)
[19:30] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[19:32] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[19:34] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[19:46] * Pedras (~Adium@64.191.206.83) has joined #ceph
[19:54] <jobewan> so, we are looking to deploy a nice-sized ceph cluster and were wondering what the best practice was for the GWs and Mons. I know the 3+ Mon rule and GWs according to need, but there isn't much info as to whether it's ok to combine them on the same nodes
[19:54] <jobewan> meaning, we have 5 machines doing tiered OSD for the storage
[19:54] <jobewan> and we are wanting to turn up 3 GW and 3 Mon on 3 separate HW controllers
[19:55] <jobewan> is that ok? or is there a reason to keep the GW on different nodes than the Mons ?
[19:55] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[19:56] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[19:59] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[19:59] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[20:05] <jobewan> is this the right place to ask this question?
[20:05] <janos_> it is
[20:05] <gregsfortytwo> rgw stuff usually does better on the mailing list though
[20:05] <gregsfortytwo> I can't think of any issues off-hand
[20:05] <jobewan> is it typical practice?
[20:05] <gregsfortytwo> as long as you've got the cpu for it
[20:06] <gregsfortytwo> no, all the deployments I'm aware of use blessed nodes for the monitors and separate blessed nodes for the gateways
[20:06] <jobewan> yea, plenty of cpu, if more cpu is needed, then we'd add another gw...
[20:06] <jobewan> ok, so best practice is 3 separate HW for Mons and X amount separate for GWs...
[20:07] <gregsfortytwo> I'd say "typical practice"
[20:07] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Operation timed out)
[20:07] <gregsfortytwo> like I said, I don't foresee any issues, but I don't know that anybody's done it either, so adjust according to how adventurous you are versus how resource-constrained ;)
[20:07] <jobewan> ok, second question... where is the best place to export the rbd via nfs or iscsi? the GW? or another separate node?
[20:08] <gregsfortytwo> uh, no idea on that one
[20:08] <jobewan> :) ok
[20:08] <jobewan> do the Gateways see the volumes as direct mount points?
[20:08] <gregsfortytwo> again I'd expect it to just be whatever the cpu/ram/network limitations are, but I've never even touched an iSCSI export process
[20:08] <jobewan> ex: /dev/rbd0 ?
[20:09] <gregsfortytwo> the gateways are just using the userspace librados library to talk directly to the servers; they don't have anything to do with rbd or volumes as such
[20:09] <jobewan> ok, so they are pure obj proxies pretty much right?
[20:09] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) Quit (Quit: Leaving.)
[20:10] <gregsfortytwo> yeah; they do some mangling and indexing (all stored in RADOS), but they're basically proxies
[20:11] <jobewan> our intent is to use this for both our openstack and vmware platform... so maybe we just need gateways dedicated to RBD (running pacemaker) and the standard OBJ GWs...
[20:11] <jobewan> might be something to play around with at least
[20:11] <gregsfortytwo> oh, yeah, the only relationship rbd and rgw have is that they use RADOS
[20:12] <jobewan> ok
[20:12] <jobewan> such a distributed platform... which is good, but daunting to plan
[20:12] <jobewan> thanks for the info
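
To make the "proxies over librados" point concrete: any client can talk to RADOS the same way the gateways do. A minimal python-rados sketch, assuming a local /etc/ceph/ceph.conf, a client.admin keyring, and a pool named 'data' (the pool name is an assumption):

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('data')   # open an I/O context on the pool
    ioctx.write_full('hello-object', b'written straight to RADOS')
    data = ioctx.read('hello-object')    # read the object back
    ioctx.close()
    cluster.shutdown()
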
[20:17] <blynch> I see there are a couple of 3PB (raw) clusters out there. Anyone know of a larger ceph cluster?
[20:19] <seapasulli> I don't know of any bigger installations currently. I would imagine Dreamhost and CERN are pretty massive
[20:20] <seapasulli> http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern
[20:20] * mbjorling (~SilverWol@130.226.133.114) Quit (Ping timeout: 480 seconds)
[20:30] <blynch> seapasulli: Thanks. I imagine Dreamhost is larger than 3PB now, but I haven't seen any public information about it.
[20:33] <seapasulli> we are still working out kinks but we need to scale it above 3PB here (5-8) so hopefully we will have something published soon
[20:34] <janos_> nice
[20:34] <janos_> i love reading how people deal with various sized installations
[20:38] <seapasulli> blynch: did you see the census? it's 'mysterious', but someone reported a 20PB cluster tied into openstack
[20:38] <seapasulli> http://ceph.com/community/results-from-the-ceph-census/
[20:41] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[20:42] * rendar (~I@host50-179-dynamic.56-79-r.retail.telecomitalia.it) Quit (Read error: Operation timed out)
[20:44] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[20:44] <Vacum> seapasulli: in case you would publish something, where could that be found? :)
[20:46] * rweeks (~rweeks@pat.hitachigst.com) Quit (Quit: Leaving)
[20:46] * flaxy (~afx@78.130.171.69) Quit (Quit: WeeChat 0.4.2)
[20:47] * flaxy (~afx@78.130.171.69) has joined #ceph
[20:47] * rendar (~I@host50-179-dynamic.56-79-r.retail.telecomitalia.it) has joined #ceph
[20:48] <blynch> seapasulli: ya.. I'm ignoring the 20PB report for now, since no one is claiming credit
[20:49] <seapasulli> I am the most incapable admin but if the company I work for keeps ceph it would be associated with us
[20:50] <blynch> good to know
[20:52] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[20:54] <seapasulli> If it all works out I promise I will post a link
[20:55] <blynch> seapasulli: thanks
[20:55] <Vacum> seapasulli: even if it does not, a post about "what went wrong" could be helpful?
[20:55] <Vacum> or, what issues you could resolve on the way
[20:56] <seapasulli> indeed
[20:56] <kraken> http://i.imgur.com/bQcbpki.gif
[20:56] <seapasulli> will do
[20:56] <Vacum> great, thanks! :)
[20:57] <seapasulli> Right now it's looking towards 'what went wrong', as half of my cluster is timing out polling the mon nodes (it seems): ceph -s doesn't exit, just hangs after it reports health
[20:57] <seapasulli> only at 1.5P of storage currently so eh.
[20:59] <Vacum> seapasulli: do you see 100% cpu usage of the leader mon process?
[21:01] <seapasulli> nope
[21:01] <kraken> http://i.imgur.com/foEHo.gif
[21:01] <seapasulli> cluster is healthy as I removed the downed nodes.
[21:02] <seapasulli> top - 14:02:07 up 1 day, 52 min, 4 users, load average: 0.66, 0.70, 0.76
[21:02] <Vacum> seapasulli: did you experience osds being wrongly marked down by others?
[21:03] <seapasulli> ah, sort of. Friday all of the osds were marked down, but on the node I saw all of the processes running. So I removed all of the osds on that node entirely from the cluster while I figure out why it's not working
[21:07] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) has joined #ceph
[21:08] <Vacum> seapasulli: we had massive problems with inter-osd traffic, leading to wrongly marked-down osds and resulting in high mon cpu usage. in the end it was cpu core0 being overwhelmed by irqs of the dual 10G ethernet interface...
[21:08] <Vacum> (on the osd nodes)
[21:09] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[21:11] * jsfrerot (~jsfrerot@216.98.59.192) has left #ceph
[21:12] <seapasulli> so far, while our cluster is healthy, the average load level is very reasonable.
[21:12] * ircolle (~Adium@107.107.190.125) Quit (Quit: Leaving.)
[21:13] <seapasulli> we are using E5-2650 here::
[21:13] <seapasulli> 12:35:01 PM 0 11038 0.49 0.88 0.81 2
[21:13] <seapasulli> 12:45:01 PM 0 10667 1.37 1.12 0.95 1
[21:13] <seapasulli> 12:55:02 PM 0 10832 2.51 2.68 1.72 0
[21:13] <seapasulli> 01:05:01 PM 0 11016 0.63 1.08 1.31 0
[21:14] <seapasulli> 01:15:01 PM 0 11031 0.76 0.80 1.03 1
[21:14] <seapasulli> 01:15:01 PM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15 blocked
[21:14] <seapasulli> 01:25:01 PM 0 11030 0.74 0.71 0.87 0
[21:14] <seapasulli> 01:35:01 PM 0 11031 0.94 1.02 0.96 0
[21:14] <seapasulli> 01:45:01 PM 2 11030 0.81 0.79 0.85 0
[21:14] <seapasulli> 01:55:01 PM 0 11035 0.91 0.83 0.83 0
[21:14] <seapasulli> 02:05:01 PM 13 11005 0.92 0.77 0.79 1
[21:14] <seapasulli> 02:15:01 PM 0 11010 0.39 0.60 0.71 0
[21:14] <seapasulli> 02:25:01 PM 0 11013 2.04 1.00 0.85 0
[21:14] <seapasulli> 02:35:01 PM 4 9356 8.71 5.78 2.88 1
[21:14] <seapasulli> 02:45:01 PM 2 10155 5.42 6.75 4.62 5
[21:14] <seapasulli> 02:55:02 PM 3 10467 13.29 7.43 5.23 9
[21:14] <seapasulli> sorry for the barf. It was meant to go in a pastebin
[21:14] <seapasulli> that's the load when osds started dropping on our primary monitor node.
[21:15] <Vacum> seapasulli: interesting. the mon node is pretty much single threaded. do you have the mon leveldb on a separate ssd?
[21:16] <Vacum> the mon process I meant
[21:16] <Vacum> btw, is that dumpling or firefly?
[21:16] <seapasulli> nope. I thought about it but wanted to scale this up a bit before trying that
[21:17] <seapasulli> firefly. Upgraded to 0.82-524-gbf04897 about 2 weeks ago
[21:18] <Vacum> mh. we are on dumpling. we tried firefly for a brief period and were unlucky. perhaps too early, plus at that time we hadn't yet found out about our IRQ issue
[21:18] <seapasulli> haha indeed
[21:19] <seapasulli> So far firefly has been good except for the issue where Swift cannot update acls on buckets. That's why we upgraded
[21:19] * Steki (~steki@cable-94-189-160-74.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[21:19] <seapasulli> http://tracker.ceph.com/issues/8428
[21:20] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[21:20] <Vacum> how many OSDs of which size per osd node do you have btw?
[21:20] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[21:21] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[21:22] <seapasulli> 34x4TB
[21:22] <seapasulli> you?
[21:22] <kraken> you are awesome (alfredodeza on 07/17/2014 04:04PM)
[21:22] <seapasulli> Vacum:
[21:22] <Vacum> 60*4TB
[21:22] <alfredodeza> hah
[21:22] <alfredodeza> kraken: forget you
[21:22] <kraken> will do
[21:22] <alfredodeza> you?
[21:22] <seapasulli> nice
[21:23] <alfredodeza> thanks kraken
[21:23] * kraken is overthrown by the vehement elation of goodness
[21:23] * The_Bishop (~bishop@e180207178.adsl.alicedsl.de) has joined #ceph
[21:23] * ircolle (~Adium@107.107.190.125) has joined #ceph
[21:24] <seapasulli> Vacum: I was under the impression that you needed to have 1 proc per osd so anything over say 40 disks would be too much
[21:24] <Vacum> seapasulli: its more like 1GHz Xeon per OSD
[21:25] <Vacum> seapasulli: but, yes, 60 drives is pretty much at the limit. we are currently in the process of tweaking those boxes as much as possible.
[21:27] <seapasulli> how did you derive that it was indeed an IRQ issue for your monitor nodes? Are they separated for you or are you running them on the 60-drive servers as well?
[21:27] <Vacum> seapasulli: no, the irq issue was on the osd nodes
[21:27] <Vacum> seapasulli: effectively dropping many tcp packets, until osd ping timeouts happened
[21:28] <seapasulli> yup
[21:28] <seapasulli> Ah that's a pretty big issue.
[21:29] <seapasulli> misunderstood though. Thought you mentioned that it was on the monitor nodes. This makes more sense but I am still very surprised.
[21:30] <Vacum> seapasulli: with 2*10Gb LACP you have really many interrupts happening :)
[21:31] <Vacum> plus the ones from the client interfaces
[21:31] * sigsegv (~sigsegv@188.25.123.201) has left #ceph
[21:34] <Vacum> once we had that solved we saw >800MB/s recovery into one single node that had lost a bunch of osds
[21:36] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) Quit (Quit: Leaving.)
[21:37] * aldavud (~aldavud@213.55.184.134) has joined #ceph
[21:38] * Nacer_ (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[21:38] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[21:41] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[21:44] * LeaChim (~LeaChim@host86-162-79-167.range86-162.btcentralplus.com) has joined #ceph
[21:45] <seapasulli> wow
[21:45] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[21:47] <seapasulli> I max out at around 833MB/s to a single node in my tests currently. We are only using a single 10G link for both public and private networks, though. The new hardware we spec'd out has two dual-port 10G cards per node, so I need to keep an eye out for this IRQ issue
[21:49] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[21:49] * Sysadmin88 (~IceChat77@94.4.20.0) has joined #ceph
[21:50] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[21:51] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:56] <Serbitar> Vacum: do you have the option of using jumbo-frame ethernet? it will drop the number of interrupts significantly
[21:57] <Vacum> Serbitar: we are thinking about it and will probably test it tomorrow.
[21:58] <Vacum> otoh, by distributing the IRQs over all cores of the relevant CPU the problem was solved too. but, right, reducing the total number of interrupts is of course a good thing as well :)
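For reference, switching an interface to jumbo frames is a one-liner, though every switch port and peer on the path must accept the larger MTU too (interface name assumed):

    ip link set dev eth0 mtu 9000
    ip link show dev eth0    # verify the new mtu took effect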
[21:58] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:58] <Serbitar> so you wibble things in the BIOS to get interrupts allocated sensibly?
[21:59] <Serbitar> or was this physically shuffling pci-e cards?
[22:01] <Vacum> no, it's done at the Linux layer. for reasons I don't understand, linux "ignores" the /proc/irq/<irqnumber>/smp_affinity bitmask of fff....ff and puts all 32 IRQs of the 32 txrx queues onto core0. if you change that bitmask (manually or with irqbalance) so each IRQ gets a single core (e.g. 0x00010000), it works
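A minimal sketch of that manual fix: walk the NIC's per-queue IRQs and write each one a single-core affinity mask. The eth0-TxRx-* naming follows Intel's driver convention and is an assumption; the sketch also assumes fewer than 32 cores, since wider masks use comma-separated 32-bit words. Run as root:

    #!/bin/bash
    # pin each TxRx queue IRQ of eth0 to its own core, round-robin from core 0
    core=0
    for irq in $(awk -F: '/eth0-TxRx/ {gsub(/ /,"",$1); print $1}' /proc/interrupts); do
        printf '%x' $((1 << core)) > "/proc/irq/$irq/smp_affinity"
        core=$((core + 1))
    done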
[22:01] <Serbitar> ah so, if a card is sending an interrupt it will interrupt ALL cores?
[22:01] <Vacum> no!
[22:01] <Vacum> :)
[22:02] <Serbitar> or it just interrupts one of them all the time
[22:02] <Serbitar> i see
[22:02] <brad_mssw> don't most distros run irqbalance by default?
[22:02] <Vacum> 10G cards (and 1G too nowadays) have "MSI-X". upon initialisation they create many IRQs, up to as many as you have cores
[22:03] <Vacum> well, there is a dispute going on in some distros about irqbalance. it seems it was pretty borked years ago, then got better, then had some issues with NUMA systems, and is now fine again (at least for us)
[22:03] <Serbitar> ah, is this a system where the card is aware of which core sent it a dma request, and it only interrupts that individual core once the send is complete, etc?
[22:03] <Vacum> so it wasn't installed on our boxes
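On a Debian/Ubuntu-family box of that era, putting irqbalance in place would look roughly like this (package name assumed; run as root):

    apt-get install irqbalance    # normally started as a daemon by the package
    irqbalance --oneshot          # or: distribute the IRQs once and exit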
[22:05] <Vacum> Serbitar: for eth cards msi-x works like this (our intel does it that way): calculate a hash over src-ip/src-port/dst-ip/dst-port (for tcp/udp), then take it modulo the number of txrx queues and put the packet into the resulting rx queue. so all packets of the same tcp connection end up on the same IRQ
[22:05] <Serbitar> makes sense, sounds like a nice optimisation for parallel nodes
[22:06] <Vacum> yes. if the incoming packets were just round-robin distributed over multiple irqs, the tcp stack's state machine would suffer very high cpu cache misses
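To make the same-flow-same-queue property concrete, here is a toy stand-in for the NIC's hash (real cards use a keyed Toeplitz hash, not XOR; the function name and values here are made up):

    # deterministically map a tcp/udp 4-tuple to one of n_queues rx queues
    flow_to_queue() {   # args: src_ip_int src_port dst_ip_int dst_port n_queues
        echo $(( ($1 ^ $2 ^ $3 ^ $4) % $5 ))
    }
    flow_to_queue 168430090 43211 168430110 80 32    # prints some queue q
    flow_to_queue 168430090 43211 168430110 80 32    # prints the same q again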
[22:09] <Vacum> on numa systems the irqs should then only be distributed over the cores of the cpu that the pci-e lane is connected to, otherwise half of the irqs will need to cross the cpu interconnect, which is also sub-optimal. hence the 32 queues of our 10G card are distributed over the cores of one cpu. the current irqbalance seems to be aware of this on Sandy Bridge-based architectures
[22:09] * Serbitar nods
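The NUMA locality Vacum describes can be checked from sysfs before pinning anything (paths assumed; a numa_node of -1 means the kernel has no NUMA info for that device):

    cat /sys/class/net/eth0/device/numa_node       # which node the NIC hangs off
    cat /sys/devices/system/node/node0/cpulist     # the cores belonging to node 0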
[22:09] <brad_mssw> Vacum: ok, so you're still using irqbalance and not numad instead
[22:09] <Serbitar> would it be possible with a second card to hook that up on the other cpu
[22:10] <Serbitar> and have your system balance that too?
[22:10] <Vacum> brad_mssw: currently yes. we are still going forward on that topic :)
[22:10] <Vacum> Serbitar: sure. our SAS-HBA card is connected to the other CPU. those irqs are balanced accordingly.
[22:11] <Serbitar> ah ok, so you have a crossflow of traffic
[22:11] <Serbitar> across the cpu interconnect
[22:11] <Serbitar> but you have specifically balanced those interrupts yourselves
[22:11] <Vacum> Serbitar: at least not for the irq handlers. but via the OSD processes: yes
[22:12] * Serbitar ponders how this is set up on his toy lustre cluster
[22:12] <Vacum> the OSD process reads the data from the tcp stack, processes it and writes it to disk. so it crosses the cpu interlink at least once, yes.
[22:13] <Serbitar> be interesting to see the performance cost of that
[22:13] <Serbitar> if it is indeed a cost
[22:13] <Vacum> there is no other way to do it. if you put all interrupts to the same cpu and you have osd processes on the other cpu, it will cross the interlink twice
[22:14] <Serbitar> oh of course
[22:14] <Vacum> but the interlink isn't the issue at that layer. ram access is slower IMO
[22:14] <Vacum> well, not IMO. pretty sure :)
[22:14] <Serbitar> hmm yeah ok
[22:14] <Serbitar> i was thinking of a more symmetrical system
[22:14] <Serbitar> where you have HBA and ethernet on both cpus
[22:15] <Serbitar> so the processes can be self-sufficient and not need the interconnect
[22:16] <Serbitar> separate card for each
[22:16] <Serbitar> cpu
[22:17] <Vacum> Serbitar: mh. you would need different IPs for your OSDs then?
[22:17] * Serbitar ponders
[22:17] <Serbitar> i suppose you may
[22:17] <Serbitar> br0 for cpu0 and br1 for cpu1
[22:17] <Serbitar> doesn't sound nice
[22:17] <Vacum> Serbitar: well, one IP per card. and I'm not sure that will work out
[22:18] <Serbitar> it probably could if you were very careful about how you set cpu task affinity
[22:18] <Serbitar> cgroups etc
[22:18] * jlondon (~chatzilla@wsip-98-174-141-149.sd.sd.cox.net) Quit (Ping timeout: 480 seconds)
[22:19] <Serbitar> of course, anyone coming to look at your system for the first time will have a brain meltdown
[22:25] * aldavud (~aldavud@213.55.184.134) Quit (Read error: Connection reset by peer)
[22:25] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[22:27] <Vacum> if i calculated that correctly, the QPI interlink between two E5-2620 is 14.4GB/s in each direction
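That figure checks out: the E5-2620's QPI links run at 7.2 GT/s and carry 2 bytes per transfer in each direction, so 7.2 GT/s x 2 B = 14.4 GB/s each way.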
[22:30] <Serbitar> and suitably small latency i suppose
[22:30] * Pauline_ (~middelink@77-59-139-224.dclient.hispeed.ch) Quit (Read error: Connection reset by peer)
[22:30] * Pauline_ (~middelink@bigbox.ch.polyware.nl) has joined #ceph
[22:30] <Serbitar> we are about to get 240 e5-2650s in 120 compute nodes
[22:31] <Serbitar> which will probably wipe the floor with our older kit
[22:31] <Vacum> compute nodes? openstack?
[22:31] <Serbitar> university HPC stuff
[22:31] <Vacum> ah :)
[22:32] <Serbitar> so various things between large parallel jobs with mpi/infiniband down to crappy single thread long running stuff
[22:32] <Serbitar> but no ceph storage :(
[22:33] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[22:33] * garphy is now known as garphy`aw
[22:34] <Vacum> Serbitar: with 120 nodes, you can put 4*4TB drives into each node and have a sweet ceph cluster with ~2PB raw storage :)
[22:35] * jlondon (~chatzilla@wsip-98-174-141-149.sd.sd.cox.net) has joined #ceph
[22:40] * andreask (~andreask@212.117.76.229) has joined #ceph
[22:40] * ChanServ sets mode +v andreask
[22:46] * analbeard (~shw@support.memset.com) has joined #ceph
[22:46] * ircolle (~Adium@107.107.190.125) Quit (Quit: Leaving.)
[22:47] * ircolle (~Adium@107.107.190.125) has joined #ceph
[22:48] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[22:48] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[22:50] * ircolle (~Adium@107.107.190.125) Quit (Read error: Connection reset by peer)
[22:50] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[22:51] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[22:59] * jlondon (~chatzilla@wsip-98-174-141-149.sd.sd.cox.net) Quit (Ping timeout: 480 seconds)
[22:59] * allsystemsarego (~allsystem@5-12-240-175.residential.rdsnet.ro) Quit (Quit: Leaving)
[23:04] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[23:05] * Pedras (~Adium@64.191.206.83) Quit (Quit: Leaving.)
[23:05] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[23:05] <seapasulli> anyone else seeing ceph.com taking 28+ seconds to respond?
[23:06] <Serbitar> Vacum: heh, that would be nice. the disks wouldn't fit though
[23:06] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Remote host closed the connection)
[23:06] <Serbitar> might get one in each
[23:07] <Serbitar> seapasulli: just loaded in 5s for me
[23:08] * skullone (~skullone@shell.skull-tech.com) has joined #ceph
[23:09] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[23:09] * markbby (~Adium@168.94.245.4) has joined #ceph
[23:16] * reed (~reed@66.193.45.14) has joined #ceph
[23:16] * reed (~reed@66.193.45.14) Quit (Read error: Connection reset by peer)
[23:19] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[23:27] * dmsimard is now known as dmsimard_away
[23:31] * JC1 (~JC@AMontpellier-651-1-372-123.w81-251.abo.wanadoo.fr) has joined #ceph
[23:36] * sjm (~sjm@143.115.158.238) has left #ceph
[23:37] * andreask (~andreask@212.117.76.229) has left #ceph
[23:38] * JC (~JC@AMontpellier-651-1-372-123.w81-251.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[23:44] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[23:44] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[23:50] * ikrstic (~ikrstic@93-86-38-56.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[23:53] * Nacer_ (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[23:54] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[23:56] <cookednoodles> seapasulli, I presume the main site is using cephfs
[23:56] * cookednoodles runs off
[23:58] * dpippenger (~Adium@66-192-9-78.static.twtelecom.net) has joined #ceph
[23:58] <dpippenger> any idea how I set this now? this seems to fail on firefly
[23:58] <dpippenger> ceph tell osd.0 injectargs '--recovery-max-active 10'
[23:58] <dpippenger> failed to parse arguments: --recovery-max-active,10
[23:59] <lurbs> Tried it as '--osd_recovery_max_active 1'?
[23:59] <lurbs> s/1/10/, even
[23:59] <kraken> lurbs meant to say: Tried it as '--osd_recovery_max_active 10/, even'?
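For the record, the form lurbs lands on, with the full underscored option name, is what Firefly's injectargs parser accepts:

    ceph tell osd.0 injectargs '--osd_recovery_max_active 10'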

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.