#ceph IRC Log

IRC Log for 2014-07-18

Timestamps are in GMT/BST.

[0:06] * hufman (~hufman@cpe-184-58-235-28.wi.res.rr.com) Quit (Quit: leaving)
[0:09] * vbellur (~vijay@122.178.202.219) Quit (Read error: Operation timed out)
[0:12] * sputnik1_ (~sputnik13@client64-63.sdsc.edu) has joined #ceph
[0:13] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:20] * sputnik1_ (~sputnik13@client64-63.sdsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:21] * ikrstic (~ikrstic@109-92-251-185.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[0:22] * vbellur (~vijay@122.178.200.196) has joined #ceph
[0:26] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Thanks for Everything! :-) see you later :-))
[0:30] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[0:33] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[0:35] * WhIteSidE (~christoph@contractor.accretive-networks.net) has joined #ceph
[0:36] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[0:36] <WhIteSidE> I just upgraded a cluster to Firefly and it went *mostly* smoothly, except for one problem
[0:36] <WhIteSidE> I have a host which had an rbd drive mounted
[0:37] <WhIteSidE> I updated ceph on this client and things seemed fine
[0:37] <WhIteSidE> Then I rebooted to apply a new kernel
[0:37] <WhIteSidE> Now ceph is just cycling through "mon0 <ip> connect error -101" for all of the mon IPs
[0:38] <WhIteSidE> But running ceph status on any other node, everything looks fine
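
(For what it's worth: -101 is -ENETUNREACH, "network is unreachable", as reported by the kernel rbd/libceph client. A hedged first check from the affected host, with 192.0.2.10 standing in for one of the mon IPs:)

    dmesg | grep libceph      # shows which mon addresses the kernel client is trying
    ip route get 192.0.2.10   # verify a route to the mon still exists after the reboot
    ping -c1 192.0.2.10       # basic reachability
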
[0:38] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:39] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[0:40] <ponyofdeath> hi, how can i find out what the chunk size is set to on my cluster
[0:45] * dmsimard is now known as dmsimard_away
[0:45] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[0:49] * kevinc (~kevinc__@client65-40.sdsc.edu) has joined #ceph
[1:03] * zack_dolby (~textual@p67f6b6.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:06] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[1:09] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Read error: Operation timed out)
[1:11] * tserong_ is now known as tserong
[1:12] * vmx (~vmx@dslb-084-056-051-165.pools.arcor-ip.net) Quit (Quit: Leaving)
[1:14] * _Tass4dar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Ping timeout: 480 seconds)
[1:19] * andreask (~andreask@zid-vpnn038.uibk.ac.at) has joined #ceph
[1:19] * ChanServ sets mode +v andreask
[1:24] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[1:29] <seapasulli> ponyofdeath: doesn't ceph use a 4MB chunk size?
[1:29] <seapasulli> https://www.usenix.org/legacy/events/fast09/wips_posters/molina_wip.pdf
[1:31] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[1:33] * oms101 (~oms101@p20030057EA02D100EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:35] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: Leaving)
[1:38] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) has joined #ceph
[1:42] * oms101 (~oms101@p20030057EA023500EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:44] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[1:51] * fsimonce (~simon@host50-69-dynamic.46-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:55] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Read error: Operation timed out)
[2:00] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[2:01] * flaxy (~afx@78.130.171.69) Quit (Quit: WeeChat 0.4.2)
[2:03] * zack_dolby (~textual@e0109-49-132-45-244.uqwimax.jp) has joined #ceph
[2:05] * Pedras (~Adium@216.207.42.140) Quit (Ping timeout: 480 seconds)
[2:05] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[2:08] * flaxy (~afx@78.130.171.69) has joined #ceph
[2:09] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[2:16] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[2:19] * rmoe (~quassel@12.164.168.117) Quit (Read error: Operation timed out)
[2:21] * Tamil2 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[2:22] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[2:23] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[2:28] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[2:28] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[2:31] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:33] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[2:34] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[2:34] <dmick> ponyofdeath: it depends. clients control the operation size
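
(A sketch of checking this per image -- RBD images default to 4MB objects; the pool/image names are examples:)

    rbd info rbd/myimage                        # "order 22 (4096 kB objects)" means 4MB chunks
    rbd create --size 1024 --order 23 rbd/big   # 8MB objects, if larger chunks are wanted
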
[2:42] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Ping timeout: 480 seconds)
[2:44] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) Quit (Ping timeout: 480 seconds)
[2:46] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[2:47] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[2:49] * tdasilva (~quassel@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) Quit (Remote host closed the connection)
[2:53] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[2:53] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[2:53] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[2:55] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:56] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Quit: Ex-Chat)
[2:59] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[3:00] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[3:05] * carif (~mcarifio@pool-173-76-155-34.bstnma.fios.verizon.net) has joined #ceph
[3:05] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[3:06] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[3:09] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:10] * LeaChim (~LeaChim@host86-162-79-167.range86-162.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:11] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[3:12] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[3:12] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[3:14] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[3:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[3:23] * andreask (~andreask@zid-vpnn038.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[3:31] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[3:34] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Ping timeout: 480 seconds)
[3:34] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[3:35] * sputnik1_ (~sputnik13@207.8.121.241) Quit ()
[3:35] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[3:38] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) Quit (Read error: Operation timed out)
[3:39] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Remote host closed the connection)
[3:39] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[3:42] * sigsegv (~sigsegv@188.25.123.201) has left #ceph
[3:50] * drankis (~drankis__@89.111.13.198) Quit (Ping timeout: 480 seconds)
[3:55] * flaxy (~afx@78.130.171.69) Quit (Quit: WeeChat 0.4.2)
[4:04] <longguang> does redhat have its own ceph repo?
[4:05] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[4:06] * haomaiwa_ (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[4:06] * haomaiwang (~haomaiwan@124.248.205.19) has joined #ceph
[4:09] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:09] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[4:09] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[4:10] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[4:10] * sputnik1_ (~sputnik13@207.8.121.241) Quit ()
[4:11] * jtaguinerd (~jtaguiner@112.198.79.117) has joined #ceph
[4:13] <dmick> not that I'm aware of
[4:16] * zhaochao (~zhaochao@123.151.134.234) has joined #ceph
[4:22] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Read error: Connection reset by peer)
[4:22] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[4:29] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[4:32] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[4:35] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[4:39] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[4:43] * vbellur (~vijay@122.178.200.196) Quit (Read error: Connection reset by peer)
[4:43] <jtaguinerd> hi, is there a way to slow down recovery and focus on serving IO to clients?
[4:45] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[4:45] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[4:47] * brunoleon (~quassel@ARennes-658-1-5-66.w83-199.abo.wanadoo.fr) Quit (Read error: Operation timed out)
[4:48] * brunoleon (~quassel@ARennes-658-1-172-188.w2-14.abo.wanadoo.fr) has joined #ceph
[4:52] * WhIteSidE (~christoph@contractor.accretive-networks.net) Quit (Remote host closed the connection)
[4:52] <gchristensen> jtaguinerd: this might be what you need - http://ceph.com/docs/master/rados/configuration/osd-config-ref/#recovery
[4:54] <jtaguinerd> hi gchristensen, yeah we've followed that.. current config is set to
[4:54] <jtaguinerd> [osd]
[4:54] <jtaguinerd> filestore merge threshold = 40
[4:54] <jtaguinerd> filestore split multiple = 8
[4:54] <jtaguinerd> osd op threads = 20
[4:54] <jtaguinerd> osd recovery max active = 1
[4:54] <jtaguinerd> osd max backfills = 4
[4:54] <jtaguinerd> osd recovery op priority = 10
[4:54] <jtaguinerd> osd client op priority = 63
[4:55] <jtaguinerd> but still it won't serve IO to clients
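
(One hedged way to apply throttles like these at runtime, without restarting the OSDs -- the values are illustrative, not a recommendation:)

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'
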
[4:56] <gchristensen> is your cluster degraded to the point of data loss? maybe its not serving clients because the dataset is broken
[4:56] <gchristensen> jtaguinerd: fair warning: I haven't deployed ceph in 9 months, I might not have great answers.
[4:56] * haomaiwa_ (~haomaiwan@118.186.129.94) has joined #ceph
[4:58] * vbellur (~vijay@122.167.245.60) has joined #ceph
[5:03] * haomaiwang (~haomaiwan@124.248.205.19) Quit (Ping timeout: 480 seconds)
[5:04] <jtaguinerd> gchristensen: cluster is in degraded mode because of reweight but we haven't lost any osd so i think the dataset should be complete
[5:04] * haomaiwa_ (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[5:05] * haomaiwang (~haomaiwan@124.248.205.19) has joined #ceph
[5:06] * AfC (~andrew@120.18.1.210) has joined #ceph
[5:08] * \ask (~ask@oz.develooper.com) Quit (Quit: Bye)
[5:09] * \ask (~ask@oz.develooper.com) has joined #ceph
[5:10] * carif (~mcarifio@pool-173-76-155-34.bstnma.fios.verizon.net) Quit (Quit: Ex-Chat)
[5:12] <gchristensen> jtaguinerd: what does ceph health say?
[5:12] * Pedras1 (~Adium@50.185.218.255) has joined #ceph
[5:12] <jtaguinerd> gchristensen: HEALTH_WARN 188 pgs backfill; 163 pgs backfill_toofull; 16 pgs backfilling; 52 pgs degraded; 2 pgs recovery_wait; 357 pgs stuck unclean; 2 requests are blocked > 32 sec; 1 osds have slow requests; recovery 3058920/36388635 degraded (8.406%); recovering 35 o/s, 138MB/s; 2 near full osd(s)
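
(The backfill_toofull states above mean the backfill targets are over osd_backfill_full_ratio, 0.85 by default, which stalls recovery on near-full OSDs. Freeing space or reweighting the full OSDs is the safer fix; raising the ratio slightly is a possible stopgap, sketched here:)

    ceph health detail                                            # identify the near-full OSDs
    ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.90'   # keep well below the full ratio
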
[5:13] * AfC (~andrew@120.18.1.210) Quit (Quit: Leaving.)
[5:14] * _Tass4dar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[5:14] * shang (~ShangWu@175.41.48.77) has joined #ceph
[5:15] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Ping timeout: 480 seconds)
[5:16] <gchristensen> jtaguinerd: I don't know, I wish I had answers :(
[5:17] * analbeard (~shw@support.memset.com) Quit (Remote host closed the connection)
[5:17] * brunoleon (~quassel@ARennes-658-1-172-188.w2-14.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[5:17] <jtaguinerd> gchristensen: thanks for trying to help :)
[5:17] <gchristensen> for sure
[5:18] * analbeard (~shw@support.memset.com) has joined #ceph
[5:18] <gchristensen> jtaguinerd: maybe the best solution is to lean in: allow your nodes to focus on recovery
[5:23] * _Tass4dar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[5:23] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[5:28] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[5:30] * brunoleon (~quassel@ARennes-658-1-8-190.w83-199.abo.wanadoo.fr) has joined #ceph
[5:30] * vbellur (~vijay@122.167.245.60) Quit (Ping timeout: 480 seconds)
[5:34] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[5:40] * AfC (~andrew@120.18.128.210) has joined #ceph
[5:41] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[5:43] * Vacum (~vovo@i59F7A703.versanet.de) has joined #ceph
[5:45] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[5:46] * _Tass4dar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[5:48] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Ping timeout: 480 seconds)
[5:50] * Vacum_ (~vovo@88.130.194.9) Quit (Ping timeout: 480 seconds)
[5:51] * joef1 (~Adium@2601:9:2a00:690:e0c0:a2a8:f87:24f0) has joined #ceph
[5:51] * _Tass4dar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[5:52] * joef (~Adium@2620:79:0:131:2878:68e5:c2ed:ab5) Quit (Remote host closed the connection)
[5:52] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[5:54] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[5:58] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[6:02] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[6:12] * haomaiwa_ (~haomaiwan@118.186.129.94) has joined #ceph
[6:17] * AfC (~andrew@120.18.128.210) Quit (Quit: Leaving.)
[6:19] * haomaiwang (~haomaiwan@124.248.205.19) Quit (Ping timeout: 480 seconds)
[6:22] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:25] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[6:25] * bandrus (~oddo@216.57.72.205) Quit (Read error: Connection reset by peer)
[6:26] * jtaguinerd1 (~jtaguiner@203.215.116.25) has joined #ceph
[6:26] * jtaguinerd (~jtaguiner@112.198.79.117) Quit (Read error: Connection reset by peer)
[6:26] * joef1 (~Adium@2601:9:2a00:690:e0c0:a2a8:f87:24f0) Quit (Ping timeout: 480 seconds)
[6:27] * dlan (~dennis@116.228.88.131) Quit (Ping timeout: 480 seconds)
[6:27] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[6:28] * theanalyst (~abhi@49.32.3.72) has joined #ceph
[6:29] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[6:34] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Ping timeout: 480 seconds)
[6:39] * Pedras1 (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[6:39] * Pedras1 (~Adium@216.207.42.140) has joined #ceph
[6:40] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[6:40] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[6:40] * lucas1 (~Thunderbi@218.76.25.66) Quit (Ping timeout: 480 seconds)
[6:41] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[7:03] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[7:05] * Pedras1 (~Adium@216.207.42.140) Quit (Quit: Leaving.)
[7:12] * rdas (~rdas@121.244.87.115) has joined #ceph
[7:12] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[7:14] * michalefty (~micha@p20030071CE058660413F0CC8E6BF19EC.dip0.t-ipconnect.de) has joined #ceph
[7:16] * haomaiwa_ (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[7:16] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[7:22] * haomaiwa_ (~haomaiwan@118.186.129.94) has joined #ceph
[7:28] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[7:34] * michalefty (~micha@p20030071CE058660413F0CC8E6BF19EC.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[7:34] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Read error: Connection reset by peer)
[7:36] * manohar (~manohar@103.252.140.101) has joined #ceph
[7:39] * michalefty (~micha@p20030071CE0586601A3DA2FFFE07E324.dip0.t-ipconnect.de) has joined #ceph
[7:39] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[7:40] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:45] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[7:46] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[7:54] * Lulz (~Lulz@65.111.180.201) Quit (Quit: Lost terminal)
[7:54] * oomkiller (oomkiller@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[7:55] * oomkiller (oomkiller@d.clients.kiwiirc.com) has joined #ceph
[7:56] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:57] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[8:02] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[8:03] * drankis (~drankis__@89.111.13.198) has joined #ceph
[8:09] * Cube1 (~Cube@66-87-130-255.pools.spcsdns.net) has joined #ceph
[8:09] * Cube (~Cube@66-87-65-86.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[8:09] * davidzlap1 (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Quit: Leaving.)
[8:10] * zack_dolby (~textual@e0109-49-132-45-244.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:11] * davidzlap (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[8:18] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[8:19] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[8:22] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[8:22] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[8:23] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[8:24] * zack_dolby (~textual@e0109-49-132-45-244.uqwimax.jp) has joined #ceph
[8:27] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:27] * lcavassa (~lcavassa@94.165.132.190) has joined #ceph
[8:33] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Ping timeout: 480 seconds)
[8:38] <theanalyst> how do I make radosgw log more debug info ... trying to debug swift keystone auth
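
(A sketch of turning up rgw logging in ceph.conf -- the section name is an assumption, match it to your gateway's name, then restart radosgw:)

    [client.radosgw.gateway]
        debug rgw = 20     # verbose rgw logging, including auth decisions
        debug ms = 1       # messenger traffic, useful for keystone round-trips
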
[8:41] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[8:42] * rendar (~I@host134-177-dynamic.251-95-r.retail.telecomitalia.it) has joined #ceph
[8:47] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:48] * hyperbaba__ (~hyperbaba@private.neobee.net) has joined #ceph
[8:48] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) has joined #ceph
[8:52] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[8:57] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[8:59] * fireD (~fireD@93-142-197-254.adsl.net.t-com.hr) has joined #ceph
[9:00] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Ping timeout: 480 seconds)
[9:01] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[9:04] * hybrid512 (~walid@195.200.167.70) Quit ()
[9:05] * zack_dolby (~textual@e0109-49-132-45-244.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[9:10] * davidzlap (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Quit: Leaving.)
[9:11] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[9:13] * zack_dolby (~textual@49.132.45.244) has joined #ceph
[9:14] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[9:14] * ChanServ sets mode +v andreask
[9:16] * madkiss (~madkiss@213162068089.public.t-mobile.at) has joined #ceph
[9:24] * Cube (~Cube@66-87-130-255.pools.spcsdns.net) has joined #ceph
[9:24] * Cube1 (~Cube@66-87-130-255.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[9:28] * Cube (~Cube@66-87-130-255.pools.spcsdns.net) Quit ()
[9:29] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[9:30] * AfC (~andrew@120.23.128.135) has joined #ceph
[9:30] * AfC (~andrew@120.23.128.135) Quit (Read error: Connection reset by peer)
[9:31] * aldavud (~aldavud@213.55.176.164) has joined #ceph
[9:31] * AfC (~andrew@120.23.128.135) has joined #ceph
[9:35] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[9:35] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[9:36] <ibuclaw> Hi, I'm setting up a testing platform, 3 mon VMs distributed around the place, and 1 OSD host with 12 disks.
[9:37] <ibuclaw> would I need to tweak the crushmap for consideration of osd spread, rather than host spread?
[9:38] <oomkiller> ibuclaw yes, the default is to use host as the bucket, with replication size 3
[9:38] <ibuclaw> I'm looking at: step chooseleaf firstn 0 type host
[9:38] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[9:41] <ibuclaw> oomkiller, would that be setting replicated_ruleset
[9:41] <ibuclaw> or creating a separate bucket for each osd?
[9:41] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[9:42] <andreask> you want more than one copy in your test platform?
[9:43] <oomkiller> ibuclaw I haven't done that before, but according to the documentation you just need to change type host to type osd
[9:43] <ibuclaw> ideally at least 2 spread across the two disk bays
[9:43] <oomkiller> you can manipulate the replication count per pool with ceph osd pool set
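
(A sketch of both steps: decompile the crushmap, edit the rule, recompile, and set the pool size -- pool name "rbd" is an example:)

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # in crush.txt: change "step chooseleaf firstn 0 type host" to "... type osd"
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new
    ceph osd pool set rbd size 2
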
[9:44] * stephan (~stephan@62.217.45.26) has joined #ceph
[9:44] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[9:44] <ibuclaw> data isn't so important, but it's really just proof of concept
[9:45] <ibuclaw> (production will have 3 similarly spec'd osd hosts instead of one)
[9:45] <oomkiller> to distribute between your two disk bays, you'd best define 2 hosts in the crushmap, host1bay1 and host1bay2, put the osds under one host or the other as appropriate, and stay with type host
[9:46] <oomkiller> and set replication count to 2
[9:46] * thomnico (~thomnico@2a01:e35:8b41:120:2d4c:ab7e:22be:56e4) has joined #ceph
[9:47] <oomkiller> I tried that before to "simulate" some being ssds and some being hdds and to get 2 pools, one only with the ssds and one only with the hdds
[9:47] <oomkiller> worked flawlessly :)
[9:47] <ibuclaw> oomkiller, cool
[9:47] * zack_dol_ (~textual@49.132.45.244) has joined #ceph
[9:48] * ibuclaw has 2 60GB ssds for journals
[9:50] * zack_dolby (~textual@49.132.45.244) Quit (Ping timeout: 480 seconds)
[9:52] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) Quit (Ping timeout: 480 seconds)
[9:53] * _Tass4dar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[9:54] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Ping timeout: 480 seconds)
[9:57] <oomkiller> thats another option for caching
[9:59] * zack_dol_ (~textual@49.132.45.244) Quit (Ping timeout: 480 seconds)
[10:02] * zidarsk8 (~zidar@46.54.226.50) has joined #ceph
[10:05] * zack_dolby (~textual@e0109-49-132-45-244.uqwimax.jp) has joined #ceph
[10:05] * brunoleon (~quassel@ARennes-658-1-8-190.w83-199.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[10:08] * steki (~steki@91.195.39.5) has joined #ceph
[10:09] * rendar (~I@host134-177-dynamic.251-95-r.retail.telecomitalia.it) Quit (Read error: Operation timed out)
[10:09] * AfC (~andrew@120.23.128.135) Quit (Ping timeout: 480 seconds)
[10:11] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) Quit (Remote host closed the connection)
[10:12] * ssejourne (~ssejourne@37.187.216.206) has joined #ceph
[10:16] * Nats_ (~Nats@2001:8000:200c:0:e480:7cc3:38bd:9b7b) has joined #ceph
[10:17] * zidarsk8 (~zidar@46.54.226.50) Quit (Ping timeout: 480 seconds)
[10:21] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:23] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[10:23] * Nats (~Nats@2001:8000:200c:0:e480:7cc3:38bd:9b7b) Quit (Ping timeout: 480 seconds)
[10:23] * andreask (~andreask@zid-vpnn025.uibk.ac.at) has joined #ceph
[10:23] * ChanServ sets mode +v andreask
[10:25] * cok (~chk@2a02:2350:1:1202:2037:f8d:bab8:c594) has joined #ceph
[10:27] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) has joined #ceph
[10:34] * thomnico (~thomnico@2a01:e35:8b41:120:2d4c:ab7e:22be:56e4) Quit (Quit: Ex-Chat)
[10:34] * thomnico (~thomnico@2a01:e35:8b41:120:2d4c:ab7e:22be:56e4) has joined #ceph
[10:36] * aldavud (~aldavud@213.55.176.164) Quit (Ping timeout: 480 seconds)
[10:41] * _Tass4dar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Remote host closed the connection)
[10:45] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[10:47] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:48] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:48] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[10:54] * cok (~chk@2a02:2350:1:1202:2037:f8d:bab8:c594) Quit (Quit: Leaving.)
[10:55] * cok (~chk@technet1.gc.dk.net.one.com) has joined #ceph
[10:56] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:58] * hyperbaba__ (~hyperbaba@private.neobee.net) Quit ()
[10:59] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:59] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit ()
[10:59] * lupu (~lupu@86.107.101.214) has joined #ceph
[11:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:03] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) Quit (Ping timeout: 480 seconds)
[11:04] * hyperbaba (~hyperbaba@private.neobee.net) has joined #ceph
[11:06] * _Tassadar (~tassadar@D57DEE42.static.ziggozakelijk.nl) has joined #ceph
[11:07] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) has left #ceph
[11:08] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) has joined #ceph
[11:15] * haomaiwa_ (~haomaiwan@118.186.129.94) Quit (Remote host closed the connection)
[11:15] * haomaiwang (~haomaiwan@124.248.205.19) has joined #ceph
[11:22] * r0r_taga (~nick@greenback.pod4.org) Quit (Ping timeout: 480 seconds)
[11:22] * r0r_tag (~nick@greenback.pod4.org) Quit (Ping timeout: 480 seconds)
[11:24] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[11:31] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[11:31] * RameshN (~rnachimu@101.222.253.17) has joined #ceph
[11:32] <RameshN> I am trying to build calamari for F19.
[11:32] <RameshN> Is there any document or wiki available to help?
[11:38] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[11:41] * zhaochao_ (~zhaochao@211.157.180.42) has joined #ceph
[11:43] <longguang> hi
[11:44] <longguang> what does this mean 'peer_addr.addr = socket_addr.addr'? pipe.cc accept()
[11:44] * cok (~chk@technet1.gc.dk.net.one.com) Quit (Read error: Operation timed out)
[11:47] * zhaochao (~zhaochao@123.151.134.234) Quit (Ping timeout: 480 seconds)
[11:47] * zhaochao_ is now known as zhaochao
[11:48] * garphy`aw is now known as garphy
[11:51] <oomkiller> is anything known about whether xen/vmware support rbd, or whether they plan to?
[11:51] * zhaochao (~zhaochao@211.157.180.42) Quit (Remote host closed the connection)
[11:52] <tnt_> oomkiller: xen does.
[11:53] <oomkiller> tnt_ the open source version and the citrix version? or only one of them?
[11:53] <tnt_> oomkiller: opensource version does. No idea about the citrix version, never used it ...
[11:54] <tnt_> oomkiller: basically they now support using qemu as a disk backend, even for PV domains, so you can just have a recent enough qemu with rbd support and it will work.
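
(An illustrative qemu invocation using the rbd block driver -- the pool/image/user names are examples:)

    qemu-system-x86_64 -drive file=rbd:rbd/myimage:id=admin,format=raw,if=virtio ...
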
[11:54] * BlackPanx (~kvirc@cpe-31-15-133-178.cable.telemach.net) Quit (Read error: No route to host)
[11:55] <oomkiller> ok thanks
[11:55] <tnt_> There is also a tapdisk driver for RBD that I developed, but Xen is moving away from tapdisk so there is probably no future in that.
[11:55] <tnt_> (but if like me you still have some old Xen that needs RBD support, it's a way to do it).
[11:56] * madkiss (~madkiss@213162068089.public.t-mobile.at) Quit (Ping timeout: 480 seconds)
[11:57] * zhaochao (~zhaochao@123.151.134.234) has joined #ceph
[11:58] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[12:00] <longguang> who knows the source code. :)
[12:02] <joao> longguang, I'm not very familiar with that code, but I would think that someone thought it would be best to have the address from the socket being assigned to the address of the peer
[12:02] <joao> which makes sense
[12:03] <joao> especially if it's on 'accept()'
[12:12] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:14] * haomaiwa_ (~haomaiwan@118.186.129.94) has joined #ceph
[12:15] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Read error: Operation timed out)
[12:18] <longguang> yes. but
[12:18] <longguang> port = peer_addr.get_port();
[12:18] <longguang> peer_addr.addr = socket_addr.addr;
[12:18] <longguang> peer_addr.set_port(port);
[12:18] <longguang> here socket_addr is peer addr, peer_addr is local addr. what is local's port + peer's addr?
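
(An annotated reading of that fragment -- the comments are an interpretation, not from the source:)

    port = peer_addr.get_port();        // keep the port the peer advertised
    peer_addr.addr = socket_addr.addr;  // ...but trust the IP actually seen on the
                                        // accepted socket: a peer bound to 0.0.0.0
                                        // advertises a usable port but not a usable address
    peer_addr.set_port(port);
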
[12:21] * haomaiwang (~haomaiwan@124.248.205.19) Quit (Ping timeout: 480 seconds)
[12:22] * dlan_ (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[12:22] * dlan (~dennis@116.228.88.131) has joined #ceph
[12:24] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[12:25] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[12:27] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) Quit (Quit: Leaving.)
[12:38] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:46] * vmx (~vmx@dslb-084-056-027-038.pools.arcor-ip.net) has joined #ceph
[12:49] * vbellur (~vijay@121.244.87.124) has joined #ceph
[12:50] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[12:55] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[12:56] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[12:56] * andreask (~andreask@zid-vpnn025.uibk.ac.at) Quit (Read error: Connection reset by peer)
[12:57] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[13:01] * zhaochao (~zhaochao@123.151.134.234) has left #ceph
[13:05] * zack_dolby (~textual@e0109-49-132-45-244.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[13:06] * cok (~chk@2a02:2350:18:1012:acb4:a269:6d24:24b1) has joined #ceph
[13:08] * flaxy (~afx@78.130.171.69) has joined #ceph
[13:12] * ikrstic (~ikrstic@109-92-251-185.dynamic.isp.telekom.rs) has joined #ceph
[13:16] * aldavud (~aldavud@wlandevil.nine.ch) has joined #ceph
[13:17] * bkopilov (~bkopilov@213.57.17.241) Quit (Remote host closed the connection)
[13:18] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[13:19] * theanalyst (~abhi@49.32.3.72) Quit (Ping timeout: 480 seconds)
[13:21] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[13:29] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[13:33] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:39] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[13:40] * lcavassa (~lcavassa@94.165.132.190) Quit (Quit: Leaving)
[13:43] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[14:00] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[14:00] * ChanServ sets mode +v andreask
[14:03] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[14:04] * jordanP (~jordan@185.23.92.11) has joined #ceph
[14:08] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[14:08] * vbellur (~vijay@121.244.87.124) Quit (Quit: Leaving.)
[14:08] * andreask (~andreask@zid-vpnn025.uibk.ac.at) has joined #ceph
[14:08] * ChanServ sets mode +v andreask
[14:22] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[14:23] <RameshN> I get the following error "Couldn't open file /home/jenkins-build/rhel64.box" when building calamari. Is there anything i am missing from the document http://calamari.readthedocs.org/en/latest/development/building_packages.html ?
[14:30] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[14:31] * aldavud (~aldavud@wlandevil.nine.ch) Quit (Ping timeout: 480 seconds)
[14:32] * lupu (~lupu@86.107.101.214) Quit (Quit: Leaving.)
[14:32] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:35] * zidarsk8 (~zidar@46.54.226.50) has joined #ceph
[14:37] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[14:37] * zack_dolby (~textual@p67f6b6.tokynt01.ap.so-net.ne.jp) has joined #ceph
[14:48] * zidarsk8 (~zidar@46.54.226.50) has left #ceph
[14:49] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[14:52] * zack_dolby (~textual@p67f6b6.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[14:54] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[14:56] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[14:57] * zack_dolby (~textual@p67f6b6.tokynt01.ap.so-net.ne.jp) has joined #ceph
[14:59] * michalefty (~micha@p20030071CE0586601A3DA2FFFE07E324.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[15:02] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Remote host closed the connection)
[15:04] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[15:06] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Remote host closed the connection)
[15:08] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[15:13] * andreask (~andreask@zid-vpnn025.uibk.ac.at) has left #ceph
[15:14] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[15:14] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[15:21] * johnfoo_ (~johnfoo@ip-133.net-89-3-152.rev.numericable.fr) has joined #ceph
[15:22] * johnfoo (~johnfoo@bro29-2-88-164-250-201.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[15:29] * hyperbaba (~hyperbaba@private.neobee.net) Quit (Ping timeout: 480 seconds)
[15:29] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:32] * tdasilva (~quassel@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) has joined #ceph
[15:34] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[15:34] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:37] * RameshN (~rnachimu@101.222.253.17) Quit (Read error: Operation timed out)
[15:37] * bandrus (~oddo@216.57.72.205) has joined #ceph
[15:37] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[15:38] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:52] * bandrus1 (~Adium@216.57.72.205) has joined #ceph
[15:58] * markbby (~Adium@168.94.245.4) has joined #ceph
[15:59] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[16:03] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Quit: Leaving.)
[16:05] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:07] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[16:14] * drankis (~drankis__@89.111.13.198) Quit (Remote host closed the connection)
[16:21] * dmsimard_away is now known as dmsimard
[16:21] * PVi1 (~pvi@pvi.vnet.sk) has joined #ceph
[16:23] <PVi1> Hi all! What will the SSD write performance be if we put the journal on a raw SSD partition, where trim probably cannot be used? We are using Intel DC s3700 100GB drives
[16:23] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Read error: Operation timed out)
[16:26] <oomkiller> why should trim not be used, when using the ssd as journal?
[16:27] * bandrus1 (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[16:28] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[16:30] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[16:33] <iggy> it'll work fine, it's just hard to send TRIMs to a partition without a FS on it
[16:33] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[16:35] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[16:39] * markbby (~Adium@168.94.245.2) has joined #ceph
[16:41] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[16:41] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[16:42] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[16:42] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[16:43] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[16:43] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[16:43] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[16:46] <burley> asked this question yesterday but hoping I'll have a new audience now...
[16:46] <burley> I have created a 3 OSD-daemon ceph cluster (3x Dell R720xd with 12x 3TB HDDs; journaling to those same drives, 3x replication) for PoC with 1 mon node. Sequential read from the cluster is at or near saturation of the network (9600Mbps) - where I'd expect. But sequential writes are lagging around 2100Mbps. I had expected those to run just shy of 1/6th the sum of the sequential write on all drives but it's closer to 1/24th. Any suggestions? Are
[16:46] <burley> my expectations out of line?
[16:52] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[16:53] <darkfader> burley: most people have much worse numbers, so, two things
[16:54] <darkfader> a) it might be possible to tune it but probably the shared journals are killing it
[16:54] <darkfader> b) most people would just be happy here and now with that
[16:54] <darkfader> especially since you get really nice read performance
[16:54] <burley> you think the shared journals are giving me a hit since its not actually sequential on the drives?
[16:56] * alram (~alram@38.122.20.226) has joined #ceph
[16:57] <PVi1> iggy: how can I configure Ubuntu to send trim to a raw partition? I have only found the blkdiscard and hdparm commands, but they need an LBA address range to be trimmed.
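
(blkdiscard can in fact discard a whole raw partition when given no range -- a sketch, with an example device name; this destroys the partition's contents, so only run it before (re)creating the journal:)

    sudo blkdiscard /dev/sdb1
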
[16:57] <darkfader> yeah at full load i would expect a lot of disk trashing
[16:57] <darkfader> if you look at the hdd leds it should show
[16:58] <darkfader> erratic blinkblinky instead of something like stripe writes
[16:58] * alram_ (~alram@38.122.20.226) has joined #ceph
[16:58] <darkfader> alternatively if you get very high %busy on the disks in sar, normally it is maybe 30% more for write than read at the same data rate
[16:59] * alram (~alram@38.122.20.226) Quit (Read error: Operation timed out)
[16:59] <darkfader> and if you see 95% busy for 1/10th you have something like a problem
[16:59] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[16:59] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:59] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:00] <darkfader> burley: one thing i missed saying: the trashing increases once the benchmark load hits a certain point (the 99% mark ;) - in real-world use you might not expose the problem as much
[17:00] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:00] * vbellur (~vijay@122.167.245.60) has joined #ceph
[17:00] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[17:00] <burley> yeah, true
[17:00] <darkfader> at least if the journal for OSD1 is not on the data disk of OSD1
[17:00] <darkfader> otherwise it'll always struggle
[17:01] * steki (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:02] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[17:02] <gchristensen> darkfader: isn't the term 'thrashing'? :)
[17:02] <darkfader> gchristensen: thats what i wrote ;)
[17:02] <gchristensen> ;)
[17:03] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[17:03] * joshd (~jdurgin@2602:306:c5db:310:d08f:8171:94b0:6500) Quit (Quit: Leaving.)
[17:04] <burley> darkfader: Thank you
[17:04] <burley> any suggestions for an inexpensive PCIE SSD with high write throughput for journaling? :)
[17:05] <burley> I hunted last week but came up empty, and we ejected the Crucial m500's for journaling already, since they couldn't keep up with the drives
[17:05] * darkfader walks by and whispers "ebay?" walks on like nothing ever happened
[17:06] <darkfader> burley: what performance would you consider good enough? it seems like you have some idea
[17:08] <burley> so, tbh what we have is probably good enough, but I'd like to optimize it as well as we can for reasonable cost -- hoping that we could saturate on reads and writes to any given chassis
[17:08] <burley> that is probably entirely unrealistic
[17:08] <burley> but I had thought we would get 1/6th the sum of all disks, and we're getting only a quarter of that in performance -- which was way off what I figured
[17:09] <burley> even if the result is good, its still way off
[17:09] <darkfader> burley: you could try to separate the osd disks, but you know they won't do >100MB/s each
[17:10] <burley> right now they are at 40MB/s
[17:10] <darkfader> the official recommendation is 1:5 ssd/disk if you want more capacity than speed, and that you should really really get something like an S3700 ssd
[17:10] <burley> we went 1:6 and it was bad
[17:10] <burley> really bad
[17:10] <darkfader> i did a lot of ssd benchmarks and they are the one where "stable performance" starts
[17:11] <darkfader> how bad is really bad? :)
[17:11] <burley> 25% of where we are now
[17:11] <darkfader> lol, damn
[17:12] <darkfader> for saturating on write there was a talk by a fujitsu guy at the ceph day, but i never found it online. or any of them. so it's probably me
[17:12] <darkfader> he had fusion io ssds in for journal
[17:12] <burley> the write specs on the intel are roughly the same
[17:12] <burley> as the crucial
[17:12] <PVi1> burley: we are using dc s3700 100gb at 1:4 with a 10gb journal partition and we are getting 100% utilization with 50MB/s writes and 60-100 iops during rados bench
[17:12] <darkfader> burley: on paper
[17:12] <burley> right, I get that
[17:13] <burley> pvi: what disks for the OSD?
[17:13] <PVi1> when I do dd with oflag=direct and blocksize greater than 4kb we get 200MB/s as per specs
[17:13] <PVi1> Seagate 3TB constelation 7200rpm SATA
[17:14] <PVi1> they can do 160-175MB/s seq write with dd
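
(The kind of raw spot check PVi1 describes, as a sketch -- the device path is an example, and this overwrites it, so scratch devices only:)

    dd if=/dev/zero of=/dev/sdg1 bs=1M count=4096 oflag=direct
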
[17:14] <gchristensen> could look into using UltraDIMMs :)
[17:14] * alram (~alram@38.122.20.226) has joined #ceph
[17:14] <PVi1> +
[17:14] * markbby (~Adium@168.94.245.4) has joined #ceph
[17:15] * alram_ (~alram@38.122.20.226) Quit (Read error: Operation timed out)
[17:15] <gchristensen> orr I guess they're ULLtra DIMMS
[17:15] <burley> Marvell no longer makes the dragonfly product line unfortunately :)
[17:16] * cok (~chk@2a02:2350:18:1012:acb4:a269:6d24:24b1) Quit (Quit: Leaving.)
[17:17] <burley> PVi1: those sata drives seem pretty similar, we went with the WD Black FZEX3003
[17:17] <burley> except that the constellation is enterprise
[17:17] <PVi1> right now rados bench gives us 1400-2600MB/s read and 140-180MB/s write from 3 nodes, each has 12x3TB sata with 4 intel dc s3700 100gb and 20gbit network connection
[17:20] <burley> we're not bonded so our reads I think peak out at network saturation
[17:20] <burley> Bandwidth (MB/sec): 267.113
[17:20] <burley> Stddev Bandwidth: 42.0878
[17:21] <burley> Max bandwidth (MB/sec): 392
[17:21] <burley> Min bandwidth (MB/sec): 0
[17:21] <burley> so that's a reasonable rados bench write result for this config then...
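
(For reference, a typical invocation behind numbers like these -- pool name and runtime are examples; --no-cleanup keeps the written objects around so the seq read test has something to read:)

    rados bench -p rbd 60 write --no-cleanup
    rados bench -p rbd 60 seq
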
[17:23] * thomnico (~thomnico@2a01:e35:8b41:120:2d4c:ab7e:22be:56e4) Quit (Quit: Ex-Chat)
[17:25] <darkfader> finally hit the right google button
[17:25] <darkfader> http://www.slideshare.net/Inktank_Ceph/fj-20140227-cephbestpractisedistributedintelligentunifiedcloudstoragev4ksp
[17:25] <darkfader> talk is there too: http://www.inktank.com/cephdays/frankfurt/
[17:26] <darkfader> like, i think this went a little under the radar because the guy came from a big iron perspective
[17:27] <darkfader> but he did it and had nice 2GB/s sustained writes (using pcie ssd). he also did comparisons with ssd-only etc and explained at what point you hit your io bw limit of the bus/cpu
[17:28] <darkfader> incredibly dry talk. but at least he did it and had nice io speed
[17:28] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Read error: Connection reset by peer)
[17:28] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:28] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[17:29] <darkfader> ah it was intel ssds - those are cheaper ;)
[17:30] * thomnico (~thomnico@2a01:e35:8b41:120:2d4c:ab7e:22be:56e4) has joined #ceph
[17:30] <burley> looking at slides now
[17:30] <burley> will watch talk later
[17:30] <darkfader> slide 24 is funny, the infiniband graph up top looking down at the others: "watcha guys doing down there"
[17:31] * manohar (~manohar@103.252.140.101) Quit (Quit: manohar)
[17:31] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) has joined #ceph
[17:32] * garphy is now known as garphy`aw
[17:34] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:34] * kapil (~ksharma@2620:113:80c0:5::2222) Quit (Quit: Leaving)
[17:37] <burley> yeah, that would be a fun unnecessary toy
[17:37] * davidzlap (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[17:46] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[17:47] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[17:49] * jordanP (~jordan@185.23.92.11) Quit (Remote host closed the connection)
[17:50] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:52] * thomnico (~thomnico@2a01:e35:8b41:120:2d4c:ab7e:22be:56e4) Quit (Quit: Ex-Chat)
[17:52] * thomnico (~thomnico@2a01:e35:8b41:120:2d4c:ab7e:22be:56e4) has joined #ceph
[17:55] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[18:02] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[18:02] * ChanServ sets mode +v andreask
[18:02] * joef (~Adium@2620:79:0:131:fc59:cf62:c12d:a0ea) has joined #ceph
[18:03] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[18:04] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[18:06] * sigsegv (~sigsegv@188.25.123.201) has joined #ceph
[18:06] * sigsegv (~sigsegv@188.25.123.201) has left #ceph
[18:07] * rendar (~I@host134-177-dynamic.251-95-r.retail.telecomitalia.it) has joined #ceph
[18:07] * sigsegv (~sigsegv@188.25.123.201) has joined #ceph
[18:10] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:19] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[18:20] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:34] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[18:35] <jtaguinerd1> hi guys, my pg_num and pgp_num values are not matching. pg_num is 1200 and pgp_num is 64. I plan to make them match, but i am certain there will be degradation.. is it advisable to immediately jump to 1200, or is it better to apply the changes little by little?
[18:47] <bandrus> jtaguinerd1: start with a relatively small jump to minimize potential impact. Take it to 128, watch the effect it has on your client operations, and then make your educated decision on continuing to gradually increase it, or to make the jump.
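
(A sketch of that incremental approach -- the pool name is an example; here only pgp_num needs to move, since pg_num is already 1200:)

    ceph osd pool set mypool pgp_num 128
    # watch `ceph -s` settle, then step up: 256, 512, ... 1200
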
[18:48] * Sysadmin88 (~IceChat77@94.4.20.0) has joined #ceph
[18:49] <jtaguinerd1> bandrus: thanks for the suggestion
[18:49] * magicrobotmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) Quit (Ping timeout: 480 seconds)
[18:50] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:51] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[18:54] * hyperbaba (~hyperbaba@80.74.175.250) has joined #ceph
[18:57] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[18:57] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[18:58] * lupu (~lupu@86.107.101.214) has joined #ceph
[19:01] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:02] * joshd (~jdurgin@38.122.20.226) has joined #ceph
[19:07] * hyperbaba (~hyperbaba@80.74.175.250) Quit (Ping timeout: 480 seconds)
[19:09] * thomnico (~thomnico@2a01:e35:8b41:120:2d4c:ab7e:22be:56e4) Quit (Quit: Ex-Chat)
[19:14] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[19:15] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:15] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[19:18] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[19:21] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Operation timed out)
[19:22] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:23] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:25] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) has joined #ceph
[19:28] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[19:36] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[19:39] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Remote host closed the connection)
[19:42] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[19:43] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[19:46] <ponyofdeath> hi, i am having a lot of disk io issues on my kvm + ceph nodes. it seems to be some kind of limiting on the kvm side. any ideas?
[19:47] * Discard (~discard@213-245-29-151.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[19:47] * thomnico (~thomnico@2a01:e35:8b41:120:2d4c:ab7e:22be:56e4) has joined #ceph
[19:52] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[19:59] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[20:04] <mtl1> Hi. I added another OSD node to my cluster this morning, and while letting the data move around after adding the first new OSD to the cluster, I had another OSD go to 85%. I've removed some data and everything is back below 85% now, but my recovery seems stuck. Is there a way to give it a kick to get it moving again?
[20:15] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[20:18] <lincolnb> gregsfortytwo: Hi again, did you ever get a chance to take a peek at those logs I sent?
[20:19] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[20:19] <lincolnb> my user was fortunately backing up their data \o/ .. just wondering if i should just start fresh at this point, if un-corrupting things will be difficult
[20:20] <gregsfortytwo> lincolnb: haven't got to it yet, but planning on it today :)
[20:20] <lincolnb> ok :)
[20:20] <lincolnb> no rush, it's friday :D
[20:21] <gregsfortytwo> next up on my list, actually, but I think there's lunch now and a meeting first
[20:26] * thomnico (~thomnico@2a01:e35:8b41:120:2d4c:ab7e:22be:56e4) Quit (Quit: Ex-Chat)
[20:41] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[20:50] * Pedras (~Adium@216.207.42.129) has joined #ceph
[20:52] <iggy> ponyofdeath: what makes you think it's limiting on qemu side?
[20:53] * angdraug (~angdraug@12.164.168.117) Quit (Remote host closed the connection)
[20:55] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[21:02] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[21:05] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[21:08] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[21:08] * magicrob1tmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) has joined #ceph
[21:12] * diegows (~diegows@200.68.116.185) has joined #ceph
[21:17] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[21:18] * alram (~alram@38.122.20.226) has joined #ceph
[21:18] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[21:23] <ponyofdeath> iggy: well ceph hosts have minimal load
[21:30] * JayJ (~jayj@157.130.21.226) has joined #ceph
[21:31] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[21:33] <ponyofdeath> how much traffic would kvm hosts generate on the mons, as I am running those on 3 of my ceph osd hosts as well
[21:33] * diegows (~diegows@200.68.116.185) Quit (Ping timeout: 480 seconds)
[21:34] * burley (~khemicals@cpe-98-28-233-158.woh.res.rr.com) Quit (Quit: burley)
[21:39] * rweeks (~rweeks@pat.hitachigst.com) has joined #ceph
[21:39] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:45] * diegows (~diegows@190.190.5.238) has joined #ceph
[21:46] <iggy> there are ways to test your theory... shut down services on some nodes so that you only have kvm running on 1 host, mons on 3, and the rest OSDs like normal
[21:49] * sarob (~sarob@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:52] * aldavud (~aldavud@213.55.184.145) has joined #ceph
[21:56] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[22:03] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[22:06] * jrcresawn (~jrcresawn@150.135.211.226) has joined #ceph
[22:07] * sz0 (~sz0@141.196.36.197) has joined #ceph
[22:07] * joef (~Adium@2620:79:0:131:fc59:cf62:c12d:a0ea) Quit (Quit: Leaving.)
[22:10] * \ask (~ask@oz.develooper.com) Quit (Ping timeout: 480 seconds)
[22:12] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:12] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[22:13] * joef (~Adium@2620:79:0:131:fc59:cf62:c12d:a0ea) has joined #ceph
[22:26] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[22:30] * aldavud (~aldavud@213.55.184.145) Quit (Ping timeout: 480 seconds)
[22:32] * lofejndif (~lsqavnbok@tor-exit.eecs.umich.edu) has joined #ceph
[22:36] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[22:41] <lincolnb> we've been doing some rough benchmarking of the MDS, which raised a question. is MDS performance I/O bound or CPU bound?
[22:41] <gchristensen> lincolnb: what did you find?
[22:43] <lincolnb> gchristensen: haven't dug that deep into it yet, I was just thinking about that now and wondering if anyone observed anything either way
[22:43] <lincolnb> we did find that the single MDS started to have problems when we were doing things like stat'ing 10M tiny files in a directory
[22:43] <ponyofdeath> iggy: kvm hosts are already separate from the ceph osd nodes.
[22:44] <lincolnb> i was just looking at a small pile of SSDs in my spares cabinet and wondering if I could get some better performance out of the MDS by putting it on SSD
[22:45] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[22:45] * vbellur (~vijay@122.167.245.60) Quit (Read error: Operation timed out)
[23:03] * fireD (~fireD@93-142-197-254.adsl.net.t-com.hr) has left #ceph
[23:07] <SpComb> ponyofdeath: like terrible-slow-takes-half-an-hour-to-boot slow or just slow?
[23:08] <SpComb> the former would be the `virsh vol-create --pool rbd` bug
[23:09] * flaxy (~afx@78.130.171.69) Quit (Quit: WeeChat 0.4.2)
[23:10] * flaxy (~afx@78.130.171.69) has joined #ceph
[23:16] <gregsfortytwo> lincolnb: I think mds is usually cpu bound
[23:16] <gregsfortytwo> although depending on workload it can also be io bound
[23:16] <gregsfortytwo> it doesn't use local disk, though, so if you wanted to improve things there you'd need to give it faster OSDs for the metadata pool :)
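
(A sketch of pointing the CephFS metadata pool at faster OSDs -- assumes a CRUSH rule with id 1 selecting only the SSD-backed OSDs already exists; "metadata" is the default CephFS metadata pool name:)

    ceph osd pool set metadata crush_ruleset 1
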
[23:19] <gregsfortytwo> (also: answered your email, not as simple as I'd hoped but you should be able to turn your cluster back on again)
[23:19] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Read error: Connection reset by peer)
[23:19] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:20] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[23:24] * rweeks (~rweeks@pat.hitachigst.com) Quit (Quit: Leaving)
[23:27] <lincolnb> gregsfortytwo: thanks for the info about MDS. i'll look at putting some SSDs in the metadata pool
[23:28] <lincolnb> also, ceph cluster back up! the tool worked :)
[23:32] * dmsimard is now known as dmsimard_away
[23:33] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[23:35] <ponyofdeath> iggy: /win5
[23:36] * ScOut3R (~ScOut3R@4E5C6CD5.dsl.pool.telekom.hu) has joined #ceph
[23:40] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[23:41] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[23:42] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:45] * flaxy (~afx@78.130.171.69) Quit (Quit: WeeChat 0.4.2)
[23:45] * flaxy (~afx@78.130.171.69) has joined #ceph
[23:49] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[23:49] * JayJ (~jayj@157.130.21.226) has joined #ceph
[23:55] * JayJ (~jayj@157.130.21.226) Quit (Read error: Operation timed out)
[23:59] * ikrstic (~ikrstic@109-92-251-185.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[23:59] * joef (~Adium@2620:79:0:131:fc59:cf62:c12d:a0ea) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.