#ceph IRC Log

IRC Log for 2014-06-19

Timestamps are in GMT/BST.

[0:00] <ponyofdeath> anyone using kvm + rbd + ceph cache tier pools with success?
[0:01] <paveraware> kind of, ponyofdeath
[0:01] <paveraware> I've hit an issue where my cache tier is very angry and using 100% of the cpu it looks like
[0:01] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[0:02] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[0:03] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[0:04] <paveraware> basically it looks like the cache pool cpu usage scales with the number of objects in it
[0:04] <paveraware> so… after about 2-3TB of data cpu is pegged
[0:05] * Sysadmin88 (~IceChat77@94.4.22.173) has joined #ceph
[0:05] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[0:07] * dmsimard is now known as dmsimard_away
[0:07] * adisney1 (~adisney1@2602:306:cddb:49d0:6888:937b:e35c:699f) Quit (Quit: Leaving)
[0:07] <ponyofdeath> paveraware: hmm i wonder why i am getting the qemu errors when trying to start up my vm
[0:08] <ponyofdeath> paveraware: is there a kernel patch or something i need?
[0:08] <ponyofdeath> paveraware: are you pointing the vm to the cache pool or the regular pool
[0:08] <paveraware> it's all librbd, you'd need a current librbd, but you point to the cache pool
[0:09] <ponyofdeath> paveraware: ahh ok that may be why
[0:09] <paveraware> you can only create/access rbd's through the cache pool
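For reference, a minimal sketch of the firefly-era commands that put a replicated cache pool in front of a base pool; the pool names "rbd" and "rbd-cache" are placeholders, not taken from the conversation above:

    ceph osd pool create rbd-cache 128 128        # replicated pool to act as the cache
    ceph osd tier add rbd rbd-cache               # attach the cache tier to the base pool
    ceph osd tier cache-mode rbd-cache writeback  # absorb writes in the cache tier
    ceph osd tier set-overlay rbd rbd-cache       # route client traffic through the cache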
[0:09] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[0:10] * imriz (~imriz@5.29.200.177) Quit (Read error: Operation timed out)
[0:10] * rpowell (~rpowell@128.135.219.215) Quit (Quit: Leaving.)
[0:12] <ponyofdeath> paveraware: hmm changed it to the cache pool, same error: error reading header
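As a point of reference for the qemu side, librbd-backed disks are addressed with an rbd: URI; a sketch only, with the pool, image, and user names assumed:

    qemu-system-x86_64 \
      -drive format=raw,file=rbd:rbd-cache/vm-disk:id=admin:conf=/etc/ceph/ceph.conf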
[0:14] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Read error: Operation timed out)
[0:15] * jksM (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[0:15] <ponyofdeath> paveraware: you can have multiple pools going through the cache one, correct?
[0:15] <lurbs> Vaguely on topic: http://www.redhat.com/enovance/
[0:15] <lurbs> Looks like they're on a bit of a buying spree.
[0:18] * redcavalier (~redcavali@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[0:19] <redcavalier> Hi, I have problems with ceph and its network configuration, maybe you guys can help.
[0:19] * xarses (~andreww@12.164.168.117) has joined #ceph
[0:20] <redcavalier> I have 3 interfaces on 3 different networks on each ceph node. I have the management network, storage network and cluster network.
[0:21] <redcavalier> I'm trying to set it up so that all cluster traffic happens on the cluster network (this seems to be working), and that openstack can query it from the storage network.
[0:21] <redcavalier> However, Ceph doesn't seem to listen on the storage network for some reason and is listening on the management network, despite setting up the storage network as
[0:21] <redcavalier> "public network" in ceph config
[0:22] <lurbs> Which parts are listening on the management network? Monitors, OSDs, or both?
[0:23] <redcavalier> scratch that, nothing appears to be listening on management or storage
[0:23] <redcavalier> or so nmap tells me
[0:23] <mongo> what does your public_network line in the host's /etc/ceph/ceph.conf look like?
[0:24] <redcavalier> mon initial members = ceph1
[0:24] <redcavalier> mon host = 172.18.0.1
[0:24] <redcavalier> public network = 192.168.24.0/24
[0:24] <redcavalier> cluster network = 172.18.0.0/24
[0:25] <mongo> the mon host should be on the public network IMHO
[0:25] <lurbs> Yeah, that.
[0:25] <redcavalier> Oh I see
[0:26] <redcavalier> Let me try, but I have a feeling this will break part of my setup
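A sketch of how that config might look with the monitor moved onto the public network; the address 192.168.24.1 is hypothetical, standing in for ceph1's public-network IP:

    [global]
    mon initial members = ceph1
    mon host = 192.168.24.1           # mon reachable on the public network
    public network = 192.168.24.0/24  # client-facing (storage) network
    cluster network = 172.18.0.0/24   # OSD replication/heartbeat traffic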
[0:30] * nljmo (~nljmo@64.125.103.162) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[0:31] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[0:33] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[0:33] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:33] * dis is now known as Guest14072
[0:34] * dis (~dis@109.110.66.173) has joined #ceph
[0:35] <sherry> does anyone know how I would be able to grant a client authentication for a pool?
[0:37] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[0:39] * Guest14072 (~dis@109.110.67.182) Quit (Ping timeout: 480 seconds)
[0:45] <redcavalier> just to follow up on my earlier issue, when I have stuff like [mon.ceph1] and [mon.ceph2] in my config file, should I use the IP on the public network or the one on the cluster network?
[0:46] * sigsegv (~sigsegv@188.26.160.142) Quit (Quit: sigsegv)
[0:47] <mongo> mon addresses need to be on the public network, but it is better to just set a range.
[0:48] <mongo> ceph clients need to contact the mon daemons; clients should not connect to the cluster network
[0:49] <redcavalier> yea, ok
[0:49] <redcavalier> thx
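Following mongo's advice, any per-monitor sections would then carry public-network addresses as well; a sketch with hypothetical hosts and IPs:

    [mon.ceph1]
    host = ceph1
    mon addr = 192.168.24.1:6789   # public network, reachable by clients

    [mon.ceph2]
    host = ceph2
    mon addr = 192.168.24.2:6789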
[0:49] <baylight> I'm trying to understand the rbd authorization paradigm. Imagine I have a large RADOS cluster which is serving thousands of individual rbd's to hundreds of Ceph clients. Each Ceph client is authenticated by Cephx, so each client is trusted. However, it seems to me that each client then has access to every single rbd in the cluster, is that correct?
[0:50] <baylight> Meaning, if someone manages to compromise any given ceph client they theoretically have access to every rbd in RADOS?
[0:50] <mongo> you can restrict it to a pool
[0:50] <mongo> but that is as fine-grained as you can go.
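A sketch of that pool-level restriction using cephx capabilities; the client and pool names here are made up for illustration:

    # key that can only read/write objects in pool 'volumes'
    ceph auth get-or-create client.guest mon 'allow r' osd 'allow rwx pool=volumes'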
[0:51] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[0:51] * The_Bishop (~bishop@cable-86-56-95-128.cust.telecolumbus.net) Quit (Read error: Operation timed out)
[0:51] <baylight> OK. Is migrating an rbd from one pool to another a possible operation?
[0:52] * xinyi (~xinyi@2406:2000:ef96:3:5404:447f:80fc:f31b) has joined #ceph
[0:53] <mongo> http://ceph.com/docs/master/man/8/rbd/#image-name
[0:54] <mongo> snap to the other pool then flatten
[0:54] <mongo> you can't mv between pools
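A sketch of that snapshot-and-flatten route, assuming a format-2 image "img1" being moved from pool "src" to pool "dst":

    rbd snap create src/img1@move    # snapshot the source image
    rbd snap protect src/img1@move   # clones require a protected snapshot
    rbd clone src/img1@move dst/img1 # clone into the destination pool
    rbd flatten dst/img1             # detach the clone from its parent
    rbd snap unprotect src/img1@move
    rbd snap rm src/img1@move
    rbd rm src/img1                  # optionally drop the original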
[0:56] * Hell_Fire_ (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[0:56] * Hell_Fire (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[0:57] <redcavalier> looks like changing the ip is causing issues with my monitor map
[0:57] <redcavalier> it's unable to start up osd.0 too
[0:58] * brambles (~xymox@s0.barwen.ch) has joined #ceph
[0:59] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[1:00] * xinyi (~xinyi@2406:2000:ef96:3:5404:447f:80fc:f31b) Quit (Ping timeout: 480 seconds)
[1:01] * nljmo (~nljmo@64.125.103.162) has joined #ceph
[1:03] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[1:08] * The_Bishop (~bishop@2001:470:50b6:0:d9d0:8747:78e5:8cb9) has joined #ceph
[1:12] * nljmo (~nljmo@64.125.103.162) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[1:12] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:14] * [1]cephtron (~cephtron@115-186-178-141.nayatel.pk) has joined #ceph
[1:14] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[1:14] * leseb (~leseb@185.21.174.206) has joined #ceph
[1:14] <mongo> change your public network to the 172 space; having a single mon makes things complex, so just have both cluster and public on one IP space until you get your feet wet.
[1:15] <paveraware> ponyofdeath: sorry went afk… I don't know about multiple pools/cache pool, I've only ever done a single ec pool + replicated cache pool
[1:17] * paveraware (~tomc@216.51.73.42) Quit (Quit: paveraware)
[1:20] * fdmanana (~fdmanana@81.193.61.209) Quit (Quit: Leaving)
[1:21] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:21] * wido__ (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[1:23] * finster_ (~finster@cmdline.guru) has joined #ceph
[1:23] * Knorrie- (knorrie@yoshi.kantoor.mendix.nl) has joined #ceph
[1:23] * Kioob (~kioob@2a01:e34:ec0a:c0f0:21e:8cff:fe07:45b6) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * finster (~finster@cmdline.guru) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Knorrie (knorrie@yoshi.kantoor.mendix.nl) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * masterpe (~masterpe@2a01:670:400::43) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * sep (~sep@2a04:2740:1:0:52e5:49ff:feeb:32) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * ccourtaut (~ccourtaut@2001:41d0:2:4a25::1) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mo- (~mo@2a01:4f8:141:3264::3) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * dennis (dennis@tilaa.krul.nu) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * odi (~quassel@2a00:12c0:1015:136::9) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mschiff (~mschiff@mx10.schiffbauer.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Meistarin (~Dont@0001c3c8.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Svedrin (svedrin@ketos.funzt-halt.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * [1]cephtron (~cephtron@115-186-178-141.nayatel.pk) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * The_Bishop (~bishop@2001:470:50b6:0:d9d0:8747:78e5:8cb9) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * redcavalier (~redcavali@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * jksM (~jks@3e6b5724.rev.stofanet.dk) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * angdraug (~angdraug@12.164.168.117) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * lupu (~lupu@86.107.101.214) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * narb (~Jeff@38.99.52.10) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * adamcrume (~quassel@2601:9:6680:47:8c06:c13c:1870:a001) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * bkero (~bkero@216.151.13.66) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mondkalbantrieb (~quassel@sama32.de) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * kwmiebach__ (~sid16855@id-16855.charlton.irccloud.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * jnq (~jnq@0001b7cc.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * racingferret (~racingfer@81.144.225.214) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * themgt (~themgt@c-76-104-28-47.hsd1.va.comcast.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * phantomcircuit (~phantomci@2600:3c01::f03c:91ff:fe73:6892) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * funnel (~funnel@23.226.237.192) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * jdmason (~jon@134.134.139.74) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * kevincox (~kevincox@4.s.kevincox.ca) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * jrcresawn (~jrcresawn@150.135.211.226) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * cowbar_ (~cow@ip-2607-F298-0001-0100-0000-0000-0000-FFFF.dreamhost.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * guppy (~quassel@guppy.xxx) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * higebu (~higebu@www3347ue.sakura.ne.jp) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * hughsaunders (~hughsaund@2001:4800:780e:510:fdaa:9d7a:ff04:4622) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * ccourtaut_ (~ccourtaut@2001:41d0:1:eed3::1) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * stj (~stj@2001:470:8b2d:bb8:21d:9ff:fe29:8a6a) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mongo (~gdahlman@voyage.voipnw.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * eternaleye (~eternaley@50.245.141.73) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * joerocklin (~joe@cpe-75-186-9-154.cinci.res.rr.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * \ask (~ask@oz.develooper.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * gford (~fford@93.93.251.146) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * qhartman_ (~qhartman@75-151-85-53-Colorado.hfc.comcastbusiness.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Fetch (fetch@gimel.cepheid.org) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * nwf_ (~nwf@67.62.51.95) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * fretb (~fretb@drip.frederik.pw) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * ismell (~ismell@host-64-17-89-79.beyondbb.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * cronix (~cronix@5.199.139.166) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * acaos (~zac@209.99.103.42) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * sadbox (~jmcguire@sadbox.org) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * tziOm (~bjornar@ns3.uniweb.no) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Gugge-47527 (gugge@kriminel.dk) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * dmsimard_away (~dmsimard@198.72.122.121) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * purpleidea (~james@199.180.99.171) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * blynch (~blynch@vm-nat.msi.umn.edu) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mdjp (~mdjp@2001:41d0:52:100::343) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * leseb_ (~leseb@88-190-214-97.rev.dedibox.fr) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * aarontc (~aarontc@aarontc-1-pt.tunnel.tserv14.sea1.ipv6.he.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Georgyo (~georgyo@shamm.as) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * carter (~carter@li98-136.members.linode.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * KindOne (kindone@0001a7db.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * tcatm (~quassel@2a01:4f8:151:13c3:5054:ff:feff:cbce) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Daviey__ (~DavieyOFT@bootie.daviey.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * kraken (~kraken@gw.sepia.ceph.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Azrael (~azrael@terra.negativeblue.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * sage (~quassel@cpe-23-242-158-79.socal.res.rr.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * s3an2_ (~root@korn.s3an.me.uk) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Guest13616 (~quassel@2607:f298:a:607:b871:797:7d9:8f8d) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Hell_Fire_ (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * masta (~masta@190.7.213.210) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * rmoe (~quassel@12.164.168.117) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * dignus (~jkooijman@t-x.dignus.nl) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * lyncos (~chatzilla@208.71.184.41) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * `10 (~10@69.169.91.14) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * muhanpong (~povian@kang.sarang.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * ponyofdeath (~vladi@cpe-66-27-98-26.san.res.rr.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * saturnine (~saturnine@ashvm.saturne.in) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * hflai_ (~hflai@alumni.cs.nctu.edu.tw) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Meths (~meths@2.25.214.22) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * KB (~oftc-webi@cpe-74-137-252-159.swo.res.rr.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Guest13361 (~oftc-webi@fw.spring.de) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * brambles (~xymox@s0.barwen.ch) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * xarses (~andreww@12.164.168.117) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * gleam (gleam@dolph.debacle.org) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * cephtron (~cephtron@58-65-166-154.nayatel.pk) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * gregmark (~Adium@68.87.42.115) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * kfei (~root@114-27-90-88.dynamic.hinet.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * allig8r (~allig8r@128.135.219.116) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * bkopilov (~bkopilov@213.57.18.60) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * simulx (~simulx@vpn.expressionanalysis.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * pinguini (~pinguini@user-78-139-236-122.tomtelnet.ru) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * overclk_ (~venky@ov42.x.rootbsd.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * bitserker1 (~toni@63.pool85-52-240.static.orange.es) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * beardo_ (~sma310@208-58-255-215.c3-0.drf-ubr1.atw-drf.pa.cable.rcn.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * darkfader (~floh@88.79.251.60) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * tchmnkyz (~jeremy@0001638b.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * terje__ (~joey@184.96.158.62) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * ctd (~root@00011932.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * pingu (~christian@mail.ponies.io) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * singler_ (~singler@zeta.kirneh.eu) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * rBEL (robbe@november.openminds.be) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * jeffhung_ (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * clayg_ (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * lollipop_ (~s51itxsyc@23.94.38.19) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * tnt_ (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * benner (~benner@162.243.49.163) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * designated (~rroberts@host-177-39-52-24.midco.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mdxi_ (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * jackhill (~jackhill@bog.hcoop.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * fouxm (~foucault@ks3363630.kimsufi.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * houkouonchi-work (~linux@12.248.40.138) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * _FL1SK (~quassel@159.118.92.60) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * devicenull (sid4013@ealing.irccloud.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * cce (~cce@50.56.54.167) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * oblu- (~o@62.109.134.112) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * shk (sid33582@charlton.irccloud.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * zz_hitsumabushi (~hitsumabu@175.184.30.148) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * eightyeight (~atoponce@pinyin.ae7.st) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * iggy (~iggy@66.220.1.110) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * cephalobot` (~ceph@ds2390.dreamservers.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * [cave] (~quassel@208.123.82.202) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * loicd (~loicd@54.242.96.84.rev.sfr.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Sysadmin88 (~IceChat77@94.4.22.173) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * zerick (~eocrospom@190.187.21.53) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * kapil (~ksharma@2620:113:80c0:5::2222) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * BranchPredictor (branch@predictor.org.pl) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * cookednoodles (~eoin@eoin.clanslots.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Vacum_ (~vovo@i59F79493.versanet.de) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * jamespd (~mucky@mucky.socket7.org) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * drankis (~drankis__@89.111.13.198) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * plantain (~plantain@106.187.96.118) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * jpierre03_ (~jpierre03@voyage.prunetwork.fr) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mlausch (~mlausch@2001:8d8:1fe:7:d35:3e9f:6f82:eaf) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mfournier (~marc@2001:4b98:dc2:41:216:3eff:fe6d:dc0b) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * verdurin (~adam@2001:8b0:281:78ec:e2cb:4eff:fe01:f767) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * jpierre03 (~jpierre03@voyage.prunetwork.fr) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * tjikkun_ (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * leseb (~leseb@185.21.174.206) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * rotbeard (~redbeard@2a02:908:df11:9480:76f0:6dff:fe3b:994d) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * ufven (~ufven@130-229-28-186-dhcp.cmm.ki.se) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * tobiash (~quassel@mail.bmw-carit.de) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * blSnoopy (~snoopy@miram.persei.mw.lg.virgo.supercluster.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * ifur (~osm@hornbill.csc.warwick.ac.uk) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * meis3 (~meise@oglarun.3st.be) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * sileht (~sileht@gizmo.sileht.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * pieterl (~pieterl@194.134.32.8) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * grifferz (~andy@bitfolk.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * rektide_ (~rektide@eldergods.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * lurbs (user@uber.geek.nz) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mastamind (~mm@193.171.234.167) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * nwrk (~nwrk@202.22.228.248) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * capri (~capri@212.218.127.222) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Clbh (~benoit@cyllene.anchor.net.au) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * midekra (~dennis@ariel.xs4all.nl) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * flaxy (~afx@78.130.171.69) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * chutz (~chutz@rygel.linuxfreak.ca) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Amto_res (~amto_res@ks312256.kimsufi.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * eqhmcow (~eqhmcow@cpe-075-177-132-024.nc.res.rr.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * garphyx (~garphy@frank.zone84.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * swills (~swills@mouf.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * markl (~mark@knm.org) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mfa298 (~mfa298@gateway.yapd.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * alfredodeza (~alfredode@198.206.133.89) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * WintermeW_ (~WintermeW@212-83-158-61.rev.poneytelecom.eu) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * brother (foobaz@vps1.hacking.dk) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Elbandi (~ea333@elbandi.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * LCF (ball8@193.231.broadband16.iol.cz) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * Psi-Jack (~psi-jack@psi-jack.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * grepory (uid29799@uxbridge.irccloud.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * vhasi (vhasi@vha.si) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * med (~medberry@00012b50.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mmgaggle (~kyle@cerebrum.dreamservers.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * MaZ- (~maz@00016955.user.oftc.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * saaby_ (~as@mail.saaby.com) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * destrudo (~destrudo@this.is.a.d4m4g3d.net) Quit (charon.oftc.net reticulum.oftc.net)
[1:23] * mschiff (~mschiff@mx10.schiffbauer.net) has joined #ceph
[1:23] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[1:23] * dennis_ (dennis@tilaa.krul.nu) has joined #ceph
[1:24] * mo- (~mo@2a01:4f8:141:3264::3) has joined #ceph
[1:24] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit ()
[1:25] * leseb (~leseb@185.21.174.206) has joined #ceph
[1:25] * cephtron (~cephtron@115-186-178-141.nayatel.pk) has joined #ceph
[1:25] * The_Bishop (~bishop@2001:470:50b6:0:d9d0:8747:78e5:8cb9) has joined #ceph
[1:25] * brambles (~xymox@s0.barwen.ch) has joined #ceph
[1:25] * Hell_Fire_ (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[1:25] * xarses (~andreww@12.164.168.117) has joined #ceph
[1:25] * redcavalier (~redcavali@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[1:25] * jksM (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[1:25] * Sysadmin88 (~IceChat77@94.4.22.173) has joined #ceph
[1:25] * masta (~masta@190.7.213.210) has joined #ceph
[1:25] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[1:25] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[1:25] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[1:25] * lupu (~lupu@86.107.101.214) has joined #ceph
[1:25] * rmoe (~quassel@12.164.168.117) has joined #ceph
[1:25] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[1:25] * rotbeard (~redbeard@2a02:908:df11:9480:76f0:6dff:fe3b:994d) has joined #ceph
[1:25] * lyncos (~chatzilla@208.71.184.41) has joined #ceph
[1:25] * gleam (gleam@dolph.debacle.org) has joined #ceph
[1:25] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[1:25] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[1:25] * narb (~Jeff@38.99.52.10) has joined #ceph
[1:25] * adamcrume (~quassel@2601:9:6680:47:8c06:c13c:1870:a001) has joined #ceph
[1:25] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[1:25] * kapil (~ksharma@2620:113:80c0:5::2222) has joined #ceph
[1:25] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[1:25] * gregmark (~Adium@68.87.42.115) has joined #ceph
[1:25] * bkero (~bkero@216.151.13.66) has joined #ceph
[1:25] * mondkalbantrieb (~quassel@sama32.de) has joined #ceph
[1:25] * kwmiebach__ (~sid16855@id-16855.charlton.irccloud.com) has joined #ceph
[1:25] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) has joined #ceph
[1:25] * KB (~oftc-webi@cpe-74-137-252-159.swo.res.rr.com) has joined #ceph
[1:25] * BranchPredictor (branch@predictor.org.pl) has joined #ceph
[1:25] * jnq (~jnq@0001b7cc.user.oftc.net) has joined #ceph
[1:25] * ufven (~ufven@130-229-28-186-dhcp.cmm.ki.se) has joined #ceph
[1:25] * racingferret (~racingfer@81.144.225.214) has joined #ceph
[1:25] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[1:25] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[1:25] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[1:25] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[1:25] * themgt (~themgt@c-76-104-28-47.hsd1.va.comcast.net) has joined #ceph
[1:25] * Vacum_ (~vovo@i59F79493.versanet.de) has joined #ceph
[1:25] * tobiash (~quassel@mail.bmw-carit.de) has joined #ceph
[1:25] * phantomcircuit (~phantomci@2600:3c01::f03c:91ff:fe73:6892) has joined #ceph
[1:25] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[1:25] * `10 (~10@69.169.91.14) has joined #ceph
[1:25] * kfei (~root@114-27-90-88.dynamic.hinet.net) has joined #ceph
[1:25] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[1:25] * funnel (~funnel@23.226.237.192) has joined #ceph
[1:25] * jdmason (~jon@134.134.139.74) has joined #ceph
[1:25] * allig8r (~allig8r@128.135.219.116) has joined #ceph
[1:25] * bkopilov (~bkopilov@213.57.18.60) has joined #ceph
[1:25] * mastamind (~mm@193.171.234.167) has joined #ceph
[1:25] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[1:25] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[1:25] * simulx (~simulx@vpn.expressionanalysis.com) has joined #ceph
[1:25] * blSnoopy (~snoopy@miram.persei.mw.lg.virgo.supercluster.net) has joined #ceph
[1:25] * jamespd (~mucky@mucky.socket7.org) has joined #ceph
[1:25] * pinguini (~pinguini@user-78-139-236-122.tomtelnet.ru) has joined #ceph
[1:25] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[1:25] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[1:25] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) has joined #ceph
[1:25] * muhanpong (~povian@kang.sarang.net) has joined #ceph
[1:25] * drankis (~drankis__@89.111.13.198) has joined #ceph
[1:25] * kevincox (~kevincox@4.s.kevincox.ca) has joined #ceph
[1:25] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) has joined #ceph
[1:25] * Kioob (~kioob@2a01:e34:ec0a:c0f0:21e:8cff:fe07:45b6) has joined #ceph
[1:25] * ponyofdeath (~vladi@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[1:25] * jrcresawn (~jrcresawn@150.135.211.226) has joined #ceph
[1:25] * mjeanson (~mjeanson@00012705.user.oftc.net) has joined #ceph
[1:25] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[1:25] * ifur (~osm@hornbill.csc.warwick.ac.uk) has joined #ceph
[1:25] * saturnine (~saturnine@ashvm.saturne.in) has joined #ceph
[1:25] * overclk_ (~venky@ov42.x.rootbsd.net) has joined #ceph
[1:25] * meis3 (~meise@oglarun.3st.be) has joined #ceph
[1:25] * bitserker1 (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[1:25] * hflai_ (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[1:25] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[1:25] * tziOm (~bjornar@ns3.uniweb.no) has joined #ceph
[1:25] * dmsimard_away (~dmsimard@198.72.122.121) has joined #ceph
[1:25] * cowbar_ (~cow@ip-2607-F298-0001-0100-0000-0000-0000-FFFF.dreamhost.com) has joined #ceph
[1:25] * guppy (~quassel@guppy.xxx) has joined #ceph
[1:25] * sadbox (~jmcguire@sadbox.org) has joined #ceph
[1:25] * higebu (~higebu@www3347ue.sakura.ne.jp) has joined #ceph
[1:25] * purpleidea (~james@199.180.99.171) has joined #ceph
[1:25] * leseb_ (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[1:25] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) has joined #ceph
[1:25] * hughsaunders (~hughsaund@2001:4800:780e:510:fdaa:9d7a:ff04:4622) has joined #ceph
[1:25] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[1:25] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[1:25] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[1:25] * ccourtaut_ (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[1:25] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[1:25] * stj (~stj@2001:470:8b2d:bb8:21d:9ff:fe29:8a6a) has joined #ceph
[1:25] * mongo (~gdahlman@voyage.voipnw.net) has joined #ceph
[1:25] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[1:25] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) has joined #ceph
[1:25] * joerocklin (~joe@cpe-75-186-9-154.cinci.res.rr.com) has joined #ceph
[1:25] * aarontc (~aarontc@aarontc-1-pt.tunnel.tserv14.sea1.ipv6.he.net) has joined #ceph
[1:25] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[1:25] * mdjp (~mdjp@2001:41d0:52:100::343) has joined #ceph
[1:25] * Georgyo (~georgyo@shamm.as) has joined #ceph
[1:25] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[1:25] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) has joined #ceph
[1:25] * \ask (~ask@oz.develooper.com) has joined #ceph
[1:25] * tcatm (~quassel@2a01:4f8:151:13c3:5054:ff:feff:cbce) has joined #ceph
[1:25] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[1:25] * gford (~fford@93.93.251.146) has joined #ceph
[1:25] * qhartman_ (~qhartman@75-151-85-53-Colorado.hfc.comcastbusiness.net) has joined #ceph
[1:25] * Fetch (fetch@gimel.cepheid.org) has joined #ceph
[1:25] * nwf_ (~nwf@67.62.51.95) has joined #ceph
[1:25] * sage (~quassel@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[1:25] * Daviey__ (~DavieyOFT@bootie.daviey.com) has joined #ceph
[1:25] * s3an2_ (~root@korn.s3an.me.uk) has joined #ceph
[1:25] * fretb (~fretb@drip.frederik.pw) has joined #ceph
[1:25] * ismell (~ismell@host-64-17-89-79.beyondbb.com) has joined #ceph
[1:25] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[1:25] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) has joined #ceph
[1:25] * cronix (~cronix@5.199.139.166) has joined #ceph
[1:25] * Guest13616 (~quassel@2607:f298:a:607:b871:797:7d9:8f8d) has joined #ceph
[1:25] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[1:25] * acaos (~zac@209.99.103.42) has joined #ceph
[1:25] * beardo_ (~sma310@208-58-255-215.c3-0.drf-ubr1.atw-drf.pa.cable.rcn.com) has joined #ceph
[1:25] * darkfader (~floh@88.79.251.60) has joined #ceph
[1:25] * _FL1SK (~quassel@159.118.92.60) has joined #ceph
[1:25] * tchmnkyz (~jeremy@0001638b.user.oftc.net) has joined #ceph
[1:25] * [cave] (~quassel@208.123.82.202) has joined #ceph
[1:25] * terje__ (~joey@184.96.158.62) has joined #ceph
[1:25] * iggy (~iggy@66.220.1.110) has joined #ceph
[1:25] * ctd (~root@00011932.user.oftc.net) has joined #ceph
[1:25] * pingu (~christian@mail.ponies.io) has joined #ceph
[1:25] * singler_ (~singler@zeta.kirneh.eu) has joined #ceph
[1:25] * cephalobot` (~ceph@ds2390.dreamservers.com) has joined #ceph
[1:25] * rBEL (robbe@november.openminds.be) has joined #ceph
[1:25] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[1:25] * jeffhung_ (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[1:25] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[1:25] * clayg_ (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) has joined #ceph
[1:25] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) has joined #ceph
[1:25] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) has joined #ceph
[1:25] * lollipop_ (~s51itxsyc@23.94.38.19) has joined #ceph
[1:25] * tnt_ (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) has joined #ceph
[1:25] * zz_hitsumabushi (~hitsumabu@175.184.30.148) has joined #ceph
[1:25] * benner (~benner@162.243.49.163) has joined #ceph
[1:25] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[1:25] * Guest13361 (~oftc-webi@fw.spring.de) has joined #ceph
[1:25] * designated (~rroberts@host-177-39-52-24.midco.net) has joined #ceph
[1:25] * cce (~cce@50.56.54.167) has joined #ceph
[1:25] * mdxi_ (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[1:25] * oblu- (~o@62.109.134.112) has joined #ceph
[1:25] * jackhill (~jackhill@bog.hcoop.net) has joined #ceph
[1:25] * loicd (~loicd@54.242.96.84.rev.sfr.net) has joined #ceph
[1:25] * fouxm (~foucault@ks3363630.kimsufi.com) has joined #ceph
[1:25] * eightyeight (~atoponce@pinyin.ae7.st) has joined #ceph
[1:25] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[1:25] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) has joined #ceph
[1:25] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[1:25] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[1:25] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[1:25] * shk (sid33582@charlton.irccloud.com) has joined #ceph
[1:25] * devicenull (sid4013@ealing.irccloud.com) has joined #ceph
[1:25] * Meths (~meths@2.25.214.22) has joined #ceph
[1:25] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[1:25] * plantain (~plantain@106.187.96.118) has joined #ceph
[1:25] * jpierre03_ (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[1:25] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[1:25] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[1:25] * mlausch (~mlausch@2001:8d8:1fe:7:d35:3e9f:6f82:eaf) has joined #ceph
[1:25] * mfournier (~marc@2001:4b98:dc2:41:216:3eff:fe6d:dc0b) has joined #ceph
[1:25] * jpierre03 (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[1:25] * tjikkun_ (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[1:25] * verdurin (~adam@2001:8b0:281:78ec:e2cb:4eff:fe01:f767) has joined #ceph
[1:25] * pieterl (~pieterl@194.134.32.8) has joined #ceph
[1:25] * grifferz (~andy@bitfolk.com) has joined #ceph
[1:25] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[1:25] * rektide_ (~rektide@eldergods.com) has joined #ceph
[1:25] * lurbs (user@uber.geek.nz) has joined #ceph
[1:25] * nwrk (~nwrk@202.22.228.248) has joined #ceph
[1:25] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[1:25] * capri (~capri@212.218.127.222) has joined #ceph
[1:25] * Clbh (~benoit@cyllene.anchor.net.au) has joined #ceph
[1:25] * midekra (~dennis@ariel.xs4all.nl) has joined #ceph
[1:25] * sep (~sep@2a04:2740:1:0:52e5:49ff:feeb:32) has joined #ceph
[1:25] * flaxy (~afx@78.130.171.69) has joined #ceph
[1:25] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[1:25] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[1:25] * ccourtaut (~ccourtaut@2001:41d0:2:4a25::1) has joined #ceph
[1:25] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[1:25] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) has joined #ceph
[1:25] * Amto_res (~amto_res@ks312256.kimsufi.com) has joined #ceph
[1:25] * eqhmcow (~eqhmcow@cpe-075-177-132-024.nc.res.rr.com) has joined #ceph
[1:25] * garphyx (~garphy@frank.zone84.net) has joined #ceph
[1:25] * swills (~swills@mouf.net) has joined #ceph
[1:25] * markl (~mark@knm.org) has joined #ceph
[1:25] * mfa298 (~mfa298@gateway.yapd.net) has joined #ceph
[1:25] * grepory (uid29799@uxbridge.irccloud.com) has joined #ceph
[1:25] * Meistarin (~Dont@0001c3c8.user.oftc.net) has joined #ceph
[1:25] * vhasi (vhasi@vha.si) has joined #ceph
[1:25] * Psi-Jack (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[1:25] * LCF (ball8@193.231.broadband16.iol.cz) has joined #ceph
[1:25] * destrudo (~destrudo@this.is.a.d4m4g3d.net) has joined #ceph
[1:25] * Elbandi (~ea333@elbandi.net) has joined #ceph
[1:25] * saaby_ (~as@mail.saaby.com) has joined #ceph
[1:25] * mmgaggle (~kyle@cerebrum.dreamservers.com) has joined #ceph
[1:25] * brother (foobaz@vps1.hacking.dk) has joined #ceph
[1:25] * odi (~quassel@2a00:12c0:1015:136::9) has joined #ceph
[1:25] * med (~medberry@00012b50.user.oftc.net) has joined #ceph
[1:25] * WintermeW_ (~WintermeW@212-83-158-61.rev.poneytelecom.eu) has joined #ceph
[1:25] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) has joined #ceph
[1:25] * MaZ- (~maz@00016955.user.oftc.net) has joined #ceph
[1:25] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[1:25] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[1:27] * ChanServ sets mode +v sage
[1:35] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:38] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:44] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:52] * nljmo (~nljmo@64.125.103.162) has joined #ceph
[1:53] * xinyi (~xinyi@2406:2000:ef96:3:31ae:68e7:7e2:5bba) has joined #ceph
[1:54] * nljmo (~nljmo@64.125.103.162) Quit ()
[1:58] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[1:59] * brytown (~brytown@2620:79:0:8204:354a:89f6:921c:78c8) has joined #ceph
[2:01] * xinyi (~xinyi@2406:2000:ef96:3:31ae:68e7:7e2:5bba) Quit (Ping timeout: 480 seconds)
[2:01] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[2:03] * nljmo (~nljmo@64.125.103.162) has joined #ceph
[2:04] * cephtron (~cephtron@115-186-178-141.nayatel.pk) Quit (Quit: HydraIRC -> http://www.hydrairc.com <- Organize your IRC)
[2:06] * rmoe (~quassel@12.164.168.117) Quit (Read error: Operation timed out)
[2:12] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) has joined #ceph
[2:16] * markbby (~Adium@168.94.245.3) has joined #ceph
[2:19] * brytown (~brytown@2620:79:0:8204:354a:89f6:921c:78c8) has left #ceph
[2:19] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[2:21] * markbby (~Adium@168.94.245.2) has joined #ceph
[2:25] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:27] * tank100 (~tank@84.200.17.138) has joined #ceph
[2:29] * keds (~ked@cpc6-pool14-2-0-cust202.15-1.cable.virginm.net) has joined #ceph
[2:30] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[2:37] * diegows (~diegows@r186-52-137-65.dialup.adsl.anteldata.net.uy) has joined #ceph
[2:38] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:39] * racingferret (~racingfer@81.144.225.214) Quit (Quit: Leaving)
[2:42] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[2:49] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[2:53] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[2:53] * xinyi (~xinyi@corp-nat.peking.corp.yahoo.com) has joined #ceph
[2:54] * masta (~masta@190.7.213.210) Quit (Quit: Leaving...)
[3:01] * xinyi (~xinyi@corp-nat.peking.corp.yahoo.com) Quit (Read error: Operation timed out)
[3:02] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:05] * KB (~oftc-webi@cpe-74-137-252-159.swo.res.rr.com) Quit (Quit: Page closed)
[3:09] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[3:09] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:11] * diegows (~diegows@r186-52-137-65.dialup.adsl.anteldata.net.uy) Quit (Read error: Operation timed out)
[3:17] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[3:20] * joef1 (~Adium@2601:9:2a00:690:ec2b:cb92:f3e1:4c78) has joined #ceph
[3:20] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:22] * shang (~ShangWu@175.41.48.77) has joined #ceph
[3:31] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[3:33] * The_Bishop (~bishop@2001:470:50b6:0:d9d0:8747:78e5:8cb9) Quit (Ping timeout: 480 seconds)
[3:37] * brytown (~brytown@142-254-47-204.dsl.dynamic.sonic.net) has joined #ceph
[3:37] * brytown (~brytown@142-254-47-204.dsl.dynamic.sonic.net) Quit ()
[3:43] * nljmo (~nljmo@64.125.103.162) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[3:44] * zhaochao (~zhaochao@124.205.245.26) has joined #ceph
[3:49] * diegows (~diegows@r190-64-89-10.su-static.adinet.com.uy) has joined #ceph
[3:50] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[3:52] * hijacker (~hijacker@213.91.163.5) Quit (Ping timeout: 480 seconds)
[3:53] * xinyi (~xinyi@2406:2000:ef96:3:74c0:c5c6:44e7:db75) has joined #ceph
[3:53] * masta (~masta@190.7.205.254) has joined #ceph
[3:56] * joef1 (~Adium@2601:9:2a00:690:ec2b:cb92:f3e1:4c78) Quit (Quit: Leaving.)
[4:00] * huangjun (~kvirc@111.174.238.55) has joined #ceph
[4:00] * haomaiwang (~haomaiwan@112.193.131.137) has joined #ceph
[4:01] * xinyi (~xinyi@2406:2000:ef96:3:74c0:c5c6:44e7:db75) Quit (Remote host closed the connection)
[4:04] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[4:08] * haomaiwang (~haomaiwan@112.193.131.137) Quit (Ping timeout: 480 seconds)
[4:14] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[4:15] * redcavalier (~redcavali@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit ()
[4:16] * diegows (~diegows@r190-64-89-10.su-static.adinet.com.uy) Quit (Ping timeout: 480 seconds)
[4:19] * masta (~masta@190.7.205.254) Quit (Quit: Leaving...)
[4:19] * joef1 (~Adium@2601:9:2a00:690:ec1d:a3b9:d9c6:8bed) has joined #ceph
[4:19] * joef1 (~Adium@2601:9:2a00:690:ec1d:a3b9:d9c6:8bed) Quit ()
[4:31] * lalatenduM (~lalatendu@122.167.7.156) has joined #ceph
[4:44] * bkopilov (~bkopilov@213.57.18.60) Quit (Ping timeout: 480 seconds)
[4:48] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[5:01] * haomaiwang (~haomaiwan@112.193.131.137) has joined #ceph
[5:03] * lalatenduM (~lalatendu@122.167.7.156) Quit (Quit: Leaving)
[5:06] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[5:09] * haomaiwang (~haomaiwan@112.193.131.137) Quit (Ping timeout: 480 seconds)
[5:11] * The_Bishop (~bishop@e179162065.adsl.alicedsl.de) has joined #ceph
[5:13] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) has joined #ceph
[5:17] <sherry> How would I be able to turn journaling off on firefly with btrfs?
[5:20] * Vacum (~vovo@i59F7A9C7.versanet.de) has joined #ceph
[5:22] <lurbs> As in the Ceph journal? Currently you can't, it's required. I believe they're working on other backends (LevelDB, for one) where the journal isn't necessary, though.
[5:25] <sherry> http://www.slideshare.net/Inktank_Ceph/ceph-performance > Why is it so good? • No more journal! Yay! • Object backends have built-in atomic functions
[5:27] <sherry> you're right lurbs...
[5:27] * Vacum_ (~vovo@i59F79493.versanet.de) Quit (Ping timeout: 480 seconds)
[5:30] <sherry> I have a problem where, when I remove objects from a single CephFS, they aren't deleted from the pool. Do you have any idea what the possible reasons could be?
[5:31] <lurbs> I haven't really touched CephFS, sorry.
[5:31] <sherry> thanks, btw
[5:45] * hai (~haiquan51@58.213.102.114) has joined #ceph
[5:46] <hai> Hi, I've encountered an issue: the ceph cluster always stays around 13% degraded
[5:46] <hai> how do I fix it? thanks a lot!
[5:47] <hai> degraded (13.315%); 11/4994694 unfound (0.000%)
[5:53] * haomaiwang (~haomaiwan@112.193.131.137) has joined #ceph
[5:56] <hai> any idea? thanks
[6:01] * haomaiwang (~haomaiwan@112.193.131.137) Quit (Ping timeout: 480 seconds)
[6:08] * nljmo (~nljmo@173-11-110-227-SFBA.hfc.comcastbusiness.net) has joined #ceph
[6:11] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[6:11] <hai> Hi, I've encountered an issue: the ceph cluster always stays around 13% degraded. How do I fix it? thanks a lot!
[6:11] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[6:12] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[6:15] * xinyi_ (~xinyi@2406:2000:ef96:e:154a:7c74:1bcd:3045) has joined #ceph
[6:15] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[6:16] * xinyi_ (~xinyi@2406:2000:ef96:e:154a:7c74:1bcd:3045) Quit (Remote host closed the connection)
[6:16] * xinyi (~xinyi@2406:2000:ef96:e:154a:7c74:1bcd:3045) has joined #ceph
[6:18] * themgt (~themgt@c-76-104-28-47.hsd1.va.comcast.net) Quit (Quit: themgt)
[6:28] * vbellur (~vijay@nat-pool-blr-t.redhat.com) has joined #ceph
[6:30] <sherry> hai: what does ceph -s say?
[6:33] * nhm (~nhm@74.203.127.5) has joined #ceph
[6:33] * ChanServ sets mode +o nhm
[6:37] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) has joined #ceph
[6:42] * wschulze (~wschulze@12.7.204.3) has joined #ceph
[6:45] * nljmo (~nljmo@173-11-110-227-SFBA.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[6:47] * nljmo (~nljmo@173-11-110-227-SFBA.hfc.comcastbusiness.net) has joined #ceph
[6:47] * xinyi_ (~xinyi@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[6:48] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[6:51] * xinyi (~xinyi@2406:2000:ef96:e:154a:7c74:1bcd:3045) Quit (Ping timeout: 480 seconds)
[6:54] * haomaiwang (~haomaiwan@112.193.131.137) has joined #ceph
[7:00] <huangjun> hai: what about the "ceph osd dump" output?
[7:02] * haomaiwang (~haomaiwan@112.193.131.137) Quit (Ping timeout: 480 seconds)
[7:02] * wschulze1 (~wschulze@12.7.204.3) has joined #ceph
[7:04] * wschulze (~wschulze@12.7.204.3) Quit (Ping timeout: 480 seconds)
[7:05] * nhm (~nhm@74.203.127.5) Quit (Read error: Operation timed out)
[7:06] * wschulze1 (~wschulze@12.7.204.3) Quit ()
[7:07] * haomaiwang (~haomaiwan@112.193.131.137) has joined #ceph
[7:09] * oblu- (~o@62.109.134.112) Quit (Quit: ~)
[7:09] <hai> 68.1.54:6837/4032241 exists,up 29cd921c-32cc-42f4-9b01-cfa51d964962
[7:09] <hai> osd.51 up in weight 1 up_from 117499 up_thru 117689 down_at 117497 last_clean_interval [114142,117497) 192.168.1.54:6821/32366 192.168.1.54:6802/4032366 192.168.1.54:6828/4032366 exists,up 7db9a985-b440-4cfc-a921-bec3bb6988ee
[7:09] <hai> osd.52 up in weight 1 up_from 117499 up_thru 117632 down_at 117497 last_clean_interval [114142,117497) 192.168.1.54:6824/32491 192.168.1.54:6811/3032491 192.168.1.54:6813/3032491 exists,up 3f5d2c83-8fa4-4e3c-b4db-3f8d9f48763a
[7:09] <hai> osd.53 up in weight 1 up_from 117523 up_thru 117633 down_at 117522 last_clean_interval [114142,117522) 192.168.1.54:6827/32616 192.168.1.54:6807/6032616 192.168.1.54:6823/6032616 exists,up 5e22e3b9-822a-40f2-9a7d-04dd8cac0eaf
[7:09] <hai> osd.54 up in weight 1 up_from 116185 up_thru 117632 down_at 116181 last_clean_interval [114142,116183) 192.168.1.54:6830/32740 192.168.1.54:6819/1032740 192.168.1.54:6825/1032740 exists,up 535625c3-7606-4391-a7c7-799121cc7f98
[7:09] <hai> pg_temp 2.2 [25,37,28,5]
[7:09] <hai> pg_temp 2.6 [20,41,6]
[7:09] <hai> pg_temp 2.8 [12,34,49]
[7:09] <hai> pg_temp 2.15 [19,42,32,48]
[7:09] <hai> pg_temp 2.26 [30,42,1,2]
[7:09] <hai> pg_temp 2.2a [10,43,47,54]
[7:09] <hai> pg_temp 2.2b [23,42,46]
[7:09] <hai> pg_temp 2.31 [18,36,46,15]
[7:09] <hai> pg_temp 2.38 [18,1,36,48]
[7:09] <hai> pg_temp 2.3c [48,43,12,29]
[7:09] <hai> Hi Sherry, like these
[7:10] <lupu> hai: use a paste service
[7:10] * xinyi_ (~xinyi@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[7:11] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[7:11] * nljmo (~nljmo@173-11-110-227-SFBA.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[7:12] * nljmo (~nljmo@173-11-110-227-SFBA.hfc.comcastbusiness.net) has joined #ceph
[7:13] * nljmo_ (~nljmo@173-11-110-227-SFBA.hfc.comcastbusiness.net) has joined #ceph
[7:16] * cowbar_ (~cow@ip-2607-F298-0001-0100-0000-0000-0000-FFFF.dreamhost.com) Quit (Remote host closed the connection)
[7:17] * oblu (~o@62.109.134.112) has joined #ceph
[7:19] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[7:19] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[7:19] * leseb (~leseb@185.21.174.206) has joined #ceph
[7:20] * nljmo (~nljmo@173-11-110-227-SFBA.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[7:24] <huangjun> hai: do you have the data pool replica size set to 3?
[7:24] <hai> yes
[7:24] <hai> the rbd pool replica size is set to 3
[7:24] <huangjun> do you have more than 3 hosts?
[7:24] * ikrstic (~ikrstic@178-222-94-242.dynamic.isp.telekom.rs) has joined #ceph
[7:25] <hai> have 5 hosts
[7:25] <huangjun> any osds down?
[7:26] <hai> 55 osds in total, and the status is ok
[7:27] <huangjun> can you paste the output of "ceph pg dump|grep degraded"
[7:27] <hai> earlier we removed an osd node, then reinstalled it into the cluster
[7:28] <hai> [root@data01 /]# ceph pg dump |grep degraded |wc -l
[7:28] <hai> 346
[7:28] <hai> 2.1440 822 0 822 0 3398451200 0 0 active+degraded+remapped+wait_backfill 2014-06-19 10:31:08.103410 117729'42986 117729'249128 [5,17,53] [30,5,51] 82533'12628 2014-04-13 18:36:11.827488 82533'12628 2014-04-13 18:36:11.827488
[7:28] <hai> 2.1443 793 0 793 0 3299037184 0 0 active+degraded+remapped+wait_backfill 2014-06-19 10:31:07.969780 117729'110948 117729'884122 [48,8,36] [20,48,28] 61681'6425 2014-04-07 13:57:55.368632 61681'6425 2014-04-07 13:57:55.368632
[7:28] <hai> 2.1438 765 0 765 0 3159659008 0 0 active+degraded+remapped+wait_backfill 2014-06-19 10:31:08.778777 117729'42879 117729'311965 [11,43,46] [32,11,50] 82533'10663 2014-04-13 20:02:13.268186 82533'10663 2014-04-13 20:02:13.268186
[7:28] <hai> 2.142e 803 0 803 0 3330437120 0 0 active+degraded+remapped+wait_backfill 2014-06-19 10:15:37.093374 117729'18162 117729'254978 [54,14,28] [6,54,23] 61673'10120 2014-04-07 13:07:48.505211 61673'10120 2014-04-07 13:07:48.505211
[7:28] <hai> 2.142a 792 0 792 0 3269873664 0 0 active+degraded+remapped+wait_backfill 2014-06-19 10:15:58.739972 117729'18955 117729'299677 [48,7,21] [1,48,54] 61657'14000 2014-04-07 15:32:22.418854 61657'14000 2014-04-07 15:32:22.418854
[7:28] <hai> 2.1420 792 2 794 0 3290566656 0 0 active+recovery_wait+degraded+remapped 2014-06-19 10:15:51.657889 117400'26154 117729'183418 [54,41,24] [10,54,49] 61657'8039 2014-04-07 13:15:55.336208 61657'8039 2014-04-07 13:15:55.336208
[7:28] <hai> 2.141b 816 0 816 0 3370090496 0 0 active+degraded+remapped+wait_backfill 2014-06-19 10:31:07.966140 117729'957346 117729'1370659 [14,49,25] [16,14,28] 82533'34798 2014-04-13 18:12:58.354782 82533'17226 2014-04-12 17:29:41.757776
[7:28] <hai> 2.1411 781 0 781 0 3249393664 0 0 active+degraded+remapped+wait_backfill 2014-06-19 10:15:40.560182 117729'96817 117729'427132 [34,1,11] [20,34,48] 82533'13305 2014-04-14 03:09:18.464993 61657'12940 2014-04-11 02:46:05
[7:28] <hai> like these
[7:29] <huangjun> do you execute "ceph osd rm osdid" and "ceph osd crush remove osd.id"?
[7:30] <hai> yes
[7:30] <hai> ceph auth del osd.id
[7:32] <huangjun> ok, have you tried to restart osd daemons?
[7:33] <sherry> hai: I think u may need to scrub some of ur OSDs that have degraded PGs
[7:33] <hai> we already removed all osds on this server, data04, then reinstalled the ceph osd role and re-added it to the cluster
[7:35] <hai> how to scrub some of osds ?
[7:35] <huangjun> before you remove the osds on data04, do you set "ceph osd set noout" or "ceph osd set nodown"
[7:36] <hai> first, find which osds store the objects, then scrub those osds, right?
[7:36] <huangjun> scrub will check the data between your primary and replicas; i think it will not help, but you can try it
[7:37] <sherry> https://ceph.com/docs/master/rados/operations/control/
[7:37] * tiger (~textual@58.213.102.114) has joined #ceph
[7:38] <sherry> ceph osd scrub {osd-num}
[7:38] <hai> all osd need to do it ?
[7:38] <sherry> first find the PGs that are degraded
[7:39] <sherry> then find out what are their OSDs
[7:39] <hai> ok, thanks a lot , I will try it
[7:39] <sherry> huangjun might be right, that might not help. that was only my best guess.
[7:39] <huangjun> or "ceph pg dump|grep degraded| awk '{print $1}'" will find the degraded pg's pgid
[7:40] * vbellur (~vijay@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[7:40] <huangjun> then you can use "ceph pg repair pgid" to try to repair the pgs
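(A minimal sketch of the workflow sherry and huangjun describe, stitched together from this exchange; the pg id and osd number below are taken from hai's paste and are purely illustrative:)

    ceph pg dump | grep degraded | awk '{print $1}'   # list the degraded pg ids
    ceph pg map 2.1440                                # show the up/acting osds for one of them
    ceph osd scrub 30                                 # scrub an osd from that pg's acting set
    ceph pg repair 2.1440                             # or try repairing the pg directly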
[7:41] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[7:43] * xinyi_ (~xinyi@2406:2000:ef96:e:d803:1d51:6cc4:13ef) has joined #ceph
[7:43] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[7:45] * thb (~me@2a02:2028:95:4d60:2caf:9a85:40a9:1400) has joined #ceph
[7:47] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[7:49] * sleinen1 (~Adium@2001:620:0:26:3497:327c:86a:5c43) has joined #ceph
[7:52] * tiger_ (~textual@58.213.102.114) has joined #ceph
[7:53] * tiger (~textual@58.213.102.114) Quit (Ping timeout: 480 seconds)
[7:55] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:56] * vbellur (~vijay@209.132.188.8) has joined #ceph
[7:57] * sleinen1 (~Adium@2001:620:0:26:3497:327c:86a:5c43) Quit (Quit: Leaving.)
[8:03] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[8:03] * Hell_Fire_ (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[8:06] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[8:08] * ade (~abradshaw@dslb-092-078-248-076.pools.arcor-ip.net) has joined #ceph
[8:09] * ade (~abradshaw@dslb-092-078-248-076.pools.arcor-ip.net) Quit ()
[8:14] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[8:17] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:18] * xinyi_ (~xinyi@2406:2000:ef96:e:d803:1d51:6cc4:13ef) Quit (Ping timeout: 480 seconds)
[8:22] * julian_ (~julianwa@125.70.135.10) has joined #ceph
[8:23] * hai (~haiquan51@58.213.102.114) Quit (Ping timeout: 480 seconds)
[8:23] * hai (~haiquan51@58.213.102.114) has joined #ceph
[8:24] * cowbar (~cow@ip-2607-F298-0001-0100-0000-0000-0000-FFFF.dreamhost.com) has joined #ceph
[8:29] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[8:30] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[8:32] <sherry> what is the right form of osd journal dev in ceph.conf, I want to do something like: osd_journal_devs = /dev/disk/by-uuid/...
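(For what it's worth, "osd_journal_devs" is not a key ceph.conf knows about; the relevant option is "osd journal", set per daemon. A sketch with a placeholder uuid:)

    [osd.0]
        osd journal = /dev/disk/by-uuid/<journal-partition-uuid>   ; a block-device journal is used as-is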
[8:34] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[8:34] * Sysadmin88 (~IceChat77@94.4.22.173) Quit (Quit: Never put off till tomorrow, what you can do the day after tomorrow)
[8:35] * aldavud (~aldavud@213.55.176.178) has joined #ceph
[8:41] * vbellur (~vijay@nat-pool-blr-t.redhat.com) has joined #ceph
[8:41] * hai (~haiquan51@58.213.102.114) Quit (Quit: hai)
[8:43] * sleinen (~Adium@2001:620:0:26:5152:88dd:309d:4762) has joined #ceph
[8:49] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[8:49] * xinyi (~xinyi@2406:2000:ef96:e:d803:1d51:6cc4:13ef) has joined #ceph
[8:49] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:51] * ajazdzewski (~quassel@lpz-66.sprd.net) has joined #ceph
[8:52] * haomaiwang (~haomaiwan@112.193.131.137) Quit (Remote host closed the connection)
[8:52] * haomaiwang (~haomaiwan@li721-169.members.linode.com) has joined #ceph
[8:53] * rendar (~I@host46-177-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[8:53] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[8:55] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[8:57] * xinyi (~xinyi@2406:2000:ef96:e:d803:1d51:6cc4:13ef) Quit (Ping timeout: 480 seconds)
[9:00] * thomnico (~thomnico@2a01:e35:8b41:120:2891:86a8:c9a9:6075) has joined #ceph
[9:01] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:03] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:04] * haomaiwa_ (~haomaiwan@112.193.131.137) has joined #ceph
[9:09] * drankis (~drankis__@89.111.13.198) Quit (Remote host closed the connection)
[9:11] * haomaiwang (~haomaiwan@li721-169.members.linode.com) Quit (Ping timeout: 480 seconds)
[9:14] * drankis (~drankis__@159.148.207.145) has joined #ceph
[9:14] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) has joined #ceph
[9:21] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[9:23] * drankis (~drankis__@159.148.207.145) Quit (Ping timeout: 480 seconds)
[9:31] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[9:33] * aldavud (~aldavud@213.55.176.178) Quit (Ping timeout: 480 seconds)
[9:34] * allsystemsarego (~allsystem@79.115.62.26) has joined #ceph
[9:34] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:34] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[9:35] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:40] <ssejourne> hi. I'm using kernel 3.2.0 with firefly. Do I have to use "crush tunables legacy" or can I use "crush tunables optimal" ?
[9:42] <ssejourne> or do I have to compile my kernel 3.X ?
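(For reference, the tunables profile is switched with a single command; old kernel clients such as 3.2 generally cannot decode the newer profiles, so the usual choice there is legacy. A sketch, not a recommendation:)

    ceph osd crush tunables legacy    # keep maps decodable by old kernel clients
    ceph osd crush tunables optimal   # only once every client is recent enough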
[9:43] <tziOm> loicd, http://pastie.org/9304383
[9:44] <tziOm> loicd, this is after a successful "ceph-disk-prepare --zap-disk --fs-type xfs /dev/sde"
[9:45] <tziOm> loicd, the way to solve this is to manually get osd.X key and put in keyring
[9:46] <tziOm> loicd, can some of this be caused by ceph-disk dependency on some settings ceph-deploy sets?
[9:46] <tziOm> (not using ceph-deploy)
[9:47] <tziOm> ceph auth get osd.2 > /etc/ceph/keyring.osd.2
[9:47] <tziOm> ..and then activate succeeds
[9:48] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[9:52] * tiger (~textual@58.213.102.114) has joined #ceph
[9:55] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[9:55] * tiger_ (~textual@58.213.102.114) Quit (Ping timeout: 480 seconds)
[9:59] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[10:03] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Quit: Leaving)
[10:03] * tiger (~textual@58.213.102.114) Quit (Quit: Textual IRC Client: www.textualapp.com)
[10:06] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:07] <loicd> tziOm: if you ceph-disk activate /dev/sde1 (i.e. do not use --activate-key ), does it work ? By default it will try to use the osd bootstrap key i.e /var/lib/ceph/bootstrap-osd/ceph.keyring )
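(A sketch of the default path loicd describes: put the bootstrap-osd key where ceph-disk expects it, then activate without --activate-key. Paths are the stock ones:)

    mkdir -p /var/lib/ceph/bootstrap-osd
    ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
    ceph-disk activate /dev/sde1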
[10:14] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[10:14] * haomaiwa_ (~haomaiwan@112.193.131.137) Quit (Read error: Connection reset by peer)
[10:18] * mtl2 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[10:19] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[10:19] * dignus_ (~jkooijman@t-x.dignus.nl) has joined #ceph
[10:20] * joef1 (~Adium@2620:79:0:131:3c84:3eb4:dc70:a996) has joined #ceph
[10:20] * dignus (~jkooijman@t-x.dignus.nl) Quit (Read error: Connection reset by peer)
[10:20] * saturnin1 (~saturnine@ashvm.saturne.in) has joined #ceph
[10:20] * saturnine (~saturnine@ashvm.saturne.in) Quit (Read error: Connection reset by peer)
[10:22] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Operation timed out)
[10:23] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) Quit (Read error: Operation timed out)
[10:26] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) has joined #ceph
[10:36] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:39] * vbellur (~vijay@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[10:40] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[10:53] * vbellur (~vijay@209.132.188.8) has joined #ceph
[10:55] <sherry> what could be the possible reason that the objects in my pool cannot be deleted after I remove all of the objects in CephFS…?
[10:58] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[10:59] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:04] * thomnico (~thomnico@2a01:e35:8b41:120:2891:86a8:c9a9:6075) Quit (Quit: Ex-Chat)
[11:09] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:11] * thomnico (~thomnico@2a01:e35:8b41:120:2891:86a8:c9a9:6075) has joined #ceph
[11:14] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Ping timeout: 480 seconds)
[11:15] * ikrstic (~ikrstic@178-222-94-242.dynamic.isp.telekom.rs) Quit (Remote host closed the connection)
[11:15] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[11:16] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[11:19] * xinyi (~xinyi@2406:2000:ef96:3:e59f:7565:1c0:1421) has joined #ceph
[11:25] * jtangwk (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[11:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:27] * michalefty (~micha@p4FC9BA17.dip0.t-ipconnect.de) has joined #ceph
[11:27] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (Quit: ZNC - http://znc.sourceforge.net)
[11:28] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[11:29] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[11:37] * fdmanana (~fdmanana@bl4-61-209.dsl.telepac.pt) has joined #ceph
[11:40] * Kioob (~kioob@2a01:e34:ec0a:c0f0:21e:8cff:fe07:45b6) Quit (Ping timeout: 480 seconds)
[11:40] * thb (~me@2a02:2028:95:4d60:2caf:9a85:40a9:1400) has joined #ceph
[11:40] * michalefty (~micha@p4FC9BA17.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[11:41] * michalefty (~micha@p4FC9BA17.dip0.t-ipconnect.de) has joined #ceph
[11:41] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) has joined #ceph
[11:41] <ghartz> sherry, when I used ZFS I had this issue
[11:42] <ghartz> xattr=sa on the zpool
[11:42] <sherry> ghartz: May I ask your client permissions for osd, mds and mon?
[11:42] <ghartz> hmm, nothing funky. Only used admin
[11:43] <sherry> what is xattr=sa?
[11:43] <ghartz> it's a parameter for ZFS
[11:43] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[11:44] <ghartz> this will set the xattr in file instead of a directory
[11:44] <ghartz> but it's ZFS specific
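(The ZFS knob ghartz is referring to, with a hypothetical dataset name:)

    zfs set xattr=sa tank/ceph-osd0   # store xattrs as system attributes instead of hidden directories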
[11:44] <sherry> your client has: caps: [mds] allow caps: [mon] allow * caps: [osd] allow *
[11:45] * michalefty (~micha@p4FC9BA17.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[11:46] <sherry> I thought that might be related to cephx, then I removed authentication, but that didn't help either! The funny thing about it is that it removes just some of the objects from the pool!!!
[11:50] * xinyi_ (~xinyi@corp-nat.peking.corp.yahoo.com) has joined #ceph
[11:51] <sherry> The last thing that I can think of is that I have multiple directories in one and each of them is directed to a specific pool, I really appreciate if anyone could give a clue :(
[11:54] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) Quit (Read error: Operation timed out)
[11:55] * Kioob (~kioob@sal69-4-78-192-172-15.fbxo.proxad.net) has joined #ceph
[11:56] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[11:56] * xinyi (~xinyi@2406:2000:ef96:3:e59f:7565:1c0:1421) Quit (Ping timeout: 480 seconds)
[12:00] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:05] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) has joined #ceph
[12:10] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:20] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[12:25] * vbellur (~vijay@209.132.188.8) Quit (Quit: Leaving.)
[12:26] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[12:31] <tziOm> loicd, no, it does not work without --activate-key even though it's the default key defined in ceph.conf
[12:38] <loicd> does /var/lib/ceph/bootstrap-osd/ceph.keyring exist tziOm ?
[12:39] <tziOm> no
[12:39] <loicd> it should but it depends on how you setup the cluster on the machine
[12:39] <loicd> how did you do it ?
[12:40] <tziOm> manual
[12:40] <loicd> tziOm: like make install ?
[12:40] <tziOm> I made a client.bootstrap-osd keyring
[12:40] <tziOm> caps mon = "allow profile bootstrap-osd"
[12:41] <tziOm> the key is in /etc/ceph/keyring.admin and it uses the key just fine..
[12:41] <tziOm> and it creates auth for osd.X from it..
[12:41] <tziOm> loicd, its debian package, but I mean the cluster deploy is manual
[12:42] <loicd> ok
[12:43] <tziOm> shouldnt that be ok, to copy client.bootstrap-osd key to new osd (and ceph.conf) and then do disk prepare and disk activate manually with this --activate-key ?
[12:44] <loicd> it's interesting to do things the hard way, sometime you find traps ;-)
[12:44] <tziOm> that why I do it ;)
[12:44] <loicd> what if you add the [client.bootstrap-osd] key in the /var/lib/ceph/bootstrap-osd/ceph.keyring
[12:44] <loicd> instead of /etc/ceph/keyring.admin ? does that work better ?
[12:45] * loicd just trying to narrow down the problem
[12:45] <tziOm> as a sidenote, ceph-disk should bail out early with a nice warning if it cant find client.bootstrap-osd (and the name of the key should be --definable)
[12:45] <loicd> tziOm: +1
[12:46] <loicd> [client.bootstrap-osd]
[12:46] <loicd> key = AQC+CHVSePG9NxAANYWZzFxj3xY4W4PvhUUA==
[12:46] <loicd> caps mon = "allow profile bootstrap-osd"
[12:47] <tziOm> thats my caps
[12:47] <loicd> your /var/lib/ceph/bootstrap-osd/ceph.keyring contains something like that, right ?
[12:47] <tziOm> I dont have that keyring, but I will make one and see..
[12:47] <loicd> yes : that will tell us if there is a problem with the handling of the pathname
[12:47] <loicd> if it still fails, it means it's something else
[12:54] * markbby (~Adium@168.94.245.2) has joined #ceph
[12:56] <tziOm> it fails
[12:56] <tziOm> monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[12:56] <tziOm> 2014-06-19 12:56:24.112817 7f089d2a3700 0 librados: osd.24 initialization error (2) No such file or directory
[12:56] <tziOm> Error connecting to cluster: ObjectNotFound
[12:56] <tziOm> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.24 --keyring=/etc/ceph/keyring.osd.24 osd crush create-or-move -- 24 0.91 host=cdc01n02 root=default'
[12:56] <tziOm> ceph-disk: Error: ceph osd start failed: Command '['/usr/sbin/service', 'ceph', 'start', 'osd.24']' returned non-zero exit status 1
[12:57] <tziOm> # cat /var/lib/ceph/bootstrap-osd/ceph.keyring
[12:57] <tziOm> [client.bootstrap-osd]
[12:57] <tziOm> key = AQAmcKFTOKANMxAAOgkqD6ZRI2tvnvqQGR77Bw==
[12:57] <tziOm> caps mon = "allow profile bootstrap-osd"
[12:57] <tziOm> and that key is extracted by ceph auth get ...
[12:58] <tziOm> loicd..
[12:58] * loicd reading
[12:59] <loicd> where does --keyring=/etc/ceph/keyring.osd.24 come from ?
[12:59] <tziOm> its keyring setting under [osd] in ceph.conf
[13:00] <tziOm> ah.. there it worked
[13:00] <loicd> oh ? how did you do it ?
[13:00] <tziOm> so, if you have a keyring config setting in ceph.conf it will not work
[13:01] <loicd> well, it will work but for the bootstrap phase it's a little tricky
[13:01] <tziOm> I first removed keyring under [osd] ... but then it used /etc/ceph/keyring.admin keyring under [global] .. so I removed..
[13:01] <tziOm> and then bootstrap worked.
[13:01] <loicd> tziOm: do you have a problem using the default path names ?
[13:02] <loicd> I mean, is there a particular reason why you need to explicitly set the pathnames of the keyrings ?
[13:02] <tziOm> loicd, I dont have a problem with it.. but setting osd data to something else than osd data = /var/lib/ceph/osd/ceph-$id makes ceph-disk fail
[13:02] <tziOm> some of ceph-disk takes the setting into account.. some not, so it's clearly a bug
[13:03] <tziOm> loicd, ah.. the pathnames for the key.. no specific reason, but it should work, right? :)
[13:03] <loicd> tziOm: if you change the standard locations of files, I recommend you do everything manually, using ceph-osd or ceph-mon and not even try using ceph-disk
[13:03] <loicd> your approach is to use the strict minimum, I assume ;-)
[13:04] <tziOm> loicd, here I disagree with you, ceph-disk should understand ceph.conf
[13:04] <loicd> and ceph-disk is a layer on top of ceph-osd . And it makes a few assumption about path names. The same is true for the init scripts.
[13:04] <tziOm> (and it does use ceph-conf
[13:05] <loicd> tziOm: it does understand ceph.conf and there is a way to make it work. But it will require that you precisely understand the ceph-disk / init scripts assumptions. And in my opinion it will be easier on you if you just do things manually and not use them at all. But it's really up to you ;-)
[13:05] <tziOm> defaults are nice, but config settings are there to override defaults)
[13:06] <loicd> tziOm: they will override defaults. But if you try inconsistent combinations, you will run into a dead end. And it will be a little tricky to figure out why.
[13:06] <tziOm> loicd, sure, but the partition tagging part of the manual deployment is not really well^D^D^D documented
[13:06] <tziOm> ..unless code is docs
[13:07] <loicd> tziOm: the documentations could be better in this regard. It would be nice if you have time to patch it with the expertise you just acquired, it will be precious to others trying the same approach. Most users (me included, I confess ;-) are happy with the higher levels and the default values ;-)
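(The combination that bit tziOm, spelled out as ceph.conf; the commented-out values are the stock defaults that ceph-disk and the init script assume:)

    [osd]
        ;keyring = /etc/ceph/keyring.osd.$id      ; overriding this breaks ceph-disk's bootstrap,
                                                  ; which expects $osd_data/keyring
        ;osd data = /var/lib/ceph/osd/ceph-$id    ; changing this likewise confuses ceph-disk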
[13:07] * nhm (~nhm@74.203.127.5) has joined #ceph
[13:07] * ChanServ sets mode +o nhm
[13:08] <tziOm> loicd, I will perhaps make a patch for ceph-disk, taking the relevant ceph.conf settings into account, perhaps with a warning that default settings are overwritten, and all warranties lost ;)
[13:09] <loicd> ahahah
[13:10] * shang (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[13:10] <tziOm> I made ceph-osd work in just under 15Mb bzipped (pxe) ..
[13:13] <loicd> cute !
[13:14] <sherry> Any idea why I can't remove my cold storage pool > pool cold-pool does not exist error 16: (16) Device or resource busy
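(EBUSY on pool deletion usually means the pool is still referenced, either as a CephFS data pool or as part of a cache tier; a hedged sketch of detaching first, assuming cold-pool is the backing pool and hot-pool its cache:)

    ceph mds remove_data_pool <pool-id>      # if it is still a cephfs data pool
    ceph osd tier remove-overlay cold-pool   # if it is still wired into a tier
    ceph osd tier remove cold-pool hot-pool
    ceph osd pool delete cold-pool cold-pool --yes-i-really-really-mean-it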
[13:16] * haomaiwang (~haomaiwan@119.6.74.137) has joined #ceph
[13:17] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[13:20] * morse (~morse@supercomputing.univpm.it) Quit (Read error: No route to host)
[13:22] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[13:24] * drankis (~drankis__@89.111.13.198) has joined #ceph
[13:26] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[13:28] * vbellur (~vijay@122.178.201.12) has joined #ceph
[13:37] * zhaochao (~zhaochao@124.205.245.26) has left #ceph
[13:41] * xinyi_ (~xinyi@corp-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[13:41] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[13:43] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[13:43] * haomaiwang (~haomaiwan@119.6.74.137) Quit (Ping timeout: 480 seconds)
[13:44] * kosmas (~kosmasgia@capra.lib.uoc.gr) has joined #ceph
[13:45] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[13:46] * kosmas (~kosmasgia@capra.lib.uoc.gr) has left #ceph
[13:47] * i_m (~ivan.miro@gbibp9ph1--blueice2n2.emea.ibm.com) has joined #ceph
[13:48] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[13:49] * ade (~abradshaw@dslb-092-078-248-076.pools.arcor-ip.net) has joined #ceph
[13:49] * xinyi (~xinyi@vpn-nat.peking.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[13:50] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[13:54] * nhm (~nhm@74.203.127.5) Quit (Ping timeout: 480 seconds)
[13:54] <tziOm> loicd - a check for hdparm (and other executables, found via which, which should also check that the file is executable) would be good..
[13:56] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[13:56] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Read error: Connection reset by peer)
[13:58] * wschulze (~wschulze@12.7.204.3) has joined #ceph
[13:59] <tziOm> loicd, also, even tho ceph-disk activate succeeds, I still get this:
[13:59] <tziOm> 2014-06-19 13:57:50.514959 7f5a7b1f6780 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected f28ad726-b51c-4a99-b2a8-f25d96501efb, invalid (someone else's?) journal
[13:59] <tziOm> 2014-06-19 13:57:50.556660 7f5a7b1f6780 -1 filestore(/var/lib/ceph/tmp/mnt.zk2dIx) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[13:59] <tziOm> 2014-06-19 13:57:50.589121 7f5a7b1f6780 -1 created object store /var/lib/ceph/tmp/mnt.zk2dIx journal /var/lib/ceph/tmp/mnt.zk2dIx/journal for osd.48 fsid fd594f3b-d6ca-4267-bb45-138eec9c8629
[13:59] <tziOm> 2014-06-19 13:57:50.589193 7f5a7b1f6780 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.zk2dIx/keyring: can't open /var/lib/ceph/tmp/mnt.zk2dIx/keyring: (2) No such file or directory
[13:59] <tziOm> 2014-06-19 13:57:50.589495 7f5a7b1f6780 -1 created new key in keyring /var/lib/ceph/tmp/mnt.zk2dIx/keyring
[13:59] <tziOm> added key for osd.48
[13:59] <tziOm> === osd.48 ===
[14:00] <tziOm> create-or-move updating item name 'osd.48' weight 0.91 at location {host=cdc01n02,root=default} to crush map
[14:00] <tziOm> Starting Ceph osd.48 on cdc01n02...
[14:00] <tziOm> starting osd.48 at :/0 osd_data /var/lib/ceph/osd/ceph-48 /var/lib/ceph/osd/ceph-48/journal
[14:00] * jordanP (~jordan@185.23.92.11) has joined #ceph
[14:02] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Read error: Operation timed out)
[14:09] * wschulze (~wschulze@12.7.204.3) Quit (Quit: Leaving.)
[14:10] * steveeJ (~junky@client156.amh.kn.studentenwohnheim-bw.de) has joined #ceph
[14:11] * julian_ (~julianwa@125.70.135.10) Quit (Quit: afk)
[14:15] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[14:17] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[14:19] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:29] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[14:32] * themgt (~themgt@c-76-104-28-47.hsd1.va.comcast.net) has joined #ceph
[14:34] <loicd> looks like the ceph-osd mkfs went wrong and the keyring file was not installed for some reason
[14:35] <loicd> tziOm: you're going to have a lot of fun discovering the inner logic of ceph ;-)
[14:36] * Cube (~Cube@tacocat.concubidated.com) has joined #ceph
[14:37] * Cube (~Cube@tacocat.concubidated.com) Quit (Remote host closed the connection)
[14:39] * huangjun (~kvirc@111.174.238.55) Quit (Ping timeout: 480 seconds)
[14:41] * primechuck (~primechuc@69.170.148.179) has joined #ceph
[14:42] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[14:43] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[14:48] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[14:54] * sleinen1 (~Adium@2001:620:0:46:854a:d163:e1ff:a3a9) has joined #ceph
[14:58] * sleinen (~Adium@2001:620:0:26:5152:88dd:309d:4762) Quit (Ping timeout: 480 seconds)
[14:58] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[14:59] * ade (~abradshaw@dslb-092-078-248-076.pools.arcor-ip.net) Quit (Remote host closed the connection)
[15:03] <loicd> http://ceph.com/docs/master/ is down / 404 . houkouonchi-work do you know anything about it ?
[15:05] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[15:06] * jtangwk (~Adium@gateway.tchpc.tcd.ie) Quit (Quit: Leaving.)
[15:11] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[15:13] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[15:13] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[15:13] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[15:14] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[15:16] <JayJ> Do anyone recommend me a guide/writeup to configure openstack cinder over Ceph?
[15:16] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Quit: Leaving)
[15:17] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[15:18] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[15:21] * mourgaya (~kvirc@80.124.164.139) has joined #ceph
[15:22] <mourgaya> hi, ceph doc master is not available, forbidden access
[15:22] <loicd> mourgaya: yes :-)
[15:22] <loicd> I don't know more about it though
[15:23] <mourgaya> so I will wait, perhaps an update !
[15:24] <stj> i had a commit merged into the docs this morning... I wonder if I broke it :)
[15:24] <loicd> ahahah
[15:24] <stj> was just a single word change though, so probably not :)
[15:25] * ade (~abradshaw@dslb-092-078-248-076.pools.arcor-ip.net) has joined #ceph
[15:25] <mourgaya> stj: what an amazing word!
[15:25] <loicd> https://github.com/sjahl/ceph/commit/d2e852e2e7aafe01ffb1a3fd23a2d3714c4efe45
[15:25] <stj> that's it!
[15:25] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[15:25] <loicd> the word is "access" and the error is "forbidden access" ... some voodoo is in play
[15:26] <stj> dark, evil voodoo
[15:26] * loicd sacrifices a chicken
[15:26] <stj> :-P
[15:26] <loicd> !norris stj
[15:26] <kraken> stj can access private methods.
[15:26] <mourgaya> :-)
[15:26] * loicd scared by the "access" invasion
[15:27] <leseb> ceph.com/docs/master/ looks down :/ eu.ceph.com is still up though
[15:27] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[15:27] <loicd> leseb: it's stj fault :-P
[15:27] <Vacum> http://web.archive.org/web/20140604134538/http://ceph.com/docs/master/
[15:27] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[15:27] <mourgaya> Vacum: balance
[15:27] <leseb> loicd: ok :)
[15:28] <stj> the build succeeded on my laptop! closed:worksforme ;-)
[15:28] <Vacum> balance?
[15:29] <mourgaya> Vacum: in french in the text
[15:29] <mourgaya> Vacum: but is for loic
[15:32] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[15:37] <mo-> I may have a situation here. (old cuttlefish release). all 3 monitors have died and I believe their stores all got corrupted because they all say 0 OSDs up/in and that theyre not in the monmap. i.e. I consider all the monstores gone.
[15:37] <mo-> theres no way to extract the CRUSH-map from the still running OSDs (or rbd clients) and feed osdmap+pgmap back into the mon to start a new, but working mon store?
[15:38] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Read error: Connection reset by peer)
[15:38] * Shishire (~Shishire@0001844c.user.oftc.net) has joined #ceph
[15:38] <mo-> s/store?/store, right?/
[15:38] <kraken> mo- meant to say: theres no way to extract the CRUSH-map from the still running OSDs (or rbd clients) and feed osdmap+pgmap back into the mon to start a new, but working mon store, right??
[15:39] <Shishire> I'm having trouble accessing the documentation. I'm getting 403's when I try to hit ceph.com/docs/master.
[15:40] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[15:40] <stj> Shishire: yeah, we're seeing the same thing. I just emailed John Wilkins about it...
[15:42] <stj> something odd is definitely afoot... the web server seems to also be getting a 404 when trying to call the ErrorDocument
[15:42] * shang (~ShangWu@211.21.156.86) has joined #ceph
[15:45] * rpowell (~rpowell@128.135.219.215) has joined #ceph
[15:46] * dmsimard_away is now known as dmsimard
[15:48] <tziOm> I cant get my cluster into a healthy state .. wheezy and 0.80.1
[15:49] <tziOm> placement rules and size is fine
[15:49] <tziOm> and updated leveldb to 1.9
[15:49] * shang (~ShangWu@211.21.156.86) Quit (Quit: Ex-Chat)
[15:50] <tziOm> HEALTH_WARN 50 pgs peering; 107 pgs stuck inactive; 107 pgs stuck unclean
[15:50] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:51] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:51] * dr_whax (drwhax@devio.us) has joined #ceph
[15:51] <dr_whax> hoi, seems ceph.com/docs/ is having problems, I get forbidden.
[15:54] * dmsimard (~dmsimard@198.72.122.121) Quit (Quit: Signed off)
[15:56] * dmsimard_away (~dmsimard@198.72.123.142) has joined #ceph
[15:56] * dmsimard_away is now known as dmsimard
[15:57] * dmsimard (~dmsimard@198.72.123.142) Quit ()
[15:57] * dmsimard_away (~dmsimard@198.72.123.142) has joined #ceph
[15:58] * dmsimard_away is now known as dmsimard
[15:58] * dmsimard (~dmsimard@198.72.123.142) Quit ()
[16:00] * ajazdzewski (~quassel@lpz-66.sprd.net) Quit (Remote host closed the connection)
[16:01] * dmsimard_away (~dmsimard@198.72.123.142) has joined #ceph
[16:01] * dmsimard_away is now known as dmsimard
[16:02] * dmsimard (~dmsimard@198.72.123.142) Quit ()
[16:02] * dmsimard_away (~dmsimard@198.72.123.142) has joined #ceph
[16:03] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[16:03] * KB (~oftc-webi@cpe-74-137-252-159.swo.res.rr.com) has joined #ceph
[16:03] * dmsimard_away is now known as dmsimard
[16:06] <mourgaya> dr_whax: yes, stj is working on this
[16:06] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[16:07] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) Quit (Quit: Konversation terminated!)
[16:08] * shang (~ShangWu@211.21.156.86) has joined #ceph
[16:08] * Cube (~Cube@tacocat.concubidated.com) has joined #ceph
[16:08] <stj> well, I can't fix whatever the problem is :) I just made a small change to the docs
[16:09] <stj> assuming something went awry with the deploy after the change was merged in
[16:10] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Read error: Operation timed out)
[16:12] <loicd> http://gitbuilder.sepia.ceph.com/gitbuilder-doc/log.cgi?log=f9eb77b14daac3a4db39b00c3b42da1dd5412ab6
[16:12] <loicd> failed: No space left on device (28)
[16:12] <loicd> that's why
[16:12] <stj> ahh
[16:13] * huangjun (~kvirc@117.151.47.192) has joined #ceph
[16:14] <loicd> I don't have access to the machine though, I guess it really is a problem for houkouonchi-work to solve ;-)
[16:15] <mo-> does anybody know real quick if I can get an OSD to tell me which pgmap its working with? via admin-daemon (socket)
[16:18] <mourgaya> loicd: so you can resize the rbd image and the lvm level!
[16:19] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[16:19] <loicd> :-)
[16:19] <mo-> trying to find out whether a mon store backup I have is recent enough (up to date pgmap) :/
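(One way to answer mo-'s question over the admin socket; whether "status" is available depends on the ceph version, and "help" lists what the socket actually supports:)

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok status   # reports oldest_map/newest_map epochs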
[16:20] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[16:23] <seapasulli> Anyone able to access ceph.com/docs? Mine 403s
[16:24] <seapasulli> oop nm it just worked
[16:24] <seapasulli> wow
[16:24] <stj> i guess the partition the docs were on ran out of space on the last deploy
[16:24] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:25] <loicd> thanks to whoever fixed the docs ;-)
[16:26] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[16:26] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit ()
[16:26] <stj> indeed
[16:26] <kraken> http://i.imgur.com/bQcbpki.gif
[16:27] <stj> hah
[16:28] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[16:28] <mourgaya> stj: great it works
[16:29] * rturk|afk is now known as rturk
[16:29] <dr_whax> thanks you folks!
[16:31] * dr_whax (drwhax@devio.us) Quit (Quit: leaving)
[16:33] <mo-> cephx: ceph_decode_ticket could not get service secret for service_id=auth secret_id=2
[16:33] <mo-> what am I looking at?
[16:34] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[16:35] * sleinen (~Adium@2001:620:0:26:c838:74c0:2f64:39fb) has joined #ceph
[16:41] * sleinen1 (~Adium@2001:620:0:46:854a:d163:e1ff:a3a9) Quit (Ping timeout: 480 seconds)
[16:45] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[16:46] * sleinen (~Adium@2001:620:0:26:c838:74c0:2f64:39fb) Quit (Quit: Leaving.)
[16:47] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:50] * bkopilov (~bkopilov@213.57.19.82) has joined #ceph
[16:52] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[16:56] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[16:56] * ade (~abradshaw@dslb-092-078-248-076.pools.arcor-ip.net) Quit (Quit: Too sexy for his shirt)
[16:56] * shang (~ShangWu@211.21.156.86) Quit (Ping timeout: 480 seconds)
[16:58] <tziOm> Any clues on how I can figure out why I have pgs stuck unclean and stuck inactive?
[16:59] <tziOm> I did read somewhere it could be a kernel < 3.10 issue, but if this is true, how come?
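(The standard first diagnostics for stuck pgs, all plain CLI:)

    ceph health detail            # names the stuck pgs and says why
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean
    ceph pg <pgid> query          # per-pg detail; substitute one of the stuck ids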
[17:01] * nhm (~nhm@nat-pool-rdu-u.redhat.com) has joined #ceph
[17:01] * ChanServ sets mode +o nhm
[17:03] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[17:03] * Infitialis (~infitiali@194.30.182.18) Quit (Remote host closed the connection)
[17:05] * nljmo_ (~nljmo@173-11-110-227-SFBA.hfc.comcastbusiness.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[17:05] * sjm (~sjm@nat-pool-rdu-u.redhat.com) has joined #ceph
[17:05] * haomaiwang (~haomaiwan@112.193.131.94) has joined #ceph
[17:06] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:07] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[17:09] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[17:11] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[17:13] * haomaiwang (~haomaiwan@112.193.131.94) Quit (Ping timeout: 480 seconds)
[17:14] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[17:14] * mourgaya (~kvirc@80.124.164.139) Quit (Remote host closed the connection)
[17:15] * mourgaya (~kvirc@80.124.164.139) has joined #ceph
[17:15] <mo-> http://pastebin.com/UBibhL2d has anybody seen something like that before? I didn't set this up, just trying to work out how I can make the cluster recover, but this osd tree doesn't look right at all
[17:16] * scuttlemonkey (~scuttlemo@nat-pool-rdu-t.redhat.com) has joined #ceph
[17:16] * ChanServ sets mode +o scuttlemonkey
[17:16] <mo-> do I just reweight osd0 to 1? why does osd1 appear twice, I dont even
[17:17] * allig8r (~allig8r@128.135.219.116) Quit (Read error: Connection reset by peer)
[17:18] * i_m (~ivan.miro@gbibp9ph1--blueice2n2.emea.ibm.com) Quit (Quit: Leaving.)
[17:20] * Infitialis (~infitiali@194.30.182.18) Quit (Remote host closed the connection)
[17:22] <brad_mssw> I've got a 3 node ceph cluster with vm images that I'm just setting up, with a pool with a size(replica) of 2. I'm monitoring disk use across the 3 servers, and it appears disk space is being consumed roughly evenly rather than just 2 servers. Is ceph's architecture such that it actually stripes the data, more like a RAID does than, say, what glusterfs does?
[17:24] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:32] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[17:35] * scuttlemonkey (~scuttlemo@nat-pool-rdu-t.redhat.com) has left #ceph
[17:35] * allig8r (~allig8r@128.135.219.116) has joined #ceph
[17:35] * scuttlemonkey|afk (~scuttlemo@nat-pool-rdu-t.redhat.com) has joined #ceph
[17:37] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[17:38] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[17:39] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[17:42] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[17:43] * scuttlemonkey|afk (~scuttlemo@nat-pool-rdu-t.redhat.com) Quit (Quit: Coyote finally caught me)
[17:44] * scuttlemonkey|afk (~scuttlemo@nat-pool-rdu-t.redhat.com) has joined #ceph
[17:45] * pressureman (~pressurem@62.217.45.26) Quit (Quit: Ex-Chat)
[17:45] * scuttlemonkey|afk (~scuttlemo@nat-pool-rdu-t.redhat.com) Quit ()
[17:46] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[17:46] * scuttlemonkey|afk (~scuttlemo@nat-pool-rdu-t.redhat.com) has joined #ceph
[17:46] * jtangwk (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[17:49] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:50] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[17:53] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:53] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[17:54] * scuttlemonkey|afk (~scuttlemo@nat-pool-rdu-t.redhat.com) Quit (Quit: Coyote finally caught me)
[17:56] * Infitialis (~infitiali@194.30.182.18) Quit (Remote host closed the connection)
[17:56] * nljmo (~nljmo@64.125.103.162) has joined #ceph
[17:56] * sjm (~sjm@nat-pool-rdu-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:59] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[17:59] * Infitialis (~infitiali@194.30.182.18) Quit (Remote host closed the connection)
[18:04] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[18:06] * JC (~JC@nat-pool-rdu-t.redhat.com) has joined #ceph
[18:06] * masta (~masta@190.7.213.210) has joined #ceph
[18:06] * rturk is now known as rturk|afk
[18:08] * Cube (~Cube@tacocat.concubidated.com) Quit (Remote host closed the connection)
[18:10] * scuttlemonkey|afk (~scuttlemo@nat-pool-rdu-t.redhat.com) has joined #ceph
[18:12] * scuttlemonkey|afk (~scuttlemo@nat-pool-rdu-t.redhat.com) Quit ()
[18:14] * JC (~JC@nat-pool-rdu-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:15] * gregmark (~Adium@68.87.42.115) has joined #ceph
[18:15] <bens> is your crushmap right?
[18:16] * scuttlemonkey|afk (~scuttlemo@nat-pool-rdu-t.redhat.com) has joined #ceph
[18:18] * scuttlemonkey|afk (~scuttlemo@nat-pool-rdu-t.redhat.com) Quit ()
[18:21] * rturk|afk is now known as rturk
[18:23] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[18:23] * Pedras (~Adium@50.185.218.255) has joined #ceph
[18:25] * nljmo (~nljmo@64.125.103.162) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[18:26] * rturk (~rturk@nat-pool-rdu-t.redhat.com) Quit (Quit: Coyote finally caught me)
[18:26] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) has joined #ceph
[18:26] * rturk|afk is now known as rturk
[18:26] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:29] * nljmo (~nljmo@64.125.103.162) has joined #ceph
[18:31] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[18:31] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[18:31] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[18:32] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) Quit ()
[18:40] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:42] * rturk is now known as rturk|afk
[18:43] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (Ping timeout: 480 seconds)
[18:48] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:53] * bkopilov (~bkopilov@213.57.19.82) Quit (Remote host closed the connection)
[18:54] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Operation timed out)
[18:56] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[18:58] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:05] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[19:08] * rturk|afk is now known as rturk
[19:14] * The_Bishop_ (~bishop@e178113048.adsl.alicedsl.de) has joined #ceph
[19:14] <mo-> I am looking to fix up a cluster that has one OSD with a weight of -0.00003052 (no I'm not kidding). I don't know why or how, but I want to know: is it safe to reweight that osd to 1 (where it should be), or is that number so weird that ceph may throw a tantrum
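(The command that does what mo- describes; "safe" is relative, since any crush reweight triggers data movement:)

    ceph osd crush reweight osd.0 1.0   # osd id illustrative; expect rebalancing afterwards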
[19:15] * The_Bishop_ (~bishop@e178113048.adsl.alicedsl.de) Quit ()
[19:15] * The_Bishop_ (~bishop@e178113048.adsl.alicedsl.de) has joined #ceph
[19:19] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[19:21] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[19:21] * The_Bishop (~bishop@e179162065.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[19:28] * nolan_ (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[19:32] * aldavud (~aldavud@213.55.184.162) has joined #ceph
[19:35] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[19:35] * Sysadmin88 (~IceChat77@94.4.22.173) has joined #ceph
[19:36] * scuttle|afk is now known as scuttlemonkey
[19:43] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[19:45] * KaZeR (~kazer@64.201.252.132) Quit (Ping timeout: 480 seconds)
[19:48] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:52] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[19:55] * rturk is now known as rturk|afk
[19:55] * mo- (~mo@2a01:4f8:141:3264::3) Quit (Quit: leaving)
[20:01] <seapasulli> How can I tell how much usable storage a pool has total? When I do ceph -s it seems to report total storage space without replication. How can I find the maximum amount of space a pool can consume? I know I can guess by taking ceph status and divide by 3 but is there any other way?
[20:02] * thomnico (~thomnico@2a01:e35:8b41:120:2891:86a8:c9a9:6075) Quit (Ping timeout: 480 seconds)
[20:04] <ponyofdeath> hi, does the cache tier only work with a single backend pool?
[20:04] <scuttlemonkey> seapasulli: have you messed w/ the new 'ceph df' command at all?
[20:05] * KaZeR (~kazer@64.201.252.132) Quit (Read error: Operation timed out)
[20:05] <scuttlemonkey> new-ish
[20:05] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[20:10] <scuttlemonkey> ponyofdeath: not sure I understand the question
[20:10] <scuttlemonkey> you want one cache pool for multiple backend pools?
[20:14] * rturk|afk is now known as rturk
[20:15] * Sysadmin88 (~IceChat77@94.4.22.173) Quit (Read error: Connection reset by peer)
[20:18] * sigsegv (~sigsegv@188.26.160.142) has joined #ceph
[20:23] * markbby (~Adium@73.37.159.113) has joined #ceph
[20:27] * thomnico (~thomnico@2a01:e35:8b41:120:2891:86a8:c9a9:6075) has joined #ceph
[20:29] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[20:29] * leseb (~leseb@185.21.174.206) has joined #ceph
[20:31] * Infitialis (~infitiali@5ED48E69.cm-7-5c.dynamic.ziggo.nl) has joined #ceph
[20:31] * themgt (~themgt@c-76-104-28-47.hsd1.va.comcast.net) Quit (Quit: themgt)
[20:32] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[20:32] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[20:32] * markbby1 (~Adium@168.94.245.3) has joined #ceph
[20:33] * markbby (~Adium@73.37.159.113) Quit (Ping timeout: 480 seconds)
[20:33] * aldavud (~aldavud@213.55.184.162) Quit (Ping timeout: 480 seconds)
[20:34] * markbby1 (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[20:35] <seapasulli> scuttlemonkey: I was using rados df which seems to be roughly the same thing
[20:37] <scuttlemonkey> seapasulli: ahh, ok
[20:39] <seapasulli> I have 20 nodes, each with 5 x 4TB disks, so I have roughly 360 - 370 TB of space. The pool on the other hand has size = 3, so that means I would have 121TB of usable space, right?
[20:39] <scuttlemonkey> I know there was talk about trying to figure it out...especially via the API for calamari
[20:39] <scuttlemonkey> not sure what came of it though
[20:39] <seapasulli> so it should just be total space / size number = usable space. right?
[20:39] <scuttlemonkey> well, the hard part is that the cluster often has pools with differing replication levels
[20:40] <brad_mssw> i've got a 3-node ceph cluster right now, each running a monitor. When configuring kvm to use a RDB backend, do I need to specify all monitor nodes? If not, what happens if the monitor node goes down _after_ kvm initializes, does it auto-learn the other nodes?
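(For context on brad_mssw's question: librbd takes its initial monitor list from ceph.conf, and once any one monitor answers, the client learns the full current monmap, so it survives later monitor failures as long as one mon was reachable at connect time. A qemu-style rbd spec for reference, pool/image names hypothetical:)

    qemu-img info rbd:rbd/vm1:id=admin:conf=/etc/ceph/ceph.conf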
[20:40] <scuttlemonkey> and depending on which one fills up faster it'll change the "overall storage"
[20:40] <scuttlemonkey> but yes, your napkin sketch is fine
[20:41] <seapasulli> but for total space and total space left. The replication level of other pools shouldn't matter right? I mean if you are reporting per pool then each pool can consume a maximum of the total space / size of pool
[20:41] * Pauline (~middelink@bigbox.ch.polyware.nl) Quit (Quit: Leaving)
[20:41] <seapasulli> ah score! heheh
[20:42] <seapasulli> i'm just making sure my logic is sound.. I only watch cartoons... and only read coloring books.
[20:42] <seapasulli> Thanks scuttlemonkey !!
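(seapasulli's napkin math, written out; the raw figure is 20 nodes x 5 osds x 4 TB, minus filesystem formatting overhead:)

    20 * 5 * 4 TB            = 400 TB raw (~360-370 TB after formatting overhead)
    usable for a size=3 pool = ~365 TB / 3 = ~121 TB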
[20:45] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Remote host closed the connection)
[20:48] * rendar (~I@host46-177-dynamic.1-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:51] * rendar (~I@host46-177-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[20:55] * ScOut3R (~ScOut3R@51B614F9.dsl.pool.telekom.hu) has joined #ceph
[20:56] * hijacker (~hijacker@213.91.163.5) Quit (Ping timeout: 480 seconds)
[21:02] * diegows (~diegows@190.190.5.238) has joined #ceph
[21:09] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) has joined #ceph
[21:09] * haomaiwang (~haomaiwan@112.193.130.62) has joined #ceph
[21:10] * markbby (~Adium@168.94.245.3) has joined #ceph
[21:18] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[21:27] * bjornar (~bjornar@ti0099a430-1124.bb.online.no) has joined #ceph
[21:28] * rturk is now known as rturk|afk
[21:28] <bjornar> I am struggling with pgs stuck in unclean and inactive
[21:31] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[21:31] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[21:38] * tracphil (~tracphil@130.14.71.217) Quit (Quit: Lost terminal)
[21:43] * rturk|afk is now known as rturk
[21:44] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[21:51] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[21:59] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[22:03] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[22:06] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[22:10] <seapasulli> Can I ask what the maximum throughput people have seen in a ceph cluster.
[22:10] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[22:11] * thomnico (~thomnico@2a01:e35:8b41:120:2891:86a8:c9a9:6075) Quit (Ping timeout: 480 seconds)
[22:16] <seapasulli> I heard that because of crush's cpu draw, the maximum speed of a ceph cluster can only be 2.5GB/s. This sounds incorrect to me. Has anyone heard anything like this?
[22:16] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:18] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit (Ping timeout: 480 seconds)
[22:18] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[22:20] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) Quit (Read error: Operation timed out)
[22:20] <cookednoodles> that sounds like the most random thing ever
[22:20] <cookednoodles> read, write, mixed, what size, what gear
[22:21] <seapasulli> yeah indeed. I asked about it in print somewhere to find out how the cluster was setup, with what tech, etc.
[22:22] <seapasulli> To be fair it was a salesperson that relayed the info but I would still like to know the max throughput someone has achieved.
[22:24] * Infitialis (~infitiali@5ED48E69.cm-7-5c.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[22:32] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[22:35] * ScOut3R (~ScOut3R@51B614F9.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[22:35] * haomaiwang (~haomaiwan@112.193.130.62) Quit (Ping timeout: 480 seconds)
[22:36] * ScOut3R (~ScOut3R@51B614F9.dsl.pool.telekom.hu) has joined #ceph
[22:36] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[22:37] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[22:37] * Infitialis (~infitiali@5ED48E69.cm-7-5c.dynamic.ziggo.nl) has joined #ceph
[22:39] * plantain (~plantain@106.187.96.118) Quit (Remote host closed the connection)
[22:39] * plantain (~plantain@106.187.96.118) has joined #ceph
[22:44] * ScOut3R (~ScOut3R@51B614F9.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[22:44] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[22:45] * bandrus (~Adium@nat-pool-rdu-t.redhat.com) has joined #ceph
[22:45] <brad_mssw> I'm trying to get cephfs to work, but mds won't start, it says monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication ERROR: failed to authenticate: (95) Operation not supported
[22:45] <brad_mssw> what keyring is it looking for?
[22:47] * rturk is now known as rturk|afk
[22:47] * bandrus (~Adium@nat-pool-rdu-t.redhat.com) Quit ()
[22:50] * rotbeard (~redbeard@2a02:908:df11:9480:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[22:50] * scuttlemonkey is now known as scuttle|afk
[22:51] * lyncos (~chatzilla@208.71.184.41) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 30.0/20140608211622])
[22:51] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (Remote host closed the connection)
[22:56] <seapasulli> brad_mssw: it's looking for the monitor keyring
[22:56] <seapasulli> http://ceph.com/docs/firefly/dev/mon-bootstrap/
[22:56] <seapasulli> one of the first commands in adding monitor nodes:: ceph-authtool --create-keyring /path/to/keyring --gen-key -n mon.
[22:56] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) Quit (Ping timeout: 480 seconds)
[22:56] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[22:57] <brad_mssw> I used ceph-deploy ... but it appears that didn't update my ceph.conf
[22:57] * The_Bishop_ (~bishop@e178113048.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[22:57] <brad_mssw> think that was the issue as i see a keyring did indeed get created in /var/lib/ceph/mds/...
[22:59] <brad_mssw> so it wasn't looking for the monitor keyring itself .... it was looking for the mds-specific keyring
[22:59] <brad_mssw> anyhow, looks like is working now .... need to wrap my head around when I need to manually touch ceph.conf or not :/
[23:01] <seapasulli> I never had to touch the ceph.conf to specify the mon keyring
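(A sketch of the manual equivalent of what ceph-deploy sets up for an mds, with a hypothetical id "a"; the cap set shown is the one the manual-deployment docs of this era used, so treat it as a starting point rather than gospel:)

    mkdir -p /var/lib/ceph/mds/ceph-a
    ceph auth get-or-create mds.a mds 'allow' osd 'allow *' mon 'allow rwx' \
        -o /var/lib/ceph/mds/ceph-a/keyring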
[23:02] * scuttle|afk is now known as scuttlemonkey
[23:02] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[23:03] * allsystemsarego (~allsystem@79.115.62.26) Quit (Quit: Leaving)
[23:14] <bjornar> My new cluster ends up in stuck unclean inactive state
[23:15] <seapasulli> bjornar: what does "ceph status" say?
[23:15] <bjornar> there are 72 osds up and in (3 nodes)
[23:15] <bjornar> it says: http://pastie.org/9306526
[23:16] * huangjun (~kvirc@117.151.47.192) Quit (Read error: Connection reset by peer)
[23:16] * scuttlemonkey is now known as scuttle|afk
[23:17] <seapasulli> how long has it been this way?
[23:17] * Infitialis (~infitiali@5ED48E69.cm-7-5c.dynamic.ziggo.nl) Quit ()
[23:18] * scuttle|afk is now known as scuttlemonkey
[23:18] <seapasulli> has anyone fully flooded a 20Gb network pipe with ceph? or been able to transfer 20Gb/s or near that?
[23:18] <bjornar> ok..
[23:18] <bjornar> seapasulli, its been since start
[23:18] <bjornar> but think I found the problem!
[23:18] <seapasulli> Woo nice
[23:18] <bjornar> MTU!!!
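(A quick way to confirm the kind of MTU mismatch bjornar hit, assuming 9000-byte jumbo frames; the address and interface are illustrative:)

    ping -M do -s 8972 192.168.1.54   # 8972 = 9000 minus 28 bytes of IP+ICMP headers
    ip link show eth0 | grep mtu      # every hop, switches included, must agree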
[23:18] <Pedras> seapasulli: to be clear 20 gigabits?
[23:19] <Pedras> or bytes :)
[23:19] <seapasulli> bits
[23:19] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[23:19] <Pedras> yes
[23:19] <seapasulli> Have you gone higher than 2 by chance?
[23:19] * The_Bishop_ (~bishop@2001:bf7:830:ff04:d05:6e97:d9c1:28c3) has joined #ceph
[23:19] <seapasulli> Pedras: may I ask how high you've gone?
[23:19] <Pedras> the node in question only had 2x10gbps facing the pub net
[23:20] <Pedras> this is a one off test with a particular node
[23:20] <seapasulli> so it's a single node cluster?
[23:20] <Pedras> a one node cluster
[23:20] <Pedras> it was a test of the system itself
[23:21] <Pedras> with a bunch of 10gbps clients doing a lot of cephfs with multiple (large) block sizes
[23:22] <Pedras> I only had one of those to test with
[23:22] <seapasulli> you set up your cluster to push 20G like that.
[23:23] <Pedras> the machine had a lot of drives
[23:23] <Pedras> 64, fronted by 8 high SSDs for journaling
[23:24] <Pedras> high = high end
[23:24] <Pedras> dcs3700
[23:25] * rpowell (~rpowell@128.135.219.215) Quit (Read error: Operation timed out)
[23:25] <seapasulli> yeah I figured as much
[23:25] <Pedras> ~4Amps
[23:26] <seapasulli> wow that's pretty intense. We're running 32 with 4 SSD right now so pretty close
[23:26] <Pedras> I guess we came to same ssd/disk ratio. glad to hear
[23:26] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[23:27] <Pedras> spent some time on that… wasted time on lower end stuff
[23:27] <Pedras> still, quite cheap PB/sqft
[23:33] <bjornar> Total time run: 60.001353
[23:33] <bjornar> Total reads made: 1370926
[23:33] <bjornar> Read size: 4096
[23:33] <bjornar> Bandwidth (MB/sec): 89.251
[23:33] <bjornar> Average Latency: 0.00560096
[23:33] <bjornar> Max latency: 0.051316
[23:33] <bjornar> Min latency: 0.000599
[23:33] <bjornar> ups
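(Output of that shape comes from rados bench; a plausible invocation matching the 4096-byte read size, pool name illustrative:)

    rados bench -p rbd 60 write -b 4096 --no-cleanup   # populate small objects first
    rados bench -p rbd 60 seq                          # then the read pass shown above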
[23:36] <ponyofdeath> scuttlemonkey: yup one cache pool for multiple backend pools
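(For context, each "tier add" pairs exactly one cache pool with one backing pool; a sketch with hypothetical pool names, so a single cache pool fronting several backend pools is not something this CLI shape expresses:)

    ceph osd tier add cold-pool hot-pool           # attach hot-pool as cold-pool's cache
    ceph osd tier cache-mode hot-pool writeback
    ceph osd tier set-overlay cold-pool hot-pool   # route client io through the cache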
[23:38] * scuttlemonkey is now known as scuttle|afk
[23:39] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[23:39] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[23:40] * rendar (~I@host46-177-dynamic.1-87-r.retail.telecomitalia.it) Quit ()
[23:43] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[23:44] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:45] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[23:46] <_Tass4dar> so.. which kernel version is new enough to include the erasure code stuff?
[23:47] <_Tass4dar> mounting with cephfs fails with feature mismatch due to missing erasure code bits in kernel 3.14
[23:47] <_Tass4dar> would updating to 3.15 be enough, or do i need the 3.16 release candidate?
[23:48] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.