#ceph IRC Log

IRC Log for 2015-02-10

Timestamps are in GMT/BST.

[0:00] * reed (~reed@net-93-144-229-167.cust.dsl.teletu.it) Quit (Quit: Ex-Chat)
[0:01] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[0:02] <seapasulli> anyone free to help with ec pool creation question?
[0:03] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[0:03] <seapasulli> I have 21 hosts with 30 osds each. I am trying to create an EC pool of 14/7 and the pool is stuck in undersized_degraded
[0:03] * badone (~brad@66.187.239.16) Quit (Ping timeout: 480 seconds)
[0:04] * alram (~alram@67.159.191.98) has joined #ceph
[0:05] <seapasulli> I am not really sure why as, to me, it should work
[0:05] <seapasulli> here is the pg query of one of the 3 degraded pgs:: http://paste.ubuntu.com/10149594/
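
A minimal sketch of how a 14+7 erasure-coded pool is normally set up (profile and pool names below are placeholders, and "ruleset-failure-domain" is the firefly/giant-era name for what later releases call "crush-failure-domain"). With k=14 and m=7 every placement group needs 21 OSDs, so with the default failure domain of "host" all 21 hosts must be able to take a shard, which is the usual reason such a pool ends up undersized/degraded:

    # Illustration only; profile and pool names are made up.
    ceph osd erasure-code-profile set ec-14-7 k=14 m=7 ruleset-failure-domain=host
    ceph osd erasure-code-profile get ec-14-7        # confirm k/m and the failure domain
    ceph osd pool create ecpool 4096 4096 erasure ec-14-7
    ceph pg dump_stuck unclean                       # list the PGs that cannot map to 21 OSDs
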
[0:09] <togdon> I just spent *WAY* too much time beating my head against the fact that the version of qemu-kvm (and associated tools) that ship with RDO/EL7 are not compatible with Ceph. It'd be helpful if the documentation for either RDO or Ceph mentioned this (and even more helpful if they passed along a pointer to the ovirt repo which does provide compatible packages...)
[0:09] * grepory (sid29799@id-29799.brockwell.irccloud.com) Quit (Read error: Connection reset by peer)
[0:10] * badone (~brad@66.187.239.11) has joined #ceph
[0:10] * CephTestC (~CephTestC@199.91.185.156) has joined #ceph
[0:10] * gabrtv (sid36209@id-36209.brockwell.irccloud.com) Quit (Ping timeout: 480 seconds)
[0:11] * diegows (~diegows@190.190.5.238) has joined #ceph
[0:11] * alram (~alram@67.159.191.98) Quit (Quit: leaving)
[0:12] * zack_dol_ (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[0:12] <seapasulli> wow togdon didn't know. We run ubuntu here and it seems to work.
[0:12] <seapasulli> well, with openstack
[0:13] <togdon> In 6.x there are working packages in ceph-extras... http://ceph.com/packages/ceph-extras/rpm/centos6/x86_64/
[0:14] <jwilkins> togdon: Sorry to hear that. Did you see this: http://ceph.com/docs/master/install/install-vm-cloud/#rpm-packages
[0:14] <togdon> Note that the ones from the ovirt project work fine... it's just that it's not mentioned on either RDO's site or in the ceph docs... http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[0:15] <togdon> Yes, including the part that reads "It is not needed for Fedora or RHEL 7+."
[0:17] <jwilkins> If you can send me an example for the repo I can update the docs, or you can do a pull request: http://ceph.com/docs/master/start/documenting-ceph/
[0:17] <togdon> I found the solution here: https://ask.openstack.org/en/question/59480/how-can-i-get-kvm-rpm-package-which-support-ceph-rbd-for-centos7-or-rhel-7/ but the error state that led me to look in that direction was super non-obvious
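
A rough way to check whether an installed qemu build actually has rbd support (EL7-style paths assumed; the image name is a placeholder and ceph auth must already be configured):

    # Does the qemu build list rbd among its supported formats?
    qemu-img --help | grep -wo rbd
    # Is librbd actually linked in? (Debian/Ubuntu path: /usr/bin/qemu-system-x86_64)
    ldd /usr/libexec/qemu-kvm | grep librbd
    # If support is present, probing an image over rbd should work:
    qemu-img info rbd:volumes/some-image
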
[0:17] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[0:17] * karis_ (~karis@178-156-160.dynamic.cyta.gr) Quit (Ping timeout: 480 seconds)
[0:18] * lpabon (~quassel@213.133.141.248) has joined #ceph
[0:20] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) Quit (Remote host closed the connection)
[0:28] * lpabon (~quassel@213.133.141.248) Quit (Ping timeout: 480 seconds)
[0:37] * jaank (~quassel@98.215.50.223) has joined #ceph
[0:40] <seapasulli> is there a reason ceph defaults to 2048 vs 4096 for the -i size= option for mkfs?
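
For context, the 2048 comes from the default XFS mkfs options Ceph's provisioning tools pass; the larger inode size is meant to leave room for Ceph's xattrs inline in the inode. It can be overridden via ceph.conf. A sketch, not a recommendation:

    # ceph.conf override (illustrative values):
    # [osd]
    #     osd mkfs type = xfs
    #     osd mkfs options xfs = -f -i size=2048
    # Check what an existing OSD was actually formatted with:
    xfs_info /var/lib/ceph/osd/ceph-0 | grep isize
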
[0:41] * lpabon (~quassel@213.133.141.248) has joined #ceph
[0:41] * lpabon (~quassel@213.133.141.248) Quit (Remote host closed the connection)
[0:45] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[0:48] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has left #ceph
[0:52] * nitti (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[0:54] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[0:59] * dmsimard is now known as dmsimard_away
[1:01] * shaunm (~shaunm@74.215.76.114) Quit (Read error: Connection timed out)
[1:02] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[1:02] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[1:05] * zack_dolby (~textual@nfmv001082069.uqw.ppp.infoweb.ne.jp) has joined #ceph
[1:06] * lcurtis (~lcurtis@47.19.105.250) Quit (Ping timeout: 480 seconds)
[1:13] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:16] * oms101 (~oms101@p20030057EA40BD00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:19] * vakulkar (~vakulkar@209.132.181.86) Quit (Ping timeout: 480 seconds)
[1:20] <Tene> Anyone know of any projects for storing log data in ceph? We're considering trying to move some of our production data into ceph, but we'd like to try it out with something less important first, and we've been talking about trying to improve our logging systems.
[1:21] * lalatenduM (~lalatendu@mx-b.hotelavanti.cz) Quit (Quit: Leaving)
[1:24] * oms101 (~oms101@p20030057EA225E00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:27] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[1:28] * sjm (~sjm@172.56.36.120) has joined #ceph
[1:28] * shang_ (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[1:31] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[1:32] * grepory (sid29799@id-29799.brockwell.irccloud.com) has joined #ceph
[1:37] * ircolle (~Adium@2601:1:a580:145a:41ee:ee2f:feab:2eeb) Quit (Quit: Leaving.)
[1:41] * gabrtv (sid36209@id-36209.brockwell.irccloud.com) has joined #ceph
[1:42] * jaank (~quassel@98.215.50.223) Quit (Ping timeout: 480 seconds)
[1:44] * togdon (~togdon@74.121.28.6) Quit (Quit: Textual IRC Client: www.textualapp.com)
[1:49] * jaank (~quassel@98.215.50.223) has joined #ceph
[1:50] * sputnik13 (~sputnik13@74.202.214.170) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[1:52] * bandrus (~brian@54.sub-70-211-78.myvzw.com) has joined #ceph
[1:58] * shang_ (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[2:00] * redf_ (~red@chello084112110034.11.11.vie.surfer.at) has joined #ceph
[2:01] * mikedawson (~chatzilla@98.227.179.172) has joined #ceph
[2:01] * jlavoy (~Adium@173.227.74.5) Quit (Quit: Leaving.)
[2:01] * swami1 (~swami@117.192.233.108) has joined #ceph
[2:03] <mikedawson> joshd: just getting around to fixing that volume with a missing rbd_header. Should I be able to rados -p volumes put rbd_header.missing rbd_header.good where rbd_header.good is a new clone of the same parent image with the same size?
[2:03] <mikedawson> joshd: it just hangs...
[2:04] <joshd> mikedawson: yeah, no reason you wouldn't be able to unless rados is unhealthy
[2:05] <mikedawson> I haven't messed with the omap stuff yet. Perhaps it is hanging because the 'object_prefix' is referencing the good, new volume instead of the missing volume?
[2:06] <mikedawson> When, exactly do you set the omap values? before or after the rados put?
[2:07] <mikedawson> joshd: rados seems fine, except hanging on any command that needs the missing rbd_header. Perhaps I need to just inject the good rbd_header clone with the missing rbd_header name into the PG's filesystem instead?
[2:07] <mikedawson> then do the omap stuff, perhaps
[2:07] * redf (~red@chello084112110034.11.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[2:08] <joshd> I'm not sure if write_full resets the omap values too
[2:09] <joshd> easiest to do the omap stuff after copying the good one
[2:09] <mikedawson> Considering the rados put hangs, what do you suggest?
[2:10] <mikedawson> joshd: this volume is currently mounted and running. Is that potentially the source of the hang?
[2:11] <joshd> rbd_header.good is your local file copy of the new header?
[2:12] <mikedawson> Yes. I got it with a rados get from the good volume. It is 0 bytes on disk
[2:12] <nigwil> Tene: this one stores metrics, rather than logs as such in Ceph: https://github.com/anchor/vaultaire
[2:13] <joshd> mikedawson: ok, no need to put it if it's 0 bytes. might be an old bug making it hang on a 0 byte file
[2:14] <mikedawson> joshd: 'rados -p volumes put rbd_header.missing rbd_header.good' hangs whereas 'rados -p volumes put mikes-test rbd_header.good' works
[2:14] * vicente_luchi (~smuxi@189.27.162.157.dynamic.adsl.gvt.net.br) has joined #ceph
[2:15] * vicente_luchi (~smuxi@189.27.162.157.dynamic.adsl.gvt.net.br) has left #ceph
[2:15] * kefu (~kefu@114.92.113.105) has joined #ceph
[2:17] <mikedawson> joshd: What do you think about creating the rbd_header.missing via something like 'rados -p volumes create rbd_header.missing'? Or the idea of copying the good rbd_header to the missing rbd_header's location in the PG's filesystem?
[2:17] <joshd> mikedawson: it's all in omap, so doing it through the PG's filesystem wouldn't be easy (it's in leveldb there)
[2:18] * danieagle (~Daniel@201-95-103-54.dsl.telesp.net.br) has joined #ceph
[2:18] <joshd> mikedawson: why do you want to restore the old header at this point? are you trying to get data out of it?
[2:19] <mikedawson> joshd: Ideally I'd like to have the volume restored to working order (and continue to use it).
[2:20] <mikedawson> joshd: its been running the whole time since the rbd_header went missing. My thought is that if the rbd_header was restored, the client instance continues and rados is happy.
[2:22] <nigwil> Tene: here is a video about Voltaire: http://mirror.linux.org.au/linux.conf.au/2015/OGGB3/Thursday/Vaultaire_a_data_vault_for_system_metrics_backed_onto_Ceph.webm
[2:22] <joshd> mikedawson: unfortunately the tools around omap are limited, you'd need to use librados in c++ to handle the binary data stored in the header
[2:23] <joshd> omap was only added to the C interface in giant, and it's not in the python interface yet
[2:25] <mikedawson> joshd: Last time we chatted, we were thinking something like "object=rbd_header.good; for key in `rados -p volumes listomapkeys $object`; do rados -p volumes getomapval $object $key $key; done" would extract the needed values. Edit them, then setomapheader to rbd_header.missing. Is that too simplistic to work?
[2:28] <joshd> mikedawson: it looks like that would work - it's just a bit annoying to pass the data to setomapval as the third argument
[2:29] <joshd> mikedawson: you can quote it with $'' in bash
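
A sketch of the loop being discussed, using the stand-in names from the conversation (pool "volumes", rbd_header.good / rbd_header.missing):

    # Dump every omap key of the good header into a file named after the key.
    pool=volumes
    good=rbd_header.good
    missing=rbd_header.missing
    for key in $(rados -p "$pool" listomapkeys "$good"); do
        rados -p "$pool" getomapval "$good" "$key" "$key.val"
    done
    # After editing whatever must differ (size, object_prefix, ...), write each
    # value onto the missing header. setomapval takes the value as its third
    # argument, so binary data needs careful quoting, e.g. bash $'...' escapes:
    # rados -p "$pool" setomapval "$missing" <key> $'<escaped value>'
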
[2:33] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:34] * yanzheng (~zhyan@182.139.204.47) has joined #ceph
[2:35] <mikedawson> joshd: ok. If that stands a chance, it seems the first thing I need to do is be able to rados put the missing rbd_header with a clone of the good one. Since it currently hangs, do you think 'ceph pg 4.653 mark_unfound_lost delete' is appropriate?
[2:35] <mikedawson> then create and/or rados put the object?
[2:35] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:36] <joshd> mikedawson: no, you don't need to worry about that if it's not marked lost in ceph -s. just rados rm/rados put it
[2:36] <joshd> don't even need to create it before adding omap settings actually
[2:37] * swami2 (~swami@223.227.68.39) has joined #ceph
[2:37] <mikedawson> joshd: well... 'rados -p volumes rm rbd_header.9e43272eb141f2' hangs too. That's the real name of the missing rbd_header.
[2:39] <joshd> mikedawson: does it show up as unfound?
[2:39] <mikedawson> joshd: yes... pg 4.653 is active+recovering, acting [5,70,63], 1 unfound
[2:41] <joshd> ok, doing the mark_unfound_lost_revert may restore a workable version then
[2:41] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) has joined #ceph
[2:41] <joshd> if not, it'll let you do the omap stuff above
[2:42] * swami1 (~swami@117.192.233.108) Quit (Ping timeout: 480 seconds)
[2:42] <mikedawson> joshd: Revert doesn't seem like it will work in this case... 'ceph pg 4.653 list_missing' returns (among other things)... "need": "82479'2892808", "have": "0'0", "locations": []}],
[2:43] <joshd> mark_unfound_lost_delete then
[2:44] * LeaChim (~LeaChim@host86-159-236-51.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:47] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:50] * kefu (~kefu@114.92.113.105) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[2:53] * Kupo2 (~tyler.wil@23.111.254.159) has joined #ceph
[2:56] * lcurtis (~lcurtis@ool-18bfec0b.dyn.optonline.net) has joined #ceph
[2:56] * JCL (~JCL@73.189.243.134) Quit (Quit: Leaving.)
[2:57] * JCL (~JCL@73.189.243.134) has joined #ceph
[2:57] <mikedawson> joshd: the primary osd is crashing (with backtrace) after the 'ceph pg 4.653 mark_unfound_lost revert'. I'll look it up in the tracker or file a new bug
[2:57] * cholcombe973 (~chris@7208-76ef-ff1f-ed2f-329a-f002-3420-2062.6rd.ip6.sonic.net) has left #ceph
[2:57] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[2:58] * sudocat (~davidi@192.185.1.20) Quit (Ping timeout: 480 seconds)
[3:00] * kefu (~kefu@114.92.113.105) has joined #ceph
[3:01] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) has joined #ceph
[3:02] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[3:03] * calvinx (~calvin@103.7.202.198) has joined #ceph
[3:04] * puffy (~puffy@216.207.42.129) Quit (Ping timeout: 480 seconds)
[3:04] * jaank (~quassel@98.215.50.223) Quit (Ping timeout: 480 seconds)
[3:06] * lcurtis (~lcurtis@ool-18bfec0b.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[3:08] <mikedawson> joshd: looks similar to http://tracker.ceph.com/issues/8008
[3:09] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[3:09] <mikedawson> but I'm on 0.67.9
[3:09] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[3:10] * swami1 (~swami@117.221.97.61) has joined #ceph
[3:11] <joshd> mikedawson: what's the backtrace from yours? (that one was from EC refactoring post-dumpling, so it'd be different)
[3:13] * swami2 (~swami@223.227.68.39) Quit (Read error: Connection reset by peer)
[3:15] <mikedawson> joshd: http://pastebin.com/raw.php?i=WqLtp7A0
[3:15] <mikedawson> if you want to see farther back, I can upload it
[3:19] <joshd> mikedawson: I think that's enough info for a bug. but it goes through a different path that may work if you do 'mark_unfound_lost delete' instead of revert
[3:20] <mikedawson> joshd: the 'ceph' cli doesn't accept delete... validation only seems to accept revert
[3:21] <mikedawson> joshd: "# ceph pg 4.653 mark_unfound_lost delete" -> "Invalid command: delete not in revert" "Error EINVAL: invalid command"
[3:21] * eternaleye (~eternaley@50.245.141.73) Quit (Quit: Quit)
[3:22] <mikedawson> slightly different output with a 0.80.8 client, but it still fails with "Invalid command: delete not in revert"
[3:23] <joshd> ah, yeah, that apparently wasn't in dumpling osds
[3:26] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:27] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[3:27] * calvinx (~calvin@103.7.202.198) Quit (Quit: calvinx)
[3:28] <joshd> mikedawson: I'm not sure there's a workaround for that. I'd suggest filing a new bug about it - looks like it's the case when no old copies exist that's problematic
[3:28] <mikedawson> joshd: will do
[3:36] * calvinx (~calvin@103.7.202.198) has joined #ceph
[3:37] * KevinPerks (~Adium@2606:a000:80a1:1b00:617e:c2b8:e848:5f6e) Quit (Quit: Leaving.)
[3:37] <winston-d> joshd: quick question about 'rados load-gen', does it make use of 'sparse' object, i.e. if one specifies 20MB for object size, does it write 20MB actual data or do the trick by writing only small amount of data to 20MB-data_size offset to make ceph think the object is 20MB?
[3:38] * eternaleye_ (~eternaley@50.245.141.73) has joined #ceph
[3:40] <joshd> winston-d: looks like it precreates objects according to the min/max length by writing one byte to the end of them, so they would be sparse
[3:41] * OutOfNoWhere (~rpb@76.8.45.168) has joined #ceph
[3:41] * eternaleye (~eternaley@50.245.141.73) Quit (Ping timeout: 480 seconds)
[3:41] * eternaleye_ is now known as eternaleye
[3:41] <winston-d> joshd: ok, good to know.
[3:43] <winston-d> joshd: one of our guys managed to use 'rados load-gen' to bring down the entire cluster.
[3:44] <joshd> oops
[3:44] <winston-d> joshd: he used rados load-gen against one pool with a limited # of PGs and then increased the PG# 100 times.
[3:44] * swami1 (~swami@117.221.97.61) Quit (Quit: Leaving.)
[3:45] <winston-d> joshd: turns out a sparse object isn't really considered sparse when ceph is rebalancing or moving data around for whatever reason.
[3:45] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:45] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[3:45] <joshd> like 8 -> 800 pgs? the monitors will stop you from doing large jumps now, since it is a lot of movement
[3:46] <winston-d> joshd: sorry, it was 10 times, 99->1024
[3:48] <joshd> there was some effort to make sure things stayed sparse during recovery via fiemap, but that's off by default since it's buggy on some kernels
[3:49] <winston-d> joshd: is that available on firefly?
[3:49] * eternaleye_ (~eternaley@50.245.141.73) has joined #ceph
[3:50] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[3:50] <joshd> no, it'll be in hammer
[3:51] <winston-d> joshd: oh, that's a bummer, we have to stay with stable/LTS version
[3:51] * jaank (~quassel@98.215.50.223) has joined #ceph
[3:52] * calvinx (~calvin@103.7.202.198) Quit (Quit: calvinx)
[3:53] * sjm (~sjm@172.56.36.120) Quit (Read error: Connection reset by peer)
[3:53] <joshd> well hammer will be the next LTS one. fiemap still needs careful testing to make sure it doesn't misbehave on your system in any case. in the future we'll switch to SEEK_HOLE/SEEK_DATA, which is the more reliable interface
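
For reference, the filestore-level knob behind this appears to be the fiemap option, off by default as joshd notes; shown only as an illustration, and per the discussion the recovery-path support for keeping objects sparse only arrives in hammer:

    # ceph.conf (illustrative; off by default because fiemap is buggy on some kernels):
    # [osd]
    #     filestore fiemap = true
    # Check what a running OSD currently has (osd id is a placeholder):
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get filestore_fiemap
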
[3:53] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[3:54] * calvinx (~calvin@103.7.202.198) has joined #ceph
[3:54] * calvinx (~calvin@103.7.202.198) Quit ()
[3:54] * eternaleye (~eternaley@50.245.141.73) Quit (Ping timeout: 480 seconds)
[3:54] * eternaleye_ is now known as eternaleye
[3:55] <winston-d> Until hammer LTS is out, we have to stick to firefly for now. But still good to know.
[3:55] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Quit: Away)
[3:56] * zhaochao (~zhaochao@111.161.77.232) has joined #ceph
[3:58] * yanzheng1 (~zhyan@182.139.205.12) has joined #ceph
[3:58] <via> was giant not intended to be a lts release?
[3:59] * calvinx (~calvin@103.7.202.198) has joined #ceph
[3:59] <winston-d> via I guess it's like odd version of Ubuntu
[3:59] <via> i upgraded when it came out, and kinda wish i hadn't
[4:01] * yanzheng (~zhyan@182.139.204.47) Quit (Ping timeout: 480 seconds)
[4:01] * KevinPerks (~Adium@2606:a000:80a1:1b00:617e:c2b8:e848:5f6e) has joined #ceph
[4:02] <winston-d> via: well, you get the benefit of new features, improved performance together with some new bugs. ;)
[4:03] <via> well, i upgraded because i thought it was an lts release
[4:03] <via> ...looking at the original release email, it was
[4:04] <via> and there's been talk on the mailing list about backporting fixes for a while
[4:04] <joshd> it's still a stable release, just not a long term one (every other stable one is long term)
[4:04] <via> i see
[4:04] <via> i didn't realize there was a distinction
[4:04] <winston-d> via: last LTS was Dumpling, so it's been like that
[4:05] <winston-d> if you don't need to stick to a version, keeping up with latest stable release isn't a bad choice IMO
[4:06] <via> what is the latest stable?
[4:06] <via> still .87?
[4:08] <joshd> yeah, until next month
[4:11] * calvinx (~calvin@103.7.202.198) Quit (Quit: calvinx)
[4:14] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[4:17] * shang_ (~ShangWu@175.41.48.77) has joined #ceph
[4:17] * sudocat (~davidi@2601:e:2b80:9920:249e:90c4:eef7:6c4b) has joined #ceph
[4:21] * danieagle (~Daniel@201-95-103-54.dsl.telesp.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[4:30] * dmsimard_away is now known as dmsimard
[4:30] * overclk (~overclk@121.244.87.117) has joined #ceph
[4:32] * dmsimard is now known as dmsimard_away
[4:35] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[4:43] * vbellur (~vijay@122.167.168.113) has joined #ceph
[4:47] * jaank (~quassel@98.215.50.223) Quit (Ping timeout: 480 seconds)
[4:54] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[4:55] * elder_ (~elder@113.28.134.59) has joined #ceph
[5:06] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) has joined #ceph
[5:08] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[5:11] * macjack (~Thunderbi@123.51.160.200) Quit (Read error: Connection reset by peer)
[5:12] * macjack (~Thunderbi@123.51.160.200) has joined #ceph
[5:12] * yanzheng1 (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[5:13] * Vacuum_ (~vovo@88.130.197.210) has joined #ceph
[5:16] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[5:18] * overclk (~overclk@121.244.87.124) has joined #ceph
[5:18] * calvinx (~calvin@49.128.61.253) has joined #ceph
[5:20] * Vacuum (~vovo@i59F7A3E9.versanet.de) Quit (Ping timeout: 480 seconds)
[5:22] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[5:32] * kefu (~kefu@114.92.113.105) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[5:33] * amote (~amote@121.244.87.116) has joined #ceph
[5:36] * elder_ (~elder@113.28.134.59) Quit (Ping timeout: 480 seconds)
[5:39] * calvinx (~calvin@49.128.61.253) Quit (Quit: calvinx)
[5:42] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[5:44] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[5:46] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[5:47] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[5:47] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[5:49] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) Quit (Quit: Away)
[5:50] * elder_ (~elder@210.177.145.245) has joined #ceph
[5:56] * elder_ (~elder@210.177.145.245) Quit (Quit: Leaving)
[6:04] * OutOfNoWhere (~rpb@76.8.45.168) Quit (Ping timeout: 480 seconds)
[6:05] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[6:14] * swami1 (~swami@49.32.0.223) has joined #ceph
[6:24] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[6:25] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[6:25] * yanzheng1 (~zhyan@182.139.205.12) has joined #ceph
[6:26] * yanzheng (~zhyan@182.139.205.12) Quit (Ping timeout: 480 seconds)
[6:29] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:33] * ohnomrbill (~ohnomrbil@c-67-174-241-112.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[6:43] * rdas_ (~rdas@121.244.87.116) has joined #ceph
[6:49] * rdas (~rdas@121.244.87.116) Quit (Ping timeout: 480 seconds)
[6:52] * vbellur (~vijay@122.167.168.113) Quit (Ping timeout: 480 seconds)
[6:58] * PingKuo (~ping@123.51.160.200) has joined #ceph
[6:59] * calvinx (~calvin@103.7.202.198) has joined #ceph
[7:00] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:01] * sudocat (~davidi@2601:e:2b80:9920:249e:90c4:eef7:6c4b) Quit (Ping timeout: 480 seconds)
[7:04] * Edmond21 (~Edmond21@79.141.163.14) has joined #ceph
[7:04] <Edmond21>
[7:07] * Edmond21 (~Edmond21@79.141.163.14) Quit (Read error: Connection reset by peer)
[7:11] * linjan (~linjan@80.179.241.27) has joined #ceph
[7:16] * elder_ (~elder@113.28.134.59) has joined #ceph
[7:17] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:22] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:24] * calvinx (~calvin@103.7.202.198) Quit (Quit: calvinx)
[7:37] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[7:37] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[7:45] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[7:49] * rwheeler (~rwheeler@nat-pool-tlv-u.redhat.com) has joined #ceph
[7:49] * macjack (~Thunderbi@123.51.160.200) Quit (Read error: Connection reset by peer)
[7:50] * macjack (~Thunderbi@123.51.160.200) has joined #ceph
[7:55] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[8:01] * zack_dolby (~textual@nfmv001082069.uqw.ppp.infoweb.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[8:16] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[8:21] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[8:24] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[8:26] * Sysadmin88 (~IceChat77@94.12.240.104) Quit (Quit: On the other hand, you have different fingers.)
[8:31] * KevinPerks (~Adium@2606:a000:80a1:1b00:617e:c2b8:e848:5f6e) Quit (Quit: Leaving.)
[8:32] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:42] * kanagaraj (~kanagaraj@94.205.252.243) has joined #ceph
[8:42] * kanagaraj (~kanagaraj@94.205.252.243) Quit (Remote host closed the connection)
[8:43] * thb (~me@0001bd58.user.oftc.net) has joined #ceph
[8:46] * PingKuo (~ping@123.51.160.200) Quit (Ping timeout: 480 seconds)
[8:47] * overclk (~overclk@121.244.87.124) Quit (Remote host closed the connection)
[8:51] * elder_ (~elder@113.28.134.59) Quit (Ping timeout: 480 seconds)
[8:53] * kefu (~kefu@114.92.113.105) has joined #ceph
[8:55] * lalatenduM (~lalatendu@2001:718:801:22e:8e70:5aff:fe49:d30c) has joined #ceph
[8:57] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:59] * elder_ (~elder@210.177.145.249) has joined #ceph
[9:00] * dgurtner (~dgurtner@178.197.231.128) has joined #ceph
[9:03] * sleinen1 (~Adium@2001:620:0:82::102) Quit (Ping timeout: 480 seconds)
[9:03] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[9:03] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[9:11] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[9:15] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[9:17] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[9:19] * badone (~brad@66.187.239.11) Quit (Ping timeout: 480 seconds)
[9:21] * kefu (~kefu@114.92.113.105) Quit (Max SendQ exceeded)
[9:22] * cok (~chk@2a02:2350:18:1010:b0ef:1ff5:1430:219a) has joined #ceph
[9:25] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:30] * BManojlovic (~steki@178-222-84-14.dynamic.isp.telekom.rs) has joined #ceph
[9:31] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:31] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[9:35] * elder_ (~elder@210.177.145.249) Quit (Ping timeout: 480 seconds)
[9:35] * ksingh (~Adium@2001:708:10:10:25cb:9998:e957:b4e1) has joined #ceph
[9:36] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[9:37] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) has joined #ceph
[9:38] * squ (~Thunderbi@46.109.186.160) has joined #ceph
[9:38] * brutuscat (~brutuscat@198.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[9:41] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:44] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[9:45] * elder_ (~elder@210.177.145.249) has joined #ceph
[9:54] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:57] * karis (~karis@78-106-206.adsl.cyta.gr) has joined #ceph
[10:03] * bandrus (~brian@54.sub-70-211-78.myvzw.com) Quit (Quit: Leaving.)
[10:04] * analbeard (~shw@support.memset.com) has joined #ceph
[10:07] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[10:07] <Be-El> hi
[10:07] * squ (~Thunderbi@46.109.186.160) Quit (Quit: squ)
[10:07] * zack_dolby (~textual@S225200086111.seint-userreverse.kddi.ne.jp) has joined #ceph
[10:08] * yguang11 (~yguang11@2406:2000:ef96:e:fd0d:5b2f:9ab0:df01) has joined #ceph
[10:13] * brutuscat (~brutuscat@198.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[10:14] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:14] <SamYaple> hello Be-El
[10:14] * brutuscat (~brutuscat@198.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[10:17] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[10:19] * overclk (~overclk@121.244.87.117) has joined #ceph
[10:28] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[10:29] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Remote host closed the connection)
[10:30] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[10:31] * branto (~branto@178-253-148-48.3pp.slovanet.sk) has joined #ceph
[10:38] * sig_wall (~adjkru@xn--hwgz2tba.lamo.su) Quit (Quit: Changing server)
[10:39] * cok (~chk@2a02:2350:18:1010:b0ef:1ff5:1430:219a) Quit (Quit: Leaving.)
[10:42] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[10:45] * yguang11 (~yguang11@2406:2000:ef96:e:fd0d:5b2f:9ab0:df01) Quit (Ping timeout: 480 seconds)
[10:45] * brutuscat (~brutuscat@198.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[10:47] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[10:48] * brutuscat (~brutuscat@198.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[10:56] * shang_ (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[10:56] * linjan (~linjan@80.179.241.27) Quit (Read error: Connection reset by peer)
[10:58] * linjan (~linjan@80.179.241.27) has joined #ceph
[10:59] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[10:59] * mikedawson (~chatzilla@98.227.179.172) Quit (Ping timeout: 480 seconds)
[11:00] * zack_dolby (~textual@S225200086111.seint-userreverse.kddi.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[11:04] * zack_dolby (~textual@S225200086111.seint-userreverse.kddi.ne.jp) has joined #ceph
[11:04] * Stephany21 (~Stephany2@79.141.163.20) has joined #ceph
[11:06] * Stephany21 (~Stephany2@79.141.163.20) Quit (autokilled: This host violated network policy. Contact support@oftc.net for further information and assistance. (2015-02-10 10:06:48))
[11:07] * linjan (~linjan@80.179.241.27) Quit (Ping timeout: 480 seconds)
[11:14] * brutuscat (~brutuscat@198.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[11:14] * sleinen (~Adium@130.59.94.73) has joined #ceph
[11:15] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:16] * sleinen1 (~Adium@2001:620:0:82::104) has joined #ceph
[11:17] * yanzheng1 (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[11:19] * brutuscat (~brutuscat@198.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[11:20] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:21] * bjornar (~bjornar@ns3.uniweb.no) has joined #ceph
[11:22] * sleinen (~Adium@130.59.94.73) Quit (Ping timeout: 480 seconds)
[11:24] * elder_ (~elder@210.177.145.249) Quit (Quit: Leaving)
[11:30] * linjan (~linjan@80.178.220.195.adsl.012.net.il) has joined #ceph
[11:33] * vbellur (~vijay@121.244.87.124) has joined #ceph
[11:41] * sleinen (~Adium@130.59.94.73) has joined #ceph
[11:41] * sleinen1 (~Adium@2001:620:0:82::104) Quit (Read error: Connection reset by peer)
[11:45] * brutuscat (~brutuscat@198.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[11:45] * lalatenduM (~lalatendu@2001:718:801:22e:8e70:5aff:fe49:d30c) Quit (Quit: Leaving)
[11:47] * yanzheng1 (~zhyan@182.139.205.12) has joined #ceph
[11:49] * sleinen (~Adium@130.59.94.73) Quit (Ping timeout: 480 seconds)
[11:54] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[12:01] * yanzheng1 (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[12:02] * zack_dolby (~textual@S225200086111.seint-userreverse.kddi.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[12:02] * oro (~oro@2001:620:20:16:e433:3ad3:683:82b4) has joined #ceph
[12:04] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:06] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[12:08] * zack_dolby (~textual@S225200086111.seint-userreverse.kddi.ne.jp) has joined #ceph
[12:08] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[12:10] * oro (~oro@2001:620:20:16:e433:3ad3:683:82b4) Quit (Ping timeout: 480 seconds)
[12:11] * haomaiwang (~haomaiwan@115.218.152.118) Quit (Quit: Leaving...)
[12:17] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[12:19] * ScOut3R (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) has joined #ceph
[12:23] * karis (~karis@78-106-206.adsl.cyta.gr) Quit (Remote host closed the connection)
[12:23] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:26] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[12:35] * branto (~branto@178-253-148-48.3pp.slovanet.sk) Quit (Ping timeout: 480 seconds)
[12:35] * mattronix (~quassel@fw1.sdc.mattronix.nl) Quit (Read error: Connection reset by peer)
[12:37] * mattronix (~quassel@fw1.sdc.mattronix.nl) has joined #ceph
[12:49] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[12:49] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[12:50] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:55] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:56] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[13:00] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[13:04] * Caitlin21 (~Caitlin21@79.141.163.13) has joined #ceph
[13:07] * Caitlin21 (~Caitlin21@79.141.163.13) Quit (Read error: Connection reset by peer)
[13:15] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[13:17] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) has joined #ceph
[13:19] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[13:20] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[13:20] * yerrysherry (~yerrysher@ns.milieuinfo.be) has joined #ceph
[13:20] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[13:21] * zack_dolby (~textual@S225200086111.seint-userreverse.kddi.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[13:22] * mikedawson (~chatzilla@98.227.179.172) has joined #ceph
[13:24] * zhaochao (~zhaochao@111.161.77.232) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 31.4.0/20150113100542])
[13:24] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[13:24] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[13:30] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[13:32] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[13:32] * vbellur (~vijay@122.167.168.113) has joined #ceph
[13:44] * shang_ (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[13:49] * oro (~oro@2001:620:20:16:e433:3ad3:683:82b4) has joined #ceph
[13:49] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) has joined #ceph
[13:53] * ScOut3R (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[13:55] * karis (~karis@78-106-206.adsl.cyta.gr) has joined #ceph
[13:56] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[14:04] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[14:08] * rdas_ (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:14] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[14:14] * mikedawson (~chatzilla@98.227.179.172) Quit (Ping timeout: 480 seconds)
[14:15] * KevinPerks (~Adium@2606:a000:80a1:1b00:e12e:9c26:beb6:7e08) has joined #ceph
[14:16] * jcsp (~jcsp@0001bf3a.user.oftc.net) has joined #ceph
[14:20] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[14:20] * tupper_ (~tcole@rtp-isp-nat-pool1-1.cisco.com) Quit (Read error: Connection reset by peer)
[14:24] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[14:30] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[14:33] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[14:33] * ScOut3R (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) has joined #ceph
[14:35] * tupper_ (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[14:37] * _karl (~karl@kamr.at) has joined #ceph
[14:39] * dgurtner (~dgurtner@178.197.231.128) Quit (Ping timeout: 480 seconds)
[14:41] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[14:41] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[14:41] * RayTracer (~RayTracer@89-77-236-112.dynamic.chello.pl) has joined #ceph
[14:44] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[14:46] <RayTracer> Hi all! If I have a ceph cluster on e.g. 10x250GB SSD osds, should I set a weight of 0.250 or can I leave it at 1? Can this lead to unbalanced usage of osds within the cluster?
[14:48] <Be-El> RayTracer: you can choose any weight as long as all weights are computed based on the same metric
[14:49] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[14:49] <Be-El> the default metric is 1.0 ^= 1 TB of storage
[14:52] <RayTracer> I just wonder if there is a way to acquire better balance for osd usage in our cluster, because we spotted that within selected hosts (we have 2 osds per host) there is a difference of around 10-50GB in disk space usage.
[14:53] <Be-El> RayTracer: crush rules and weights are used to distribute placement groups between osds. objects in pools are mapped to placement groups. size of objects, number of placement groups per osd etc. may vary
[14:55] * dmsimard_away is now known as dmsimard
[14:55] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has joined #ceph
[14:58] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:58] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has left #ceph
[15:02] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[15:04] * Alyssa21 (~Alyssa21@79.141.163.18) has joined #ceph
[15:06] <RayTracer> So maybe increasing the pg number from 512 to 4096 is the answer? Right now i see we have 14 osds but our pools are still set to 512 placement groups. Can this value help to better redistribute data across all osds?
[15:06] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:07] * Alyssa21 (~Alyssa21@79.141.163.18) Quit (Read error: Connection reset by peer)
[15:10] * reed (~reed@net-93-144-229-167.cust.dsl.teletu.it) has joined #ceph
[15:13] * nitti (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) has joined #ceph
[15:14] <Be-El> RayTracer: it may help, but you will never get an equal data distribution across osd. 20% difference is considered normal
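
In practice the TB-based weighting Be-El describes looks roughly like this (osd id and threshold below are placeholders):

    ceph osd tree                          # current crush weights per osd/host
    ceph osd crush reweight osd.3 0.25     # ~250 GB under the "1.0 = 1 TB" convention
    # If usage is still uneven, reweight-by-utilization lowers the temporary
    # (non-crush) weight of over-full OSDs:
    ceph osd reweight-by-utilization 110
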
[15:15] * nitti_ (~nitti@162.222.47.218) has joined #ceph
[15:15] * nitti (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[15:17] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Ping timeout: 480 seconds)
[15:22] * dmsimard is now known as dmsimard_away
[15:24] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[15:24] * swami1 (~swami@49.32.0.223) Quit (Quit: Leaving.)
[15:25] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[15:25] * xahare_ (~pixel@cpe-23-241-195-16.socal.res.rr.com) has joined #ceph
[15:25] * Nats_ (~natscogs@114.31.195.238) has joined #ceph
[15:25] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[15:26] * darkfaded (~floh@88.79.251.60) has joined #ceph
[15:27] * jeffhung_ (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[15:28] * sc-rm_ (~rene@mail-outbound.microting.com) has joined #ceph
[15:28] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[15:28] * rhamon__ (~rhamon@208.71.184.41) has joined #ceph
[15:28] * mattronix_ (~quassel@fw1.sdc.mattronix.nl) has joined #ceph
[15:28] * sage (~quassel@2605:e000:854d:de00:230:48ff:fed3:6786) has joined #ceph
[15:28] * ChanServ sets mode +o sage
[15:29] * rmoe_ (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) has joined #ceph
[15:29] * cmdrk (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) has joined #ceph
[15:29] * saltsa (~joonas@dsl-hkibrasgw1-58c01a-36.dhcp.inet.fi) has joined #ceph
[15:29] * trond_ (~trond@evil-server.alseth.info) has joined #ceph
[15:29] * mdxi_ (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[15:29] * Georgyo_ (~georgyo@shamm.as) has joined #ceph
[15:29] * purpleid1a (~james@216.252.94.181) has joined #ceph
[15:29] * ismell_ (~ismell@host-24-52-35-110.beyondbb.com) has joined #ceph
[15:29] * kvanals_ (kvanals@kvanals.org) has joined #ceph
[15:29] * mfa298_ (~mfa298@gateway.yapd.net) has joined #ceph
[15:29] * ctd_ (~root@00011932.user.oftc.net) has joined #ceph
[15:29] * alfredodeza_ (~alfredode@198.206.133.89) has joined #ceph
[15:29] * dosaboy_ (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[15:29] * TomB_ (~tom@167.88.45.146) has joined #ceph
[15:29] * ifur_ (~osm@hornbill.csc.warwick.ac.uk) has joined #ceph
[15:29] * eqhmcow_ (~eqhmcow@cpe-075-177-128-160.nc.res.rr.com) has joined #ceph
[15:30] * codice_ (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[15:30] * Azrael_ (~azrael@terra.negativeblue.com) has joined #ceph
[15:30] * liiwi_ (liiwi@idle.fi) has joined #ceph
[15:30] * reed (~reed@net-93-144-229-167.cust.dsl.teletu.it) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * RayTracer (~RayTracer@89-77-236-112.dynamic.chello.pl) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * mattronix (~quassel@fw1.sdc.mattronix.nl) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * kawa2014 (~kawa@89.184.114.246) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * shaunm (~shaunm@74.215.76.114) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * xahare (~pixel@cpe-23-241-195-16.socal.res.rr.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * jtang (~jtang@109.255.42.21) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * Georgyo (~georgyo@shamm.as) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * nwf (~nwf@00018577.user.oftc.net) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * wkennington (~william@76.77.180.204) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * ctd (~root@00011932.user.oftc.net) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * sc-rm (~rene@mail-outbound.microting.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * trond (~trond@evil-server.alseth.info) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * cronix1 (~cronix@5.199.139.166) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * darkfader (~floh@88.79.251.60) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * jksM (~jks@3e6b5724.rev.stofanet.dk) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * epf (epf@epf.im) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * \ask (~ask@oz.develooper.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * `10 (~10@69.169.91.14) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * Nats (~natscogs@114.31.195.238) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * acaos (~zac@209.99.103.42) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * TomB (~tom@167.88.45.146) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * kraken (~kraken@gw.sepia.ceph.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * ismell (~ismell@host-24-52-35-110.beyondbb.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * Kingrat (~shiny@cpe-96-29-149-153.swo.res.rr.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * jkappert (~jkappert@5.39.189.119) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * rhamon_ (~rhamon@208.71.184.41) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * kevincox (~kevincox@4.s.kevincox.ca) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * alfredodeza (~alfredode@198.206.133.89) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * ifur (~osm@0001f63e.user.oftc.net) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * purpleidea (~james@216.252.94.181) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * cmdrk_ (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * toabctl (~toabctl@toabctl.de) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * mfa298 (~mfa298@gateway.yapd.net) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * a1-away (~jelle@62.27.85.48) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * eqhmcow (~eqhmcow@cpe-075-177-128-160.nc.res.rr.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * saltsa_ (~joonas@dsl-hkibrasgw1-58c01a-36.dhcp.inet.fi) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * off_rhoden (~off_rhode@209.132.181.86) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * liiwi (liiwi@idle.fi) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * Azrael (~azrael@terra.negativeblue.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * kvanals (kvanals@kvanals.org) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * lmb (lmb@212.8.204.10) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * skullone (~skullone@shell.skull-tech.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * carter (~carter@li98-136.members.linode.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * fam (~famz@nat-pool-bos-t.redhat.com) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * phantomcircuit (~phantomci@smartcontracts.us) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * zz_hitsumabushi (~hitsumabu@175.184.30.148) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * _br_ (~bjoern_of@213-239-215-232.clients.your-server.de) Quit (resistance.oftc.net larich.oftc.net)
[15:30] * sc-rm_ is now known as sc-rm
[15:30] * darkfaded is now known as darkfader
[15:30] * alfredodeza_ is now known as alfredodeza
[15:30] * acaos (~zac@209.99.103.42) has joined #ceph
[15:30] * jkappert (~jkappert@5.39.189.119) has joined #ceph
[15:30] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) has joined #ceph
[15:31] * tupper_ (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[15:32] * wkennington (~william@76.77.180.204) has joined #ceph
[15:33] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[15:33] * phantomcircuit (~phantomci@smartcontracts.us) has joined #ceph
[15:33] * epf (epf@epf.im) has joined #ceph
[15:33] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[15:33] * kevincox (~kevincox@4.s.kevincox.ca) has joined #ceph
[15:33] * toabctl (~toabctl@toabctl.de) has joined #ceph
[15:33] * zz_hitsumabushi (~hitsumabu@175.184.30.148) has joined #ceph
[15:33] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:33] * fam (~famz@nat-pool-bos-t.redhat.com) has joined #ceph
[15:33] * off_rhoden (~off_rhode@209.132.181.86) has joined #ceph
[15:33] * _br_ (~bjoern_of@213-239-215-232.clients.your-server.de) has joined #ceph
[15:33] * \ask (~ask@oz.develooper.com) has joined #ceph
[15:34] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[15:34] * dneary (~dneary@96.237.180.105) Quit (Ping timeout: 480 seconds)
[15:36] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[15:36] * ade (~abradshaw@193.202.255.218) has joined #ceph
[15:40] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[15:40] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Read error: Connection reset by peer)
[15:40] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[15:40] * tupper_ (~tcole@rtp-isp-nat-pool1-1.cisco.com) has joined #ceph
[15:40] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[15:40] * skullone (~skullone@shell.skull-tech.com) has joined #ceph
[15:41] * reed (~reed@net-93-144-229-167.cust.dsl.teletu.it) has joined #ceph
[15:41] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[15:41] * ChanServ sets mode +o elder
[15:41] * Kingrat (~shiny@cpe-96-29-149-153.swo.res.rr.com) has joined #ceph
[15:41] * `10 (~10@69.169.91.14) has joined #ceph
[15:41] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[15:41] * nwf (~nwf@00018577.user.oftc.net) has joined #ceph
[15:42] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[15:42] * jtang (~jtang@109.255.42.21) has joined #ceph
[15:42] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[15:42] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[15:42] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[15:42] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[15:42] * a1-away (~jelle@62.27.85.48) has joined #ceph
[15:42] * lmb (lmb@212.8.204.10) has joined #ceph
[15:49] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[15:50] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[15:50] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[15:51] * dgurtner (~dgurtner@178.197.231.128) has joined #ceph
[15:52] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[15:53] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[15:54] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:55] * brutusca_ (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[15:55] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[15:56] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[15:57] * danieagle (~Daniel@201-95-103-54.dsl.telesp.net.br) has joined #ceph
[15:57] * danieagle_ (~Daniel@201-95-103-54.dsl.telesp.net.br) has joined #ceph
[15:58] * danieagle_ (~Daniel@201-95-103-54.dsl.telesp.net.br) Quit ()
[15:58] * danieagle (~Daniel@201-95-103-54.dsl.telesp.net.br) Quit ()
[15:59] * danieagle (~Daniel@201-95-103-54.dsl.telesp.net.br) has joined #ceph
[15:59] * brutusca_ (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[15:59] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[16:00] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[16:05] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[16:06] <ZyTer> hi
[16:08] <ZyTer> i have 12 OSDs and a replica size of 2, so the calculation for PGs: 12*100 / 2 = 600
[16:08] <saltlake> Hi champs: I am trying to export - import an incremental diff from snapshots and it seems to be not doing anything ... I think it is hung... http://pastebin.com/LvjQu0Wc
[16:08] <ZyTer> i put pg = 512 or 1024 ?
[16:08] <saltlake> ZyTer: If none of the other experts answer, the answer I have received for that question before is use 500.
[16:09] <saltlake> If you see issues with it then increase to 1024.. but mostly 512 would be a good number.
[16:09] <saltlake> There are additional overheads associated with a large pg_num
[16:09] <ZyTer> saltlake: ok :)
[16:10] <ZyTer> and thank you
[16:10] <saltlake> ZyTer: np :-)
[16:10] <CephTestC> Does anyone know if I can use the same OSD Cache tier on multiple storage pools with different caching methods?
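
For reference, the standard tiering commands attach one cache pool to a single base pool, so different caching modes per base pool would usually mean a separate cache pool for each (pool names below are placeholders):

    ceph osd tier add cold-pool hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-pool hot-cache
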
[16:11] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[16:13] <saltlake> Sorry, my message maybe got lost so pasting it here; any help would be awesome!! Hi champs: I am trying to export - import an incremental diff from snapshots and it seems to be not doing anything ... I think it is hung... http://pastebin.com/LvjQu0Wc
[16:15] * dmsimard_away is now known as dmsimard
[16:16] <Gugge-47527> saltlake: i would split it up, to see what part is not working
[16:16] <Gugge-47527> export to a file, copy it to the other server, import from the file
[16:16] <Gugge-47527> see where it hangs
[16:16] <saltlake> Gugge.. ok,, thanks .. will try
[16:19] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[16:20] <saltlake> Gugge-47527: while exporting to a file do you specify a format? If yes, which is recommended?
[16:22] <Gugge-47527> im pretty sure export-diff just wants a filename, where it saves the raw stream
[16:22] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:22] <saltlake> sudo rbd export-diff --from-snap snap1 rbd/cyphredata@snap2 ./x.diff
[16:22] * alram (~alram@ppp-seco11pa2-46-193-140-198.wb.wifirst.net) has joined #ceph
[16:23] <saltlake> should work ... I will just wait some time to see if it does something
[16:24] <saltlake> Gugge-47527 : I see the x.diff file created but the size is stuck at 37024 bytes and the command does not return..
[16:26] <saltlake> Gugge-47527 : Never mind please I finally got a message saying "Exporting image: 2% complete"
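
For reference, the incremental export/import workflow being attempted looks like this (image name follows the paste above; the destination host is a placeholder, and the destination image must already exist and contain snap1):

    rbd snap create rbd/cyphredata@snap2
    rbd export-diff --from-snap snap1 rbd/cyphredata@snap2 x.diff   # only the changes since snap1
    # on the destination cluster:
    rbd import-diff x.diff rbd/cyphredata
    # or stream it without an intermediate file:
    rbd export-diff --from-snap snap1 rbd/cyphredata@snap2 - | \
        ssh backup-host rbd import-diff - rbd/cyphredata
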
[16:27] <ZyTer> humm, if it's possible to help me: my ceph cluster says HEALTH_WARN too few pgs per osd (16 < min 20)
[16:27] <ZyTer> i try to change that : ceph osd pool set data pg_num 512
[16:27] <ZyTer> and the result are :
[16:28] <ZyTer> Error E2BIG: specified pg_num 512 is too large (creating 448 new PGs on ~12 OSDs exceeds per-OSD max of 32)
[16:28] <ZyTer> any idea...?
[16:28] <Gugge-47527> yes
[16:28] <Gugge-47527> dont exceed 32 pr osd
[16:28] * RayTrace_ (~RayTracer@89-77-236-112.dynamic.chello.pl) has joined #ceph
[16:28] <ZyTer> ok..
[16:29] <Gugge-47527> just double the pg num untill you reach 512 :)
[16:29] <ZyTer> so 384 PG (12*32)
[16:29] <Gugge-47527> start by 32
[16:29] <Gugge-47527> then 64
[16:29] <Gugge-47527> then 128
[16:29] <Gugge-47527> then 256
[16:29] <Gugge-47527> and then ... 512 :)
[16:29] <ZyTer> ok ...
[16:29] <Gugge-47527> remember to change pgp_num too
[16:29] <RayTrace_> Be-El: Thanks for help. I got disconnected for a while. :]
[16:30] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[16:30] <ZyTer> Gugge-47527: ok, but "ceph status" tells me pgmap v535: 192 pgs, i already have 192 pgs?
[16:32] * lalatenduM (~lalatendu@2001:718:801:22e:8e70:5aff:fe49:d30c) has joined #ceph
[16:32] <ZyTer> Gugge-47527: and, i have 3 pools (0 data,1 metadata,2 rbd) by default, i must set 64,32,128 etc for all these
[16:33] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[16:34] <Gugge-47527> ZyTer: http://ceph.com/pgcalc/
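
A sketch of the stepwise increase Gugge-47527 describes, assuming the pool is "rbd" and using a rough check that PG creation has settled before each next step:

    for n in 64 128 256 512; do
        ceph osd pool set rbd pg_num $n
        # crude wait: loop until "creating" no longer shows in the status output
        while ceph status | grep -q creating; do sleep 10; done
        ceph osd pool set rbd pgp_num $n
    done
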
[16:36] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[16:38] * yerrysherry (~yerrysher@ns.milieuinfo.be) Quit (Quit: yerrysherry)
[16:38] * togdon (~togdon@74.121.28.6) has joined #ceph
[16:39] * purpleid1a is now known as purpleidea
[16:39] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[16:45] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[16:46] * analbeard (~shw@support.memset.com) Quit (Remote host closed the connection)
[16:46] * rwheeler (~rwheeler@nat-pool-tlv-u.redhat.com) Quit (Quit: Leaving)
[16:47] * analbeard (~shw@support.memset.com) has joined #ceph
[16:51] * shang_ (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[16:52] <ZyTer> Gugge-47527: ok ! thanks :)
[16:56] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[17:03] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[17:03] * brutusca_ (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[17:04] * BManojlovic (~steki@178-222-84-14.dynamic.isp.telekom.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:04] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:05] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[17:05] * puffy (~puffy@50.185.218.255) has joined #ceph
[17:06] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[17:06] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[17:09] * Andrea21 (~Andrea21@95.141.31.6) has joined #ceph
[17:12] * Andrea21 (~Andrea21@95.141.31.6) Quit (Read error: Connection reset by peer)
[17:12] * mattch1 (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[17:15] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[17:15] <saltlake> Gugge-47527: I think I find that export-diff hangs once the diff file becomes 2G. The message never proceeds beyond "Exporting image: 2%..." Waited for 25 min.
[17:21] * swami1 (~swami@223.227.247.196) has joined #ceph
[17:21] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[17:30] * joshd1 (~jdurgin@24-205-54-236.dhcp.gldl.ca.charter.com) has joined #ceph
[17:31] * brutusca_ (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[17:34] * RayTrace_ (~RayTracer@89-77-236-112.dynamic.chello.pl) Quit (Remote host closed the connection)
[17:35] * ScOut3R (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:35] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[17:35] * georgem (~Adium@fwnat.oicr.on.ca) has left #ceph
[17:36] * xahare (~pixel@cpe-23-241-195-16.socal.res.rr.com) has joined #ceph
[17:37] * puffy (~puffy@50.185.218.255) has joined #ceph
[17:37] * puffy (~puffy@50.185.218.255) Quit ()
[17:40] * puffy (~puffy@50.185.218.255) has joined #ceph
[17:40] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:41] * puffy (~puffy@50.185.218.255) Quit ()
[17:41] * puffy (~puffy@50.185.218.255) has joined #ceph
[17:41] * puffy (~puffy@50.185.218.255) Quit ()
[17:42] * xahare_ (~pixel@cpe-23-241-195-16.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:43] * puffy (~puffy@50.185.218.255) has joined #ceph
[17:43] * puffy (~puffy@50.185.218.255) Quit ()
[17:44] * puffy (~puffy@50.185.218.255) has joined #ceph
[17:44] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:45] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[17:45] * lalatenduM (~lalatendu@2001:718:801:22e:8e70:5aff:fe49:d30c) Quit (Ping timeout: 480 seconds)
[17:45] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[17:46] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[17:49] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:49] * sudocat (~davidi@192.185.1.20) has joined #ceph
[17:51] * pmxceph (~pmxceph@208.98.194.163) has joined #ceph
[17:52] <pmxceph> Hello all, could somebody help me figure out a ceph performance issue after adding SSDs for journals? I have been running a 5-node, 20-OSD cluster with journals colocated on the same spinners. Last week I added SSDs on all nodes for the journals, but the performance is nowhere near what it should be. Just wanting some pointers on how to diagnose the bottleneck
[17:53] * fmanana (~fdmanana@bl13-151-100.dsl.telepac.pt) Quit (Quit: Leaving)
[17:54] * eternaleye (~eternaley@50.245.141.73) Quit (Ping timeout: 480 seconds)
[17:55] <CephTestC> Hi pmxceph. A good ratio of spinners to SSD journals is 4 spinner OSDs per SSD journal
[17:56] <pmxceph> All 5 SSDs were tested based on http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
[17:56] <pmxceph> Cephtest: i have 4 OSDs per SSD journal right now
[17:56] * rmoe_ (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[17:57] <CephTestC> How are your SSDs set up? Are they striped?
[17:57] <CephTestC> Or just JBOD
[17:58] <pmxceph> No, 1 SSD journal and 4 spinner OSDs per node
[17:58] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[17:58] <CephTestC> Sounds good to me!
[17:58] <pmxceph> the SSD has 4 partitions for 4 OSD journals
[17:59] * jlavoy (~Adium@173.227.74.5) has joined #ceph
[17:59] <pmxceph> benchmark for the OSDs shows around 200 MB/s through #ceph tell osd.1 bench
[17:59] <pmxceph> without the SSD journal the same command shows around 100 MB/s, so the journal on the SSD is obviously in effect
[17:59] <CephTestC> We have a working setup with the same type of config let me check how the Journals are setup on SSD drives
[17:59] * oro (~oro@2001:620:20:16:e433:3ad3:683:82b4) Quit (Ping timeout: 480 seconds)
[18:00] <pmxceph> rados benchmark shows 178 MB/s through this command #rados -p test bench 60 write. I think it should be much higher than that
[18:01] * danieagle (~Daniel@201-95-103-54.dsl.telesp.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[18:01] <Be-El> pmxceph: does that command use multiple threads by default?
[18:01] <pmxceph> 16
[18:03] * cooldharma06 (~chatzilla@218.248.25.100) has joined #ceph
[18:05] <pmxceph> Be-El: I am trying to stick with the same command across the board so that I can see the difference. Without the SSD journal the same command shows me 97 MB/s with the default 16 threads. I mean the SSD still makes it faster, but I don't think it is working at full potential
[18:05] <pmxceph> Both op and disk threads are set at 4
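To rule out the SSD itself, the blog post linked above measures synchronous journal-style writes with dd; a sketch of that test, which is destructive and must only be pointed at an unused partition (/dev/sdX2 here is purely a placeholder):
    # O_DSYNC 4k write test against an UNUSED partition - this overwrites data
    dd if=/dev/zero of=/dev/sdX2 bs=4k count=10000 oflag=direct,dsync
    # per-OSD benchmark used above, for comparison
    ceph tell osd.1 bench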
[18:06] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[18:06] <Be-El> pmxceph: which filesystem do you use on the osds?
[18:11] * alram (~alram@ppp-seco11pa2-46-193-140-198.wb.wifirst.net) Quit (Read error: Connection reset by peer)
[18:13] * brutusca_ (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[18:13] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[18:13] * togdon (~togdon@74.121.28.6) has joined #ceph
[18:14] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:15] * Concubidated (~Adium@2607:f298:b:635:2c52:caab:e95c:9201) has joined #ceph
[18:16] * ade (~abradshaw@193.202.255.218) Quit (Quit: Too sexy for his shirt)
[18:19] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[18:28] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[18:28] * brutusca_ (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[18:30] * dgurtner (~dgurtner@178.197.231.128) Quit (Ping timeout: 480 seconds)
[18:31] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) has joined #ceph
[18:32] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) Quit ()
[18:32] * vakulkar (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) has joined #ceph
[18:32] * cooldharma06 (~chatzilla@218.248.25.100) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 21.0/20130515140136])
[18:33] <pmxceph> Be-El: I am using ext4 for all OSDs
[18:33] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[18:33] * sputnik13 (~sputnik13@74.202.214.170) has joined #ceph
[18:34] * mykola (~Mikolaj@91.225.200.48) has joined #ceph
[18:37] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:38] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[18:44] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[18:45] <CephTestC> pmxceph: We're not using the same setup as you, unfortunately... For the OSDs we had better performance with XFS. Also I believe there is a flag you should set specifically for ext4.
[18:45] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[18:46] <CephTestC> filestore xattr use omap = true
[18:47] <CephTestC> pertaining to the above attribute "You should always add the following line to the [osd] section of your ceph.conf file for ext4 filesystems; you can optionally use it for btrfs and XFS.:"
[18:49] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[18:53] <pmxceph> Cephtest: I have the filestore xattr set as true.
[18:54] <pmxceph> But I have it in the global section. Does it make any difference if it is not in the [osd] section?
[18:54] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[18:56] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[18:58] <JCL> pmxceph: Fine in the global section as it is usually where we put it.
[18:59] <JCL> CephTestC: This parameter has been deprecated and always defaults to true from firefly upward
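For reference, a minimal ceph.conf sketch of the setting quoted above; per JCL it is redundant on firefly and later, and the global section works just as well as [osd]:
    [osd]
    filestore xattr use omap = true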
[19:01] * cholcombe973 (~chris@7208-76ef-ff1f-ed2f-329a-f002-3420-2062.6rd.ip6.sonic.net) has joined #ceph
[19:04] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:05] * Kako21 (~Kako21@95.141.29.55) has joined #ceph
[19:07] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[19:07] * Kako21 (~Kako21@95.141.29.55) Quit (Read error: Connection reset by peer)
[19:09] * ircolle (~Adium@2601:1:a580:145a:139:9095:56de:8570) has joined #ceph
[19:10] <pmxceph> any other possible way to increase SSD journal performance? I ran hdparm -Tt on the SSD device and it shows 265 mb/s.
[19:12] <pmxceph> #rados -p test bench -b 4096 60 -t 1 write gives me 0.126 mb/s. Any thought on what the speed should be on SSD journal?
[19:13] * swami1 (~swami@223.227.247.196) Quit (Quit: Leaving.)
[19:16] * diegows (~diegows@190.190.5.238) has joined #ceph
[19:18] <Gugge-47527> pmxceph: what is mb? (im guessing it's not millibit)
[19:18] * diegows (~diegows@190.190.5.238) Quit ()
[19:18] <pmxceph> megabyte
[19:19] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:20] <Gugge-47527> so around 128 KB/s .. with 4k writes .... 32 iops in a single thread
[19:20] <Gugge-47527> around 31ms per io
[19:20] <Gugge-47527> doesnt sound that far off
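Spelling out that arithmetic: 0.126 MB/s divided by 4 KiB per write is roughly 32 writes per second, and with a single outstanding request that is about 1000 ms / 32, i.e. roughly 31 ms per synchronous 4K write.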
[19:21] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:21] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit ()
[19:22] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:22] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit ()
[19:22] * ksingh (~Adium@2001:708:10:10:25cb:9998:e957:b4e1) has left #ceph
[19:24] * gregmark (~Adium@68.87.42.115) has joined #ceph
[19:26] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit (Quit: Leaving.)
[19:27] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:28] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[19:28] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit ()
[19:29] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[19:34] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[19:35] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[19:39] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[19:43] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[19:44] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:45] * vasu (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) has joined #ceph
[19:46] * vasu (~vakulkar@c-50-185-132-102.hsd1.ca.comcast.net) Quit ()
[19:47] * LeaChim (~LeaChim@host86-159-236-51.range86-159.btcentralplus.com) has joined #ceph
[19:48] * Nacer (~Nacer@2001:41d0:fe82:7200:a9a7:6568:e368:b87) has joined #ceph
[19:50] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[19:56] * bkopilov (~bkopilov@bzq-79-179-13-13.red.bezeqint.net) has joined #ceph
[19:59] * Nacer (~Nacer@2001:41d0:fe82:7200:a9a7:6568:e368:b87) Quit (Remote host closed the connection)
[19:59] * dmsimard (~dmsimard@198.72.123.202) Quit (Ping timeout: 480 seconds)
[20:03] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[20:04] * karis (~karis@78-106-206.adsl.cyta.gr) Quit (Remote host closed the connection)
[20:06] * tobiash_ (~quassel@mail.bmw-carit.de) has joined #ceph
[20:06] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[20:06] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[20:07] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[20:10] <cholcombe973> ceph: why does the cluster have to come up in order? why does it matter if i add osd 12 and then osd 0?
[20:10] <cholcombe973> the cluster hands out id's in order but is that a requirement or just a convention?
[20:10] * tobiash (~quassel@mail.bmw-carit.de) Quit (Ping timeout: 480 seconds)
[20:13] * dgbaley27 (~matt@ucb-np1-206.colorado.edu) has joined #ceph
[20:18] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[20:24] * scuttlemonkey is now known as scuttle|afk
[20:25] * houkouonchi-work (~linux@2607:f298:b:635:225:90ff:fe39:38ce) Quit (Quit: Client exiting)
[20:36] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[20:52] * scuttle|afk is now known as scuttlemonkey
[20:53] * dgbaley27 (~matt@ucb-np1-206.colorado.edu) Quit (Quit: Leaving.)
[21:01] <pmxceph> cholcombe973: as far as I know the ceph cluster will always try to maintain an incremental sequence of OSD IDs.
[21:01] <pmxceph> It will assign the first available ID when creating a new OSD
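A hedged illustration of that behaviour: "ceph osd create" prints the ID it allocates, which is the lowest free one, so sequential ordering is a convention rather than a requirement and removed IDs get reused:
    ceph osd create    # prints e.g. 0 on an empty cluster
    ceph osd create    # prints 1, and so on; after "ceph osd rm 0", the next create hands out 0 again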
[21:03] * houkouonchi-work (~linux@2607:f298:b:635:225:90ff:fe39:38ce) has joined #ceph
[21:08] * Kupo2 (~tyler.wil@23.111.254.159) has left #ceph
[21:08] * houkouonchi-work (~linux@2607:f298:b:635:225:90ff:fe39:38ce) Quit (Read error: Connection reset by peer)
[21:08] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:08] * houkouonchi-work (~linux@2607:f298:b:635:225:90ff:fe39:38ce) has joined #ceph
[21:16] * ScOut3R (~ScOut3R@4E5CC061.dsl.pool.telekom.hu) has joined #ceph
[21:30] * cok (~chk@nat-cph5-sys.net.one.com) has joined #ceph
[21:37] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[21:40] * ScOut3R (~ScOut3R@4E5CC061.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[21:41] * togdon (~togdon@74.121.28.6) has joined #ceph
[21:52] * cok (~chk@nat-cph5-sys.net.one.com) Quit (Quit: Leaving.)
[22:00] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:02] * badone (~brad@66.187.239.16) has joined #ceph
[22:10] * joef1 (~Adium@2601:9:280:f2e:c9bb:1292:8dcf:daba) has joined #ceph
[22:10] * joef1 (~Adium@2601:9:280:f2e:c9bb:1292:8dcf:daba) has left #ceph
[22:11] * saltlake2 (~saltlake@12.250.199.170) has joined #ceph
[22:16] * saltlake (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[22:27] * ShaunR2 (~ShaunR@staff.ndchost.com) Quit ()
[22:30] * bhong (~root@67.215.92.184) has joined #ceph
[22:31] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[22:32] <saltlake2> Has anyone seen issues with rbd export-diff? My observation is that it hangs after a 2G diff creation. This is 100% reproducible.
[22:35] <joshd> saltlake2: which version are you running? sounds like a new bug
[22:36] <saltlake2> joshd: thanks again !! it is still 0.80.7
[22:36] <saltlake2> joshd: I wanted to share that I am running this on a 32-bit system .. I see some applications have 2GB limits on 32-bit systems..
[22:37] <joshd> ah, that might be the problem
[22:38] * dmsimard (~dmsimard@198.72.123.202) has joined #ceph
[22:39] <saltlake2> joshd: Seriously!! I can get the repo.. is it possible to point me to the file that might indicate this?
[22:40] <saltlake2> joshd: Is there a way around it..? I was planning to back up my ceph rbd from one geographic location to another using the rbd export-diff from snapshots mechanism.. which will not work out..!!
[22:40] <joshd> saltlake2: src/rbd.cc do_export_diff(), or maybe src/librbd/internal.cc diff_iterate()
[22:40] <saltlake2> joshd: thanks a bunch.. do you know what might be another good way to back up from one rbd to another offsite?
[22:40] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[22:42] * ircolle is now known as ircolle-afk
[22:44] <joshd> saltlake2: well, if the bug isn't in diff_iterate() you could write your own diff export/import type of thing
[22:44] <joshd> probably best to find the bug (one culprit would be ftruncate() for exporting to a file, that wouldn't cause a hang though...)
[22:44] * palmeida (~palmeida@gandalf.wire-consulting.com) has joined #ceph
[22:45] * Sysadmin88 (~IceChat77@94.12.240.104) has joined #ceph
[22:45] <joshd> you could do full exports of course, or use a 64-bit machine (vm?) I suppose
[22:47] <saltlake2> joshd: Hmm, the plain rbd export-diff works; it's the export-diff --from-snap option that hangs, where the command fails to complete and the size of the created diff file is stuck at 2GB. There is plenty of space.
[22:47] <bhong> anyone know what crypto/ciphers 'ceph-authtool --gen-print-key' uses? I'd like to create valid keys for automation that doesn't necessarily have the ceph package installed
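No authoritative answer appears in this log, but the commonly assumed layout of a cephx secret is a little-endian header (2-byte type = 1 for AES, 8-byte creation time, 2-byte length = 16) followed by 16 random bytes, all base64-encoded; a sketch under that assumption, worth verifying against real ceph-authtool output before automating anything:
    # assumed format: type(2) + created(8, left as zero here) + len(2) + 16 random bytes, base64-encoded
    ( printf '\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00'; head -c 16 /dev/urandom ) | base64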
[22:48] <saltlake2> joshd: Yes, I am stuck with a 32-bit ppc machine/hw!!
[22:49] <joshd> saltlake2: does rbd diff --from-snap work?
[22:49] <joshd> it should just list the changed extents
[22:50] <saltlake2> joshd: I have not tried rbd diff, I have only tried rbd export-diff.. I will give that a shot right away
[22:50] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[22:51] * bhong (~root@67.215.92.184) has left #ceph
[22:51] <saltlake2> joshd:sudo rbd diff --from-snap snap1 rbd/<poolname>@snap2
[22:51] <saltlake2> It seems to just hang actually but will give it a few more min
[22:52] <joshd> it'd be interesting if that hangs, but it works without --from-snap, or if it works in both cases - then the bug would be in the export-diff part, not diff_iterate(), which is used by both internally
[22:53] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:53] <saltlake2> joshd: I think it hangs..
[22:54] <joshd> can you try the same thing with --debug-rbd 20 --debug-ms 1 and pastebin the output?
[22:54] <cholcombe973> pmxceph: thanks for the info :)
[22:55] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[22:56] <saltlake2> joshd: I tried this "sudo rbd diff rbd/<poolname> --from-snap snap1" and "sudo rbd diff rbd/<poolname>"
[22:56] * linjan (~linjan@80.178.220.195.adsl.012.net.il) Quit (Ping timeout: 480 seconds)
[22:56] <saltlake2> joshd: They all appear to hang.
[22:57] <joshd> saltlake2: and ceph -s says HEALTH_OK ?
[22:57] <saltlake2> joshd: ceph -s shows health_ok and everything is up as expected
[22:58] <joshd> saltlake2: ok, can you try any of those commands with --debug-rbd 20 --debug-ms 1
[22:58] <saltlake2> joshd: yes ..
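For reference, a sketch of the debug invocation joshd asks for, reusing the image and snapshot names from earlier; the debug output goes to stderr, so redirecting it to a file makes it easy to pastebin:
    rbd diff --from-snap snap1 rbd/cyphredata@snap2 --debug-rbd 20 --debug-ms 1 2> rbd-diff-debug.log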
[22:59] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[22:59] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:01] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[23:03] * mykola (~Mikolaj@91.225.200.48) Quit (Quit: away)
[23:04] <saltlake2> joshd: http://pastebin.com/index.php?e=1
[23:04] <saltlake2> joshd: sorry .. sent u wrong link
[23:05] <saltlake2> joshd: http://pastebin.com/mERPrcaa
[23:12] * nitti_ (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[23:16] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[23:17] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) has joined #ceph
[23:20] * mozg (~andrei@37.220.104.190) has joined #ceph
[23:20] <mozg> hello guys
[23:21] <mozg> I would like some advise on setting up an erasure coded pool to be used as my backup
[23:21] <mozg> i have just two osd server with 9 osds in each server
[23:22] <mozg> which I use for my rbd storage
[23:22] <mozg> across another DC I have a similar setup, which I would like to use for backing up the rbd images
[23:22] <mozg> however, I would like to use it for backing up other things and I would like to have more usable space
[23:23] <mozg> can I use just two servers with 18 osds in total for erasure coded pool?
[23:23] <xahare> no one's stopping you, but if one goes down you're in trouble
[23:23] <xahare> i mean when one goes down
[23:23] <mozg> would it provide resilience against one server failure if I am to use osd as the failure domain?
[23:24] <mozg> xahare, so there will be a chance of things going tits up, right?
[23:24] <xahare> yes
[23:24] <xahare> but other than that, it should work
[23:25] <xahare> you could have a dummy box as a 3rd monitor
[23:25] <xahare> then if one of your real nodes goes down, the other will keep going
[23:25] <xahare> but you wouldnt want to do that erasure coded
[23:26] <Sysadmin88> cache pools with erasure coded back end pools working yet?
[23:27] <Sysadmin88> mozg, you probably need more nodes for erasure coding. so you don't lose too many parts if one of your servers dies
[23:28] <mozg> Sysadmin88, so, realistically speaking I should have a normal replicated pool with replica 2 to make sure if one of the servers is down the data will not be lost
[23:29] <mozg> i thought that crush will equally replicate data across both servers
[23:29] <Sysadmin88> you know how erasure coding works?
[23:29] <mozg> i've just read some info on it
[23:29] <Sysadmin88> i was asking about whether it was working since it's relatively new
[23:30] <Sysadmin88> replicas can do 2 nodes... but you probably should try and get 3 nodes or more.
[23:30] <Sysadmin88> but erasure coding wouldn't benefit you with 2 nodes... to survive losing half the OSDs and keep working you might as well use replication
[23:31] <Sysadmin88> and repair is costly with erasure coding
[23:31] <Sysadmin88> you can configure it to do almost anything :)
[23:31] <mozg> yeah, i thought so as well
[23:32] <mozg> wanted to double check
[23:33] <mozg> i guess i really need to have at least 3 or 4 nodes to utilise erasure coding
[23:34] <xahare> is there a reason you specifically want erasure coding?
[23:35] <Sysadmin88> i haven't looked which 'erasure codes' ceph can use. be interesting to see the different redundancy levels
[23:35] <xahare> is there a way to move rbds between pools?
[23:35] <xahare> if so, you can do size 2 for now, then when you get more nodes, make your erasure coded pool and move the blocks to that
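If and when more nodes are available, a rough sketch of creating an erasure-coded pool on a firefly-era cluster; the profile and pool names here are made up, and the k/m values and PG count are only illustrative (the pgcalc link earlier helps with the latter):
    ceph osd erasure-code-profile set backupprofile k=2 m=1 ruleset-failure-domain=host
    ceph osd pool create ecbackup 128 128 erasure backupprofile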
[23:37] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[23:37] <saltlake> joshd: sorry I lost my connection, gotta go I hope we can chat tomorrow
[23:38] * al (d@niel.cx) Quit (Remote host closed the connection)
[23:38] * georgem (~Adium@184.151.190.211) has joined #ceph
[23:38] * al (quassel@niel.cx) has joined #ceph
[23:38] <CephTestC> Does anyone know if I can use the same OSD Cache tier on multiple storage pools with different caching methods?
[23:39] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[23:40] * saltlake2 (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[23:40] * KevinPerks (~Adium@2606:a000:80a1:1b00:e12e:9c26:beb6:7e08) Quit (Ping timeout: 480 seconds)
[23:40] * togdon (~togdon@74.121.28.6) has joined #ceph
[23:41] * al (quassel@niel.cx) Quit (Remote host closed the connection)
[23:41] * al (quassel@niel.cx) has joined #ceph
[23:41] <joshd> saltlake: ok, see you tomorrow
[23:43] * diegows (~diegows@190.190.5.238) has joined #ceph
[23:45] * saltlake (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[23:47] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:53] * mozg (~andrei@37.220.104.190) has left #ceph
[23:54] * mozg (~andrei@37.220.104.190) has joined #ceph
[23:54] <mozg> xahare, i would like to save some space as i want to set up a backup pool
[23:54] <mozg> and i do not really need to have fast access to data
[23:55] <mozg> xarses, i think you can move with rbd cp command
[23:55] <mozg> or mv command
[23:56] * scuttlemonkey is now known as scuttle|afk
[23:56] <mozg> something like rbd mv pool1/image1 pool2/image1
[23:56] <mozg> that should do it
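A hedged sketch of that: rbd cp copies an image into another pool, while rbd mv/rename generally only renames within a pool, so copy-then-delete is the safer pattern:
    rbd cp pool1/image1 pool2/image1
    rbd rm pool1/image1    # only after verifying the copy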

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.