#ceph IRC Log


IRC Log for 2013-09-12

Timestamps are in GMT/BST.

[0:01] <xarses> damn /18 for your public
[0:01] <xarses> nice
[0:01] <mikedawson> terje-: patch for an ubuntu raring apparmor file to get rbd logging and admin sockets http://pastebin.com/raw.php?i=fVXxanqu
[0:03] <mikedawson> xarses: thinking ahead ;-) not quite full yet
[0:03] <terje-> thanks mikedawson, you rule.
[0:04] <xarses> mikedawson, the last time i tried to use a CIDR, the ops guys kept keying the wrong mask anyway so we had to go back to /24's
[0:04] <xarses> sadness
[0:05] <mikedawson> terje-: glad to help. I'm really just trying to be helpful to others so sagewk will help me!
[0:05] <terje-> that's a tall order..
[0:05] <terje-> ;)
[0:06] <terje-> you have 'rbd cache = true' while the docs say 'rbd_cache = true'
[0:06] <xarses> terje both _ and " " are supported
[0:07] <xarses> the docs are horribly inconsistent about it
[0:07] <terje-> well ok then
[0:07] <terje-> huh
[0:07] <xarses> applies to everything inside ceph.conf
[0:07] <dmick> and sometimes -
[0:07] <terje-> unfortunately, these are horrible and rbd cache doesn't seem like it's helping
[0:07] <dmick> maybe always
[0:08] <terje-> these = my vm's
[0:08] <cmdrk> will ceph.conf eventually standardize on one of those conventions?
[0:08] <xarses> cmdrk never!
[0:09] <cmdrk> :)
[0:09] <xarses> i'd rather have up-to-date documentation
[0:09] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:10] <terje-> I don't think this cache setting is taking effect
[0:10] <terje-> I don't see rbd admin sockets in /var/run/ceph/rbd-bleh
[0:10] * terje- scratches head
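A minimal sketch of the kind of [client] section being discussed here (the socket and log paths are placeholders, not taken from the chat; as noted above, spaces and underscores in option names are interchangeable):

    [client]
        rbd cache = true
        admin socket = /var/run/ceph/rbd-$pid.asok   ; placeholder path, $pid expansion assumed
        log file = /var/log/ceph/rbd-$pid.log        ; placeholder path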
[0:12] <ntranger> xarses here is the instruction list that I followed, but it stops after the OSD's.
[0:12] <ntranger> http://ceph.com/howto/deploying-ceph-with-ceph-deploy/
[0:13] <ntranger> I'm kinda lost on the MDS.
[0:13] <mikedawson> terje-: look at /var/log/dmesg for apparmor errors when you start qemu
[0:13] <xarses> ntranger ceph-deploy mds create host
[0:13] <terje-> well, this is actually RHEL and I don't have selinux enabled.. but I'll continue to poke around.
[0:13] <terje-> I think I have the right idea. thanks again.
[0:14] <xarses> sit back and enjoy the ceph-deploy magic
[0:15] <dmick> terje-: strace on the right proc to look for the create can be a big help
[0:15] <terje-> alrighty
[0:15] * terje- futz's
[0:15] <mikedawson> terje-: the best thing to do is benchmark rbd by itself 'rbd bench-write --pool volumes --io-size 8192 --io-threads 16 --no-rbd-cache --io-pattern rand test-volume'. Toggling from --no-rbd-cache to --rbd-cache should show the impact
[0:16] <ntranger> xarses this will associate all 3 of my nodes to 1 host?
[0:16] <mikedawson> terje-: first do 'rbd create --size 102400 --pool volumes test-volume'
[0:16] <xarses> ntranger, the clients use the monitors to discover the osd's and then work directly with the osd's to store and retrieve data
[0:18] <terje-> yea, so I did that
[0:18] <terje-> I have a pool called mgmt and --rbd-cache is like 500x faster
[0:19] <terje-> which is cool but in the VM's currently, it seems like it's only 3x faster
[0:19] <mikedawson> terje-: yeah, it should be a striking improvement
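Putting mikedawson's two commands together, the comparison he describes looks roughly like this (pool and image names as in his example):

    rbd create --size 102400 --pool volumes test-volume
    rbd bench-write --pool volumes --io-size 8192 --io-threads 16 --no-rbd-cache --io-pattern rand test-volume
    rbd bench-write --pool volumes --io-size 8192 --io-threads 16 --rbd-cache --io-pattern rand test-volume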
[0:20] * nwat (~nwat@eduroam-237-79.ucsc.edu) Quit (Ping timeout: 480 seconds)
[0:21] <mikedawson> terje-: perhaps reads are holding you back, not writes
[0:21] <terje-> perhaps
[0:24] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[0:27] <ntranger> xarses okay, I ran that command. I should be able to type ceph health, and get status, correct?
[0:27] <xarses> you should have been able to do a ceph -s (ceph health) anytime after the monitors were running
[0:29] <ntranger> ok. I'm getting a weird error, which makes me think the time on 2 of my nodes is off
[0:29] <ntranger> HEALTH_WARN 18 pgs peering; 18 pgs stuck inactive; 18 pgs stuck unclean; clock skew detected on mon.ceph02, mon.ceph03
[0:30] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[0:31] <mikedawson> ntranger: use ntp to get clocks in sync, then restart the monitors. That should clear the last part. the peering/inactive/unclean PGs are a different issue
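A rough sketch of that sequence on each monitor host (service names and init syntax vary by distro; the time server is a placeholder):

    service ntpd stop                 # or 'service ntp stop' on Debian/Ubuntu
    ntpdate pool.ntp.org              # one-shot sync; substitute your own time server
    service ntpd start
    ntpq -p                           # offsets should settle into the millisecond range
    service ceph restart mon          # restart the local monitor so the skew warning clears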
[0:37] * yasu` (~yasu`@dhcp-59-166.cse.ucsc.edu) has joined #ceph
[0:37] * rturk-away is now known as rturk
[0:38] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:41] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[0:43] * malcolm (~malcolm@silico24.lnk.telstra.net) has joined #ceph
[0:44] * rturk is now known as rturk-away
[0:48] <buck> centos newb question: configure is telling me that it cannot find tcmalloc but I've installed gperftools-lib.x86_64. Is there something else that I should be installing or is this known to not exist?
[0:48] <buck> centos 6.4
[0:51] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:54] <dmick> does that package contain the lib?
[0:54] <buck> sorted it out, needed the gperftools-devel packages
[0:54] * dmick can never remember the dpkg -L equiv for rpm.... -ql?
[0:54] <dmick> k
[0:54] <buck> which yum wasn't returning on a 'yum search tcmalloc'
[0:55] <dmick> ah, for headers, for configure test exec build
[0:59] * shang (~ShangWu@207.96.227.9) has joined #ceph
[1:00] * andreask1 (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[1:00] * ChanServ sets mode +v andreask1
[1:00] * andreask is now known as Guest6385
[1:00] * andreask1 is now known as andreask
[1:00] * Guest6385 (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[1:01] <terje-> yum whatprovides */tcmalloc
[1:03] * malcolm (~malcolm@silico24.lnk.telstra.net) Quit (Quit: Konversation terminated!)
[1:08] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[1:08] <buck> terje-: thanks
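For reference, the CentOS 6 package dance being described (package names as I recall them; the runtime lib alone is not enough for ./configure):

    rpm -ql gperftools-libs | grep tcmalloc    # runtime library only
    yum whatprovides '*/libtcmalloc*'          # find which packages own the lib/headers
    yum install gperftools-devel               # headers that the configure test needs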
[1:30] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[1:30] <cmdrk> is it safe to mix underlying filesystem types with OSDs ?
[1:31] <cmdrk> for example i have disks as XFS and other as BTRFS
[1:33] <mech422> oh! that reminds me... is it safe to mount a RBD read-only from multiple locations simulataneously ? I'd like to use 1 RBD for the 'base' of an overlayFS
[1:35] <joshd> yes to both
[1:35] <mech422> cool - Thanks :-)
[1:36] <mech422> that'll cut back on vm 'root' images a ton!
[1:37] <dmick> as long as it's *reeeeallly* readonly
[1:37] <joshd> ideally you'd set the block device as read-only (when mapping with the kernel, or via qemu) so the os doesn't get any funny ideas about replaying journals
[1:38] <mech422> by 'kernel' you mean the 'guest vm' kernel (ie mount ro in /etc/fstab) ?
[1:38] <joshd> no, I meant if you were using the kernel driver
[1:38] <dmick> or updating superblocks or the like
[1:38] <joshd> for rbd
[1:39] <mech422> ahh - no, not using kernel driver - just qemu - I'll look for a 'read-only' tag in the virsh xml format
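A rough sketch of what that looks like in the libvirt domain XML (pool/image name and monitor address are made up for illustration; <readonly/> is the tag being looked for):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='volumes/base-image'>
        <host name='10.10.1.2' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <readonly/>
    </disk>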
[1:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:40] <mech422> I was going to have a single base image for each given 'flavor' , OverlayFS mounted with a per-vm r/w layer
[1:42] <xarses> i thought everyone used fuse nowadays instead of overlayfs
[1:42] <mech422> seems to be a bit up in the air...aufs, unionFS, OverlayFS
[1:42] <xarses> still
[1:42] <xarses> jeese
[1:43] <mech422> last I heard, they were trying to move OverlayFS into kernel
[1:43] <mech422> and linus bought off on it
[1:43] <mech422> so it seems like a safe bet
[1:43] <mech422> (its also used by ubuntu live, slax live, etc etc )
[1:45] * mschiff_ (~mschiff@46.59.224.175) has joined #ceph
[1:47] <gregaf1> sagewk: can you look at wip-4221? short patch
[1:48] <gregaf1> should probably backport to dumpling as well
[1:53] * mschiff (~mschiff@46.59.224.175) Quit (Ping timeout: 480 seconds)
[1:55] * grepory (~Adium@143.sub-70-192-193.myvzw.com) has joined #ceph
[1:58] * grepory1 (~Adium@143.sub-70-192-193.myvzw.com) has joined #ceph
[1:58] * grepory (~Adium@143.sub-70-192-193.myvzw.com) Quit (Read error: Connection reset by peer)
[1:59] <sagewk> gregaf1: i would put the try/catch in the caller?
[1:59] * LeaChim (~LeaChim@054073b1.skybroadband.com) Quit (Ping timeout: 480 seconds)
[2:02] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[2:05] <gregaf1> sagewk: you don't think we want to keep the caller's interface clean?
[2:06] <gregaf1> we don't really do exceptions most places so having the contract be "oh, and you need to catch this exception" would be a bit weird — returning NULL is already well-defined
[2:06] <gregaf1> (decode_event() can return NULL and already set up the try-catch pattern that I'm using here)
[2:09] * berant (~blemmenes@24-236-241-163.dhcp.trcy.mi.charter.com) has joined #ceph
[2:09] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[2:11] * berant (~blemmenes@24-236-241-163.dhcp.trcy.mi.charter.com) Quit ()
[2:14] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (Ping timeout: 480 seconds)
[2:16] * grepory1 (~Adium@143.sub-70-192-193.myvzw.com) Quit (Quit: Leaving.)
[2:18] <sagewk> oh, didn't realize it could already return NULL
[2:18] <sagewk> that means 'end of journal' or something?
[2:18] <sagewk> it still misses the type decode at the top...
[2:21] <sagewk> hmm, there's only 1 caller.. i'd still rather see it there. error handling is something that it is nice to see explicitly
[2:22] <gregaf1> sagewk: NULL already means "failed to decode event from this point"
[2:23] <gregaf1> the type decode can't fail unless there's not enough data to read that (whatever-sized) int from, in which case it would already fail out
[2:24] <gregaf1> and error handling should be handled as far down the stack as it can be...
[2:24] <gregaf1> otherwise it just mucks up your higher-level code
[2:32] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[2:32] <sagewk> yeah, i guess that's ok then!
[2:32] <sagewk> just catch the first ::decode(type...) too
[2:37] * sagelap1 (~sage@2600:1012:b012:cd8d:e186:c089:3fcf:2938) has joined #ceph
[2:37] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[2:38] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[2:38] * ChanServ sets mode +v andreask
[2:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:40] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Ping timeout: 480 seconds)
[2:40] * freedomhui (~freedomhu@117.79.232.247) has joined #ceph
[2:49] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has left #ceph
[2:52] * mschiff_ (~mschiff@46.59.224.175) Quit (Remote host closed the connection)
[2:52] * alfredo|afk (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[2:59] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[2:59] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[3:00] * yy-nm (~Thunderbi@218.74.35.201) has joined #ceph
[3:06] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[3:11] * berant (~blemmenes@24-236-241-163.dhcp.trcy.mi.charter.com) has joined #ceph
[3:14] * yanzheng (~zhyan@134.134.137.71) has joined #ceph
[3:14] * berant_ (~blemmenes@gw01.ussignalcom.com) has joined #ceph
[3:14] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[3:15] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) Quit (Read error: Operation timed out)
[3:17] * berant (~blemmenes@24-236-241-163.dhcp.trcy.mi.charter.com) Quit (Read error: Operation timed out)
[3:17] * berant_ is now known as berant
[3:17] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) has joined #ceph
[3:18] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[3:18] * xarses (~andreww@204.11.231.50.static.etheric.net) Quit (Ping timeout: 480 seconds)
[3:22] * nerdtron (~kenneth@202.60.8.252) has joined #ceph
[3:23] * freedomhui (~freedomhu@117.79.232.247) Quit (Quit: Leaving...)
[3:24] <ntranger> I've gone through and set the ntp.conf to point at our time server, and
[3:24] <ntranger> I'm still getting HEALTH_WARN clock skew detected on mon.ceph02, mon.ceph03
[3:24] <ntranger> after reboot
[3:25] <ntranger> the pgs errors are gone now
[3:25] <dmick> ntpq can tell you how well ntp is doing
[3:25] <dmick> ntpq -p, maybe?...
[3:26] * xiaoxi (~xiaoxi@192.102.204.38) has joined #ceph
[3:26] <ron-slc> sometimes the ntp daemon can take 10-15 minutes to fix a clock that is a few minutes off. If the clock is too far off NTP won't adjust it. You can stop the ntp daemon and run ntpdate -sb for an immediate update.
[3:26] <ron-slc> if your clock is ahead, it will then stall the clock, until realtime catches up.
[3:27] * yy-nm (~Thunderbi@218.74.35.201) Quit (Quit: yy-nm)
[3:28] <ntranger> it's weird, it just shows them as a couple seconds off
[3:31] <ntranger> I wonder if the time was way off when I created the mon's and it's thrown them off
[3:31] <dmick> a couple seconds is a long way if they're connecting to ntp
[3:31] <dmick> should be in ms
[3:33] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[3:34] <dmick> default allowable is 50ms
[3:34] <ntranger> will this cause a problem for ceph? the last thing I need to do is create a mount, and start moving files, but I don't want to proceed until I get this resolved, in case it messes with anything else
[3:34] <dmick> yes; paxos depends on having a reasonably-accurate clock, hence the checks
[3:35] <ntranger> I kinda thought so, but wasn't 100% sure
[3:35] <ntranger> I'll go get a drink and come back and check it in a few minutes. :)
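The 50ms threshold dmick mentions corresponds to the monitor option below (value in seconds); raising it is a last resort compared to fixing ntp, shown only for reference:

    [mon]
        mon clock drift allowed = 0.05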
[3:38] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) has joined #ceph
[3:39] * freedomhui (~freedomhu@117.79.232.248) has joined #ceph
[3:40] * zackc (~zack@0001ba60.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:40] * newbie|2 (~kvirc@111.172.32.75) has joined #ceph
[3:41] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:43] * glzhao (~glzhao@117.79.232.248) has joined #ceph
[3:50] <xiaoxi> hello... I am xiaoxi from Intel asia R&D..great to have a chance as geek on duty today
[3:51] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[4:00] <mech422> hi xiaoxi :-)
[4:01] <xiaoxi> hi!
[4:04] * angdraug (~angdraug@204.11.231.50.static.etheric.net) Quit (Quit: Leaving)
[4:12] * bandrus (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[4:17] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) Quit (Ping timeout: 480 seconds)
[4:20] <xiaoxi> a bit lonely,haha
[4:21] <mech422> heh - if it will make you feel better, I'm just about to start the 'quick start' of my cluster :-)
[4:21] <mech422> I'm sure I'll have lots of dumb questions to ask in about 10-15 minutes :-)
[4:22] <xiaoxi> I am OK with it, haha
[4:22] <ntranger> hey dmick, I'm still getting that same error. anything you think I might look at next?
[4:23] <mech422> I'm trying to add as much of the setup to saltstack ( http://www.saltstack.org ) as I can
[4:23] <dmick> ntranger: did you check your ntpq results?
[4:26] * yy-nm (~Thunderbi@218.74.35.201) has joined #ceph
[4:27] <ntranger> when I type ntpq -p, I get "connection refused"
[4:27] <dmick> seems like you need to get straight about the state of your ntpd's
[4:28] <ntranger> yeah
[4:33] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[4:35] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[4:37] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) Quit (Remote host closed the connection)
[4:37] * sagelap1 (~sage@2600:1012:b012:cd8d:e186:c089:3fcf:2938) Quit (Read error: No route to host)
[4:38] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[4:41] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[4:41] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[4:41] <dlan> xiaoxi: is there any project in intel working with ceph? just curious.
[4:43] <xiaoxi> Basically, yes, we have several teams working on ceph; my team mainly focuses on performance, including ceph-as-object-store and ceph-as-ebs.
[4:43] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[4:45] <dlan> xiaoxi: sounds great!
[4:45] <xiaoxi> some customer-facing teams even try to persuade customers to use ceph in their private cloud, instead of an enterprise solution (emc or something)
[4:45] <dlan> is that OTC center?
[4:46] <xiaoxi> Not exactly, but similar; we are in a different center, but also inside ssg
[4:47] * sagelap (~sage@2600:1012:b012:cd8d:e186:c089:3fcf:2938) has joined #ceph
[4:47] * berant (~blemmenes@gw01.ussignalcom.com) Quit (Quit: berant)
[4:47] <yanzheng> xiaoxi, how does radosgw performance compare to swift
[4:51] * houkouonchi-work (~linux@12.248.40.138) Quit (Remote host closed the connection)
[4:51] <xiaoxi> the radosgw itself has some scalability issue, we have to work around this by starting some radosgw instances on the proxy node, and putting a load balancer in front of them.
[4:53] <xiaoxi> With these tricks, ceph's performance seems better than swift, mainly because swift is too cpu-intensive and likely to hit a cpu bottleneck. We are refreshing the data with a new bay and new enterprise-level HDDs
[4:54] <yanzheng> glad to hear that
[4:55] * Karcaw_ (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[4:57] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) Quit (Ping timeout: 480 seconds)
[4:57] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[5:00] * angdraug (~angdraug@c-98-248-39-148.hsd1.ca.comcast.net) has joined #ceph
[5:04] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[5:05] * fireD (~fireD@93-139-157-255.adsl.net.t-com.hr) has joined #ceph
[5:05] * freedomhui (~freedomhu@117.79.232.248) Quit (Quit: Leaving...)
[5:05] <mech422> pro-tip: shutting down your ceph cluster while the kernel rbd module thinks it's doing something ends up wedging it :-P
[5:06] <mech422> I was trying to rmmod it, but it wasn't having any of that
[5:06] <lurbs> Speaking of Swift et al, anyone know if much work has been done to incorporate usage statistic reporting from Ceph/RADOS Gateway into Ceilometer?
[5:06] * mrprud (~mrprud@ANantes-554-1-246-148.w2-1.abo.wanadoo.fr) has joined #ceph
[5:07] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[5:07] * fireD_ (~fireD@93-142-237-144.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:08] * S0d0 (~joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[5:08] <sagelap> yanzheng: comment on pull 590
[5:08] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[5:09] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[5:09] <sagelap> mech422: there is umount -f for the fs, but nothing similar for unmapping a block device that is in use
[5:10] * mrprud (~mrprud@ANantes-554-1-246-148.w2-1.abo.wanadoo.fr) Quit (Read error: Operation timed out)
[5:10] <mech422> sagelap: yeah - my bad for not noticing it was there...
[5:11] <mech422> it wasn't mounted, but somehow the ubuntu initramdisk update thing started probing it
[5:11] <xiaoxi> sagelap: a long time ago, at the dumpling design summit, there was a BP wanting a restful admin API for ceph, and even to export some profiling metrics via the admin_socket, but I lost track of this .. has it been implemented?
[5:12] <yanzheng> sagelap, sorry. re-pushed
[5:12] <dmick> xiaoxi: there is a "rest"ful interface in dumpling, yes. see ceph-rest-api
[5:12] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[5:13] * xmltok (~xmltok@pool101.bizrate.com) Quit (Ping timeout: 480 seconds)
[5:13] <dmick> statistics have been in the admin socket for some time; I don't think they've changed drastically, except that the schema may have been cleaned up
[5:13] <dmick> I'm just working on the Ceph collector for Diamond as we speak
[5:13] <dmick> https://github.com/BrightcoveOS/Diamond
[5:14] <xiaoxi> dmick: is there any doc available? or the commit id I can look at? from the doc I can only find some rados GW related admin api
[5:14] <dmick> there is a manpage
[5:14] * clayb (~kvirc@199.172.169.79) Quit (Read error: Connection reset by peer)
[5:15] <sagelap> xiaoxi: there is 'ceph daemon <name> perf dump' that can be slurped up by collectd or diamond or whatever; that has been there for ages actually (although it was 'ceph --admin-daemon <path> ..')
[5:15] <sagelap> there is also a new thing, 'ceph osd perf' that has a few select stats available via the monitor
[5:15] <dmick> !
[5:15] <sagelap> also, what dmick said. i need to learn to read ahead before replying :)
[5:16] <dmick> no, your information was completely disjoint from mine
[5:16] * haomaiwang (~haomaiwan@117.79.232.248) Quit (Read error: Connection reset by peer)
[5:17] * haomaiwang (~haomaiwan@li498-162.members.linode.com) has joined #ceph
[5:19] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[5:19] <xiaoxi> good to know, the performance counter is really helpful when doing performance tuning and profiling
[5:19] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Bye!)
[5:20] * haomaiwa_ (~haomaiwan@117.79.232.248) has joined #ceph
[5:21] <xiaoxi> currently I am using an ugly way to do it: using a script to call ceph --admin-daemon .... perf dump >> log_file every few seconds
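A sketch of that polling approach (socket path, interval and output file are assumptions):

    while true; do
        ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump >> /var/log/ceph/osd.0-perf.log
        sleep 5
    done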
[5:22] * tserong_ (~tserong@203-57-208-89.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[5:23] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit ()
[5:24] * jeff-YF (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) has joined #ceph
[5:26] * carif (~mcarifio@146-115-183-141.c3-0.wtr-ubr1.sbo-wtr.ma.cable.rcn.com) has joined #ceph
[5:27] * haomaiwang (~haomaiwan@li498-162.members.linode.com) Quit (Ping timeout: 480 seconds)
[5:28] * tserong_ (~tserong@203-57-208-89.dyn.iinet.net.au) has joined #ceph
[5:29] * jeff-YF_ (~jeffyf@67.23.123.228) has joined #ceph
[5:29] * sprachgenerator (~sprachgen@va-71-48-143-23.dhcp.embarqhsd.net) Quit (Quit: sprachgenerator)
[5:32] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Ping timeout: 480 seconds)
[5:32] * jeff-YF (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[5:32] * jeff-YF_ is now known as jeff-YF
[5:35] * paravoid (~paravoid@scrooge.tty.gr) Quit (Ping timeout: 480 seconds)
[5:36] * paravoid (~paravoid@scrooge.tty.gr) has joined #ceph
[5:37] <ntranger> dmick yeah, I'm an idiot. ntpd and ntpdate were both off in chkconfig
[5:37] <ntranger> I turned them on and now getting results
[5:37] <dmick> woot
[5:39] <ntranger> and how's about that. HEALTH_OK. :)
[5:39] * gaveen (~gaveen@175.157.98.206) Quit (Quit: Leaving)
[5:40] * shang (~ShangWu@207.96.227.9) Quit (Ping timeout: 480 seconds)
[5:43] * AfC (~andrew@2407:7800:200:1011:6e88:14ff:fe33:2a9c) has joined #ceph
[5:43] <dmick> \o/
[5:48] * sagelap (~sage@2600:1012:b012:cd8d:e186:c089:3fcf:2938) Quit (Read error: Connection reset by peer)
[5:51] * sagelap (~sage@2600:1012:b012:cd8d:e186:c089:3fcf:2938) has joined #ceph
[5:58] * grepory (~Adium@2600:1003:b012:ad7b:4d1:414e:caf7:d91a) has joined #ceph
[6:07] * sagelap (~sage@2600:1012:b012:cd8d:e186:c089:3fcf:2938) Quit (Ping timeout: 480 seconds)
[6:09] * jmlowe1 (~Adium@c-50-172-105-141.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[6:17] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[6:18] * jeff-YF (~jeffyf@67.23.123.228) Quit (Quit: jeff-YF)
[6:25] * yasu` (~yasu`@dhcp-59-166.cse.ucsc.edu) Quit (Remote host closed the connection)
[6:26] * vbellur (~vijay@122.166.159.63) has joined #ceph
[6:40] * newbie|2 (~kvirc@111.172.32.75) Quit (Ping timeout: 480 seconds)
[6:43] * yasu` (~yasu`@99.23.160.231) has joined #ceph
[6:58] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[7:04] * newbie|2 (~kvirc@111.172.32.75) has joined #ceph
[7:12] * sagelap (~sage@2600:1010:b007:5bac:e186:c089:3fcf:2938) has joined #ceph
[7:19] * jmlowe (~Adium@149-166-55-33.dhcp-in.iupui.edu) has joined #ceph
[7:22] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[7:26] * grepory (~Adium@2600:1003:b012:ad7b:4d1:414e:caf7:d91a) has left #ceph
[7:32] * penguinLord (~penguinLo@14.139.82.8) Quit (Quit: irc2go)
[7:33] * vbellur (~vijay@122.166.159.63) Quit (Ping timeout: 480 seconds)
[7:41] * jmlowe (~Adium@149-166-55-33.dhcp-in.iupui.edu) Quit (Quit: Leaving.)
[7:44] <mech422> well.. I hosed that box :-P
[7:45] <mech422> 2+ hours trying to figure out how to get the java based IPMI console to work, and it turns out something is hosed with my module system
[7:45] <mech422> screw it - file that for tommorrow....
[7:45] <mech422> now, back to the cluster!!
[7:53] <nigwil> I think this message could be fixed, when Ceph says "wait" it really means "I'm not going to do the change, try again later":
[7:53] <nigwil> root@ceph3:~# ceph osd pool set data pgp_num 3000
[7:53] <nigwil> still creating pgs, wait
[7:53] <nigwil> I thought pgp_num was changed, went and looked and it was the old value
[7:53] <mech422> it's for DMV values of 'wait' ? :-)
[7:54] <nigwil> yes, it is being far too polite :-)
[7:55] * AfC (~andrew@2407:7800:200:1011:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[7:55] * vbellur (~vijay@nat-pool-blr-t.redhat.com) has joined #ceph
[7:59] * xiaoxi (~xiaoxi@192.102.204.38) Quit (Remote host closed the connection)
[8:00] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[8:03] * sagelap (~sage@2600:1010:b007:5bac:e186:c089:3fcf:2938) Quit (Read error: No route to host)
[8:03] * sagelap1 (~sage@2600:1010:b007:5bac:e5fc:33ad:d879:cc84) has joined #ceph
[8:10] * itamar_ (~itamar@82.166.185.149) has joined #ceph
[8:15] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[8:15] <itamar_> Hi all,
[8:16] <mech422> Morning
[8:16] <itamar_> I am interested in implementing rbd cache writeback on my libvirt machines.
[8:16] <itamar_> can anyone comment about the risk of data loss when suffering from a crash such as a power failure?
[8:22] * carif (~mcarifio@146-115-183-141.c3-0.wtr-ubr1.sbo-wtr.ma.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[8:22] * mrprud (~mrprud@ANantes-554-1-246-148.w2-1.abo.wanadoo.fr) has joined #ceph
[8:23] * mrprud (~mrprud@ANantes-554-1-246-148.w2-1.abo.wanadoo.fr) Quit (Remote host closed the connection)
[8:24] <xarses> iirc writeback = ack the io once it's cached and flush later; writethrough = wait for the io to reach the backing store before acking
[8:25] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[8:27] <xarses> in the ack required case, there shouldn't be any data loss. I'm not sure about ceph, but swift requires at least two copies before the write is acked; I'd have to assume that ceph does something similar based on the crushmap requirements
[8:29] <xarses> in the no ack case, I can't speak to exactly how ceph handles it, but I'd assume that like in most systems, there is a risk of data loss because the system can't guarantee the writes unless your hardware can
[8:30] <xarses> you'd probably best probe the inktank guys in PDT (GMT-8) daytime or post a comment to the mailing list so someone better placed can comment
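For context, per-disk writeback caching in libvirt is just the driver cache attribute, alongside 'rbd cache = true' in the [client] section discussed earlier in this log (a sketch, not an answer to the data-loss question itself):

    <driver name='qemu' type='raw' cache='writeback'/>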
[8:35] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[8:35] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[8:39] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[8:42] * Vjarjadian (~IceChat77@05453253.skybroadband.com) Quit (Quit: Hard work pays off in the future, laziness pays off now)
[8:45] <itamar_> thanks xarses, I'd open a ticket to Inktank (we're customers) but wanted to try and get some answers before sundown ;)
[8:45] <itamar_> Thanks!
[8:48] <xarses> A large number of the inktank guys are in here, it's just quite late for them
[8:51] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) has joined #ceph
[8:55] <nigwil> "late"=midnight, I would hope they are asleep :-)
[8:57] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) Quit (Read error: Operation timed out)
[9:08] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[9:13] <mech422> hmm - does ceph-deploy osd format drives/partitions for you ?
[9:13] <nigwil> yes it can
[9:13] <mech422> using this notation:
[9:13] <mech422> ceph-deploy osd create ceph-node:sdb:/dev/ssd1 ceph-node:sdc:/dev/ssd2
[9:13] <nigwil> if you do a --zap-disk it will overwrite the drive too
[9:14] <nigwil> or you can target partitions
[9:14] <mech422> I'd like to use all /dev/sdb for ceph, and use /dev/sda1 for the journal - BUT my OS is on /dev/sda as well
[9:14] <nigwil> journals can co-exist on the OSD
[9:15] <mech422> I was just splitting them to be fancy and get a lil more performance....
[9:15] <mech422> but I don't see where I'm supposed to format them, or list the 'extra attribute' stuff
[9:16] <nigwil> it will do that
[9:16] <mech422> ugh - what are those called ? xfs has them , ext4 'fakes' it ?
[9:16] <nigwil> xattr's
[9:16] <mech422> yeah - thanks! :-)
[9:16] <nigwil> Extended Attributes
[9:16] <nigwil> you should use XFS
[9:17] <mech422> btrfs is still not prime time eh ? damn...I keep wanting to play with that...
[9:17] <mech422> so in my case - if I format the drives manually - how do I tell ceph-deploy that they are XFS filesystems ?
[9:17] <nigwil> you can have a mix of OSD formats, maybe choose to run BTRFS on a couple to see
[9:18] <nigwil> by default it will choose XFS
[9:18] <mech422> sweet
[9:18] <nigwil> or do: --fs-type btrfs
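Putting nigwil's flags together, a single OSD with its journal on another device would look something like this (host and device names follow the earlier example; flag spellings can vary between ceph-deploy versions):

    ceph-deploy osd create --zap-disk --fs-type xfs ceph-node:sdb:/dev/ssd1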
[9:19] <mech422> now watch me format the wrong drives :-;
[9:19] <nigwil> oh don't do that :-)
[9:19] <nigwil> you'll have a bad day...
[9:19] <nigwil> time
[9:19] <mech422> I'm on like 48 hours now...
[9:19] <mech422> dunno whats wrong with me - can't sleep
[9:20] <wogri_risc> mech422 should stop drinking coffee :)
[9:20] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:20] <mech422> I've worked nights most of my life - so now, my body is just sorta "hey! lets go!"
[9:24] <mech422> so will ceph take care of mount options using ceph-deploy ?
[9:25] <nigwil> yes
[9:25] <mech422> or do I still need to add stuff to /etc/fstab and mount it ? oh ... coolio!
[9:25] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:26] <mech422> ceph-deploy seems a lot 'smarter' than a few months ago :-D
[9:26] <nigwil> for small test-setups, ceph-deploy does a good job, only thing I've noticed so far is the PGs are too small by default, but that can be fixed afterwards
[9:27] <nigwil> it doesn't support more exotic networking setups either, but that can be added later
[9:29] * allsystemsarego (~allsystem@188.25.134.128) has joined #ceph
[9:40] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has left #ceph
[9:41] <nerdtron> is there a way to lower the assigned PGs on a pool?
[9:42] <nigwil> if you set the pool to a smaller value, does that work?
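For reference, bumping the counts afterwards is the 'ceph osd pool set' command nigwil pasted earlier in this log; as far as I know pg_num can only be increased, not decreased (numbers here are illustrative):

    ceph osd pool set data pg_num 512
    ceph osd pool set data pgp_num 512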
[9:43] <mech422> hmm - the OSD's didn't come up
[9:43] <mech422> health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
[9:44] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[9:44] * mschiff (~mschiff@port-49377.pppoe.wtnet.de) has joined #ceph
[9:48] * Bada (~Bada@195.65.225.142) has joined #ceph
[9:49] <mech422> odd - I formatted /dev/sdb as XFS, but ceph-deploy disk list shows this line:
[9:49] <mech422> [storx2][INFO ] /dev/sdb1 ceph data, prepared, cluster ceph
[9:49] <mech422> looks like it re-partitioned the drive ?
[9:50] * LeaChim (~LeaChim@054073b1.skybroadband.com) has joined #ceph
[9:51] * jmlowe (~Adium@2601:d:a800:511:cd78:1131:e333:3b45) has joined #ceph
[9:54] <loicd> yanzheng: https://github.com/ceph/ceph/pull/590#issuecomment-24296515 " I have trouble to access the teuthology machines because openvpn traffic is blocked by the China great firewall." This is bound to be a collector ;-)
[9:55] <yanzheng> ;)
[9:57] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[9:57] * ChanServ sets mode +v andreask
[9:57] <mech422> ceph-deploy doesn't add a 'OSD' section to ceph.conf? is there someplace I can look to see what it thinks the OSDs are using for storage ?
[9:58] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[9:58] * Clabbe (~oftc-webi@alv-global.tietoenator.com) has joined #ceph
[9:59] <Clabbe> Hi, Im trying to create a ceph cluster manually for puppet deployment
[9:59] <Clabbe> But I seem to be missing some steps? I have created and started a monitor successfully but now I have issues with starting the osd
[10:00] * jmlowe (~Adium@2601:d:a800:511:cd78:1131:e333:3b45) Quit (Ping timeout: 480 seconds)
[10:00] <Clabbe> At what point do I need to create the client admin key?
[10:04] <Clabbe> ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' > /etc/ceph/keyring is the command I try to execute
[10:05] <Clabbe> Do I have to start the monitor before this?
[10:10] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[10:14] * ScOut3R_ (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[10:16] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Read error: Operation timed out)
[10:16] <Clabbe> ceph -k /var/lib/ceph/mon/mon.1/keyring -c /etc/ceph/ceph.conf auth list just gives me "Error EACCES: access denied"
[10:19] <Clabbe> anyone?
[10:22] <mech422> sorry man - I'm still working thru the quickstart :-)
[10:33] <mech422> ceph-deploy new could use a '--clean' flag - if you try to remake your cluster, there are ceph droppings all over - /etc/ceph/ceph.conf existing causes the python config writing module to die:
[10:33] * KindTwo (~KindOne@h130.212.89.75.dynamic.ip.windstream.net) has joined #ceph
[10:33] <mech422> crud - it's not in my scrollback anymore - but the config writer wants a --force-overwrite switch
[10:33] <mech422> and refuses to overwrite the existing file
[10:34] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:34] * KindTwo is now known as KindOne
[10:37] * yasu` (~yasu`@99.23.160.231) Quit (Remote host closed the connection)
[10:45] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[10:45] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[10:46] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:52] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) has joined #ceph
[10:54] * xdeller_ (~xdeller@91.218.144.129) Quit (Quit: Leaving)
[10:54] * xdeller (~xdeller@91.218.144.129) has joined #ceph
[10:55] * yasu` (~yasu`@99.23.160.231) has joined #ceph
[10:59] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:59] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[11:00] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[11:04] * BManojlovic (~steki@91.195.39.5) Quit (Remote host closed the connection)
[11:05] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[11:06] * yanzheng (~zhyan@134.134.137.71) Quit (Remote host closed the connection)
[11:06] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[11:07] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[11:07] * ChanServ sets mode +v andreask
[11:10] * yy-nm (~Thunderbi@218.74.35.201) Quit (Quit: yy-nm)
[11:13] * claenjoy (~leggenda@37.157.33.36) has joined #ceph
[11:14] * claenjoy (~leggenda@37.157.33.36) has left #ceph
[11:16] * yasu` (~yasu`@99.23.160.231) Quit (Remote host closed the connection)
[11:16] * yasu` (~yasu`@99.23.160.231) has joined #ceph
[11:16] * yasu` (~yasu`@99.23.160.231) Quit (Remote host closed the connection)
[11:20] * allsystemsarego (~allsystem@188.25.134.128) Quit (Quit: Leaving)
[11:30] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[11:31] <mech422> Whee! we have cluster
[11:44] * pieter_ (~pieter@105-236-155-27.access.mtnbusiness.co.za) has joined #ceph
[11:45] <nerdtron> Clabbe, check the permissions of the ceph.conf file and the keyring
[11:45] <pieter_> hi guys, I've created a ceph cluster with ceph-deploy. 3x mons, 2x osds. Problem is, the servers with osd on them don't automatically start when rebooted
[11:45] <nerdtron> it should be 644
[11:45] <pieter_> I need to manually activate osd again
[11:45] <nerdtron> pieter_, i have the same problem
[11:46] <pieter_> (I notice my ceph.conf doesn't have any osd info inside it.)
[11:46] <nerdtron> i can't make them activate on their own so i added a script on the startup that will activate the osd
[11:46] <nerdtron> pieter_, you followed the quickstart?
[11:46] <mech422> pieter_: Mine either - strange eh ?
[11:46] <pieter_> ah, yes nerdtron
[11:46] <nerdtron> mine too..
[11:46] <mech422> but mine DO restart....somehow ? :-P
[11:47] <nerdtron> mech422, what did you do?
[11:47] <mech422> errr....Magic ?
[11:47] <nerdtron> you mean, when a node ( not the admin) reboots, the osd are activated again?
[11:48] <mech422> the problem I had was I had to xfs format my space FIRST...
[11:48] <mech422> then I just: ceph-deploy osd create storx22:/dev/sdb:/dev/sda1
[11:48] <mech422> yeah - I just rebooted the whole cluster to check
[11:48] <mech422> granted -its only worked once so far :-P
[11:48] <nerdtron> i did that too... i formatted the drives first with xfs
[11:49] <nerdtron> mech422, nah...on a working cluster, reboot one node and we'll talk
[11:49] <mech422> nerdtron: sounds like a plan - pick a number from 1 to 5 ?
[11:50] <nerdtron> 4
[11:50] <mech422> oh - I found ceph-deploy confuses itself if you don't 'cleanup' between recreating ... I made the cluster like a dozen times tonight and ceph-deploy was really not happy
[11:50] <mech422> nerdtron: we should know in about 2 minutes....
[11:52] <mech422> nerdtron: so far so good: osdmap e39: 5 osds: 5 up, 5 in
[11:52] <pieter_> mech422: please show your ceph.conf
[11:53] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) has joined #ceph
[11:53] <mech422> lol - I just got that skimpy thing ceph-deploy creates...
[11:53] <pieter_> lol :) what os?
[11:53] <mech422> debian
[11:53] <pieter_> fuck, me too
[11:53] <nerdtron> i'm on ubuntu
[11:53] <pieter_> ah, well I'm on ubuntu server
[11:53] <mech422> [global]
[11:53] <mech422> fsid = 8e9503eb-d783-4fea-afdd-5bd785339a4c
[11:53] <mech422> mon_initial_members = storx2, storx11, storx12, storx21, storx22
[11:53] <mech422> mon_host = 10.10.1.2,10.10.1.11,10.10.1.12,10.10.1.21,10.10.1.22
[11:53] <mech422> auth_supported = cephx
[11:53] <mech422> osd_journal_size = 1024
[11:53] <mech422> filestore_xattr_use_omap = true
[11:53] <pieter_> maybe an ubuntu related bug?
[11:53] <nerdtron> nothing special on conf either
[11:54] <pieter_> as ceph-deploy did mention something about upstart...
[11:54] <mech422> I _did_ notice ceph-deploy is unhappy if you don't create all your stuff at once...
[11:54] <mech422> this is what finally worked for me:
[11:54] <mech422> # ceph-deploy new storx{2,11,12,21,22}.storage.dmn.com
[11:54] <mech422> # ceph-deploy install storx{2,11,12,21,22}.storage.dmn.com
[11:54] <mech422> # ceph-deploy mon create storx{2,11,12,21,22}.storage.dmn.com
[11:54] <mech422> # ceph-deploy gatherkeys storx{2,11,12,21,22}.storage.dmn.com
[11:54] <mech422> #
[11:54] <mech422> # OK - now you HAVE to login to EVERY osd node and MAKE FS's
[11:54] <mech422> # for some reason, ceph-deploy screws the pooch otherwise
[11:54] <mech422> #
[11:54] <mech422> # mkfs.xfs /dev/sda1 && mkfs.xfs /dev/sdb
[11:54] <mech422> #
[11:54] <mech422> # NOW you can create your OSD's one at a time
[11:54] <mech422> #
[11:54] <mech422> # ceph-deploy osd create storx22:/dev/sdb:/dev/sda1
[11:55] <mech422> didn't work if new, install and mon create didn't mention ALL the mons...but the OSDs I made one at a time
[11:55] <nerdtron> check...check...check
[11:55] <nerdtron> that exactly what my steps are
[11:55] <pieter_> I suspect it's to do with upstart nerdtron
[11:56] <pieter_> [ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts cephfs1:cephfs1
[11:56] <pieter_> [ceph_deploy.mds][DEBUG ] Distro Ubuntu codename precise, will use upstart
[11:56] <nerdtron> yeah..and i don't know how to configure it
[11:56] <mech422> oh - if you made the cluster more than once tonight - rm -rf /etc/ceph && rm -rf /var/lib/ceph before you do it again
[11:56] <pieter_> oh sorry that was for mds
[11:56] <nerdtron> mech422, or you can purge the installation and start again
[11:56] <mech422> yeah - I do that too :-P
[11:57] <mech422> --purge each package by name.... not that I'm superstitious
[11:57] <mech422> :-D
[11:57] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[11:58] <mech422> so what does your log say after reboot ?
[11:58] <odyssey4me> Is there a command-line way to remove hosts from your crush map, or do you have to dump/manipulate/upload it?
[11:58] <odyssey4me> (these are hosts for which all osd's have been removed)
[11:59] <mech422> I think its a dump/upload thing
[12:00] * penguinLord (~penguinLo@14.139.82.8) has joined #ceph
[12:01] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[12:03] * Bada (~Bada@195.65.225.142) Quit (Remote host closed the connection)
[12:06] * newbie|2 (~kvirc@111.172.32.75) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[12:08] <nerdtron> mech422, ceph-deploy purge (node) and not purge in apt-get
[12:09] <nerdtron> odyssey4me, w8..i tried it before
[12:09] <mech422> I just went right to the FS... I wasn't really trusting ceph-deploy atm
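For the record, the cleanup being described is roughly the following (hostname is an example; purgedata wipes /etc/ceph and /var/lib/ceph on the target):

    ceph-deploy purge storx2        # remove the ceph packages
    ceph-deploy purgedata storx2    # remove data and config directories
    ceph-deploy forgetkeys          # drop the locally gathered keyrings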
[12:10] <nerdtron> odyssey4me, you want to remove hosts with mon or host with osd?
[12:17] * Tamil (~tamil@38.122.20.226) Quit (Read error: Connection reset by peer)
[12:18] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Operation timed out)
[12:26] * jlogan1 (~Thunderbi@2600:c00:3010:1:1::40) Quit (Quit: jlogan1)
[12:27] <Clabbe> ceph-create-keys --cluster ceph --id 1
[12:27] <Clabbe> INFO:ceph-create-keys:Talking to monitor...
[12:27] <Clabbe> 2013-09-12 12:08:11.379119 7f5d8afd4700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[12:27] <Clabbe> 2013-09-12 12:08:11.379640 7f5d8afd4700 0 librados: mon. initialization error (2) No such file or directory
[12:27] <Clabbe> Error connecting to cluster: ObjectNotFound
[12:27] <Clabbe> INFO:ceph-create-keys:Cannot get or create admin key, permission denied
[12:28] <Clabbe> hmm
[12:29] <Clabbe> http://pastebin.com/Hw79kdWC
[12:29] <Clabbe> I have a problem trying to create the admin keyring
[12:30] <mech422> I wonder if thats the bootstrap_* stuff that ceph-deploy creates
[12:34] <Clabbe> http://pastebin.com/KCD84TAy this is what Im doing so far, what is the step to create the admin keyring ?
[12:35] <mech422> umm - doesn't ceph-deploy new do that first thing ?
[12:36] <Clabbe> then Im trying this: ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' > /etc/ceph/keyring
[12:36] * ShaunR (~ShaunR@staff.ndchost.com) Quit (Ping timeout: 480 seconds)
[12:36] * glzhao (~glzhao@117.79.232.248) Quit (Quit: leaving)
[12:36] <Clabbe> but then it just says Cannot get or create admin key, permission denied
[12:36] <mech422> client.admin
[12:36] <mech422> key: AQBghTFSSBBlHxAAseJU6Ror/bbcpbrBifkCvQ==
[12:36] <mech422> caps: [mds] allow
[12:36] <mech422> caps: [mon] allow *
[12:36] <mech422> caps: [osd] allow *
[12:37] <mech422> yeah - caps look the same
[12:37] <joelio> sure that's not your local unix perms - rather than Ceph auth?
[12:37] <nerdtron> Clabbe, the permissions on the keyring.. is it 644?
[12:39] <Clabbe> nerdtron: 777
[12:39] <nerdtron> how about the folder which contains the keys?
[12:40] <Clabbe> "/var/lib/ceph/mon/mon.1/"
[12:40] <Clabbe> in the ceph.conf -> [mon] keyring=/var/lib/ceph/mon/mon.$id/keyring
[12:43] * nerdtron (~kenneth@202.60.8.252) Quit (Remote host closed the connection)
[12:49] * sprachgenerator (~sprachgen@va-71-48-143-23.dhcp.embarqhsd.net) has joined #ceph
[12:52] * sprachgenerator (~sprachgen@va-71-48-143-23.dhcp.embarqhsd.net) Quit ()
[12:54] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) has joined #ceph
[13:02] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[13:02] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) has joined #ceph
[13:08] * mattt (~mattt@92.52.76.140) has joined #ceph
[13:09] * shang (~ShangWu@207.96.227.9) has joined #ceph
[13:09] <mattt> what is the object_prefix cap required for ?
[13:10] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[13:10] * ChanServ sets mode +v andreask
[13:11] * sprachgenerator (~sprachgen@va-71-48-143-23.dhcp.embarqhsd.net) has joined #ceph
[13:26] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[13:33] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[13:34] * ScOut3R_ (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[13:36] * vbellur (~vijay@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[13:40] * pieter__ (~pieter@105-236-155-27.access.mtnbusiness.co.za) has joined #ceph
[13:44] * pieter_ (~pieter@105-236-155-27.access.mtnbusiness.co.za) Quit (Ping timeout: 480 seconds)
[13:45] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[13:50] * vbellur (~vijay@nat-pool-sin2-t.redhat.com) has joined #ceph
[13:54] * mech422 (~steve@ip68-2-159-8.ph.ph.cox.net) has left #ceph
[13:54] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[13:55] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) has joined #ceph
[13:59] * shang (~ShangWu@207.96.227.9) Quit (Ping timeout: 480 seconds)
[14:00] <Clabbe> how is the client.admin keyring created?!
[14:01] * diegows (~diegows@190.190.11.42) has joined #ceph
[14:02] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) has joined #ceph
[14:03] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[14:06] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[14:08] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[14:14] <odyssey4me> nerdtron - sorry, was busy with something else... the hosts that used to have osd's
[14:15] <Clabbe> odyssey4me: what is creating the client.admin.keyring?
[14:15] * alfredo|afk (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:15] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:16] * alfredo|afk is now known as alfredodeza
[14:18] * mancdaz (~darren.bi@94.236.7.190) has joined #ceph
[14:19] * pieter_ (~pieter@105-236-155-27.access.mtnbusiness.co.za) has joined #ceph
[14:19] <mancdaz> does live migration with nova/cinder and rbd actually work? nova seems to want shared storage mounted over /var/lib/nova/instances
[14:20] <mattt> mancdaz: i saw same as you describe when i tested
[14:22] * pieter__ (~pieter@105-236-155-27.access.mtnbusiness.co.za) Quit (Ping timeout: 480 seconds)
[14:23] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[14:24] * S0d0 (~joku@a88-113-108-239.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[14:24] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:24] * sprachgenerator (~sprachgen@va-71-48-143-23.dhcp.embarqhsd.net) Quit (Quit: sprachgenerator)
[14:25] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[14:27] * pieter_ (~pieter@105-236-155-27.access.mtnbusiness.co.za) Quit (Read error: Operation timed out)
[14:27] * pieter_ (~pieter@105-236-155-27.access.mtnbusiness.co.za) has joined #ceph
[14:27] * claenjoy (~leggenda@37.157.33.36) has joined #ceph
[14:29] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[14:30] * shang (~ShangWu@64.34.151.178) has joined #ceph
[14:36] * andreask1 (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[14:36] * ChanServ sets mode +v andreask1
[14:36] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[14:36] * andreask1 is now known as andreask
[14:52] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[14:53] <odyssey4me> Clabbe - it depends on how you did your installation, but the keyrings are generated on the mons.
[14:53] <odyssey4me> Your client.admin keyring is created on installation - you can't issue a create for it as far as I know.
[14:53] * nhm (~nhm@184-97-187-196.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[14:53] <odyssey4me> So if you want to get the keyring you can fetch it.
[14:55] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) has joined #ceph
[14:56] <Clabbe> odyssey4me: just realized that I need to create a keyring with both mon and client.admin and mkfs my mon with that
[14:56] <Clabbe> can add it later :D
[14:57] <Clabbe> odyssey4me: Im doing a "bare installation", no ceph-deploy
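For a bare (non-ceph-deploy) bootstrap, that roughly means folding the admin key into the mon keyring before running ceph-mon --mkfs, along these lines (paths and the mon id '1' follow the conversation; caps as in the command Clabbe was trying):

    ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    ceph-authtool --create-keyring /etc/ceph/keyring --gen-key -n client.admin --cap mds 'allow' --cap osd 'allow *' --cap mon 'allow *'
    ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/keyring
    ceph-mon --mkfs -i 1 --keyring /tmp/ceph.mon.keyring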
[14:57] * grepory (~Adium@8.25.24.2) has joined #ceph
[14:57] <joelio> Clabbe: why? :)
[14:57] <Clabbe> joelio: Im creating my own puppet module for ceph
[14:58] <joelio> already is one right?
[14:58] <joelio> not best to extend that one rather than make a new one?
[14:58] <Clabbe> not working too well, I want mine to be able to deploy without having to run 3-4 puppet runs to get it working
[14:58] <Clabbe> also to have it support osd and mon on same host
[14:59] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Remote host closed the connection)
[14:59] <Clabbe> I didn't like the structure that much in the present ceph puppet module either
[14:59] <joelio> ok, well let me know how you get on!
[15:00] <Clabbe> joelio: will do
[15:00] <joelio> maybe leveraging some of the REST API into a custom type/provider might be a win
[15:01] * pieter__ (~pieter@105-236-213-164.access.mtnbusiness.co.za) has joined #ceph
[15:02] <foosinn> when upgrading 0.67.2 to 0.67.3 is it enough to simply do a "sudo service ceph-all restart" to restart all services?
[15:02] <foosinn> btw you may want to update the channel's topic
[15:02] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:03] * pieter_ (~pieter@105-236-155-27.access.mtnbusiness.co.za) Quit (Read error: Operation timed out)
[15:03] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[15:08] * sjm (~sjm@64.34.151.178) has joined #ceph
[15:09] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[15:12] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[15:13] * vbellur (~vijay@nat-pool-sin2-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:14] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[15:14] * shang_ (~ShangWu@64.34.151.178) has joined #ceph
[15:15] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[15:17] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit ()
[15:17] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Read error: Connection reset by peer)
[15:17] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[15:18] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[15:22] * thomnico (~thomnico@64.34.151.178) has joined #ceph
[15:23] * joao changes topic to 'Latest stable (v0.67.3 "Dumpling" or v0.61.8 "Cuttlefish") -- http://ceph.com/get || CDS Vids and IRC logs posted http://ceph.com/cds/'
[15:23] <joao> foosinn, yes, restarting should be all you need
[15:23] <foosinn> joao, thanks :)
[15:24] * aardvark (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) Quit (Read error: Connection reset by peer)
[15:24] * WarrenUsui (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) Quit (Read error: Connection reset by peer)
[15:24] * aardvark (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) has joined #ceph
[15:24] * wusui (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) has joined #ceph
[15:29] * gucki (~smuxi@84-73-190-65.dclient.hispeed.ch) has joined #ceph
[15:29] <gucki> hi guys
[15:30] <gucki> i just upgraded my monitors from cuttlefish to dumpling and so far everything seems to work ok. however the monitor logs grow quite fast with a lot of lines like "2013-09-12 13:29:17.256486 7fc2df765700 1 mon.c@2(peon).paxos(paxos active c 29587859..29588490) is_readable now=2013-09-12 13:29:17.256489 lease_expire=2013-09-12 13:29:22.244578 has v0 lc 29588490". is this normal? how do I reduce the log level? why is the default setting so verbose?
[15:31] * berant (~blemmenes@gw01.ussignalcom.com) has joined #ceph
[15:33] <joao> gucki, that is perfectly normal, aside from the fact that we should probably change that message's debug level to >1
[15:33] <gucki> joao: yeah that would be nice. can i supress those somehow without supressing other important logging?
[15:33] <joao> err, actually, the thing is that we adjusted the *default* mon debug level to 1 instead of the previous (default: 0)
[15:34] <joao> we should increase the debug level on that message to >1 nonetheless
[15:34] <joao> gucki, well, you can always set 'debug mon = 0'
[15:34] <joao> anything that is really critical will be outputted on level 0 anyway
[15:35] <gucki> ok. same for osds etc? so is it safe to set debug = 0 globally?
[15:35] <joao> we raised that level to make sure we captured other debug messages that we consider important, but won't be critically needed unless the monitors misbehave
[15:35] <joao> gucki, just set 'debug mon = 0' (even globally), and that will affect only the monitors
[15:36] <joao> unless you're noticing messages you don't really care about on the osd logs, you probably shouldn't be changing those levels
[15:36] <joao> or you can set all to 0 if you prefer; option would be 'debug osd = 0'
[15:36] <pieter__> I'm trying to create an osd using ceph deploy, but get: raise Error('Device is in use by a device-mapper mapping (dm-crypt?)' % dev, ','.join(holders))
[15:37] <pieter__> (though these drives are newly installed.)
[15:37] <joao> gucki, fyi, default osd debug level is already set at 0; so unless you changed it, you should not need to adjust it manually
[15:39] <gucki> joao: mh, I just added "[mon]\ndebug mon = 0" and restarted all monitors. however they are still logging?
[15:40] <gucki> joao: of course i copied the ceph.conf to all hosts before...
[15:40] <gucki> joao: so this is quite strange?
[15:41] <joao> they will still log everything with a debug level of 0 or lower
[15:42] <joao> or do you mean they're still logging level 1 messages?
[15:42] <gucki> yes, they still log those paxos messages..
[15:42] <gucki> level 1 (if the debug level is the number before the mon.c...")
[15:42] <joao> oh, doh
[15:42] <joao> silly me
[15:43] <joao> paxos has a debug level of its own :)
[15:43] <joao> 'debug paxos = 0'
[15:43] <gucki> :-)
[15:43] <gucki> i'll try that, thanks
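So the ceph.conf ends up with something like the following (as discussed; both settings can also live under [global]):

    [mon]
        debug mon = 0
        debug paxos = 0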
[15:44] * jmlowe (~Adium@2601:d:a800:511:514c:1d73:94ce:641f) has joined #ceph
[15:47] <gucki> ok, looks good now :)
[15:47] <gucki> now going over to restarting the osds ... :-)
[15:48] * shang_ (~ShangWu@64.34.151.178) Quit (Quit: Ex-Chat)
[15:51] * shang (~ShangWu@64.34.151.178) Quit (Remote host closed the connection)
[15:51] * shang (~ShangWu@64.34.151.178) has joined #ceph
[15:51] * carif (~mcarifio@pool-96-233-32-122.bstnma.fios.verizon.net) has joined #ceph
[15:54] * vbellur (~vijay@122.166.159.63) has joined #ceph
[15:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:01] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[16:01] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:02] * sjm (~sjm@64.34.151.178) has joined #ceph
[16:06] * KevinPerks (~Adium@64.34.151.178) has joined #ceph
[16:06] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Read error: Connection reset by peer)
[16:08] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:09] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[16:09] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[16:13] * benner (~benner@193.200.124.63) Quit (Ping timeout: 480 seconds)
[16:13] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[16:14] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[16:17] * benner (~benner@193.200.124.63) has joined #ceph
[16:22] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[16:22] * zack_ (~zack@formosa.juno.dreamhost.com) has joined #ceph
[16:22] * zack_ (~zack@formosa.juno.dreamhost.com) Quit ()
[16:23] * zackc (~zack@formosa.juno.dreamhost.com) has joined #ceph
[16:23] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[16:23] * zackc is now known as Guest6443
[16:23] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[16:24] * grepory (~Adium@8.25.24.2) has joined #ceph
[16:25] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) Quit (Remote host closed the connection)
[16:27] <gucki> ok, seemed to work :
[16:30] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[16:30] <alphe> hello all
[16:30] <alphe> I have weird issues with my ceph cluster ...
[16:33] <alphe> after rebooting them the ceph cluster is a total mess: the osds don't want to come up because the disks are not mounted automatically, so I have to go through ceph-deploy osd activate serv{01..10}:/dev/sda1 serv{01..10}:/dev/sdb1; and the third monitor runs amok: instead of being a cool and quiet peon it calls for elections every moment ...
[16:34] <alphe> to solve those issues I have to ceph-deploy mon destroy serv03, then ceph-deploy mon create serv03, and then everything is fine
[16:35] <alphe> it is like my conf was stored in a ramfs: each time I reboot the whole thing is gone. this is so strange ...
[16:37] * claenjoy (~leggenda@37.157.33.36) Quit (Remote host closed the connection)
[16:38] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:38] <joao> alphe, your monitor issue is likely a clock skew
[16:42] <alphe> hum ... it would say that on monitor3 the clock is skewed but still in sync
[16:43] * barryo (~borourke@cumberdale.ph.ed.ac.uk) has left #ceph
[16:44] <alphe> joao I thought of that too and checked, but the difference between my mons was only 0.00006639 sec
[16:45] <alphe> so I destroyed that monitor and recreated it and magically it was back in the quorum as a nice and quiet peon
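For reference, a quick way to check for the clock skew joao suspects (run on any node with admin access; ntpq assumes ntpd is in use on the monitor hosts):

    # ceph flags per-monitor clock skew in the detailed health output
    ceph health detail | grep -i skew
    # cross-check NTP synchronisation on each monitor host
    ntpq -p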
[16:45] * Vjarjadian (~IceChat77@05453253.skybroadband.com) has joined #ceph
[16:45] <Karcaw_> is there a way to set ceph to prioritize recovery of objects, so they recover faster?
[16:45] <alphe> karcaw_ good question ...
[16:46] <alphe> I doubt you can ... this is why it is called self healing ...
[16:46] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[16:47] <alphe> joao I had to shut down the third monitor, wait for the quorum to recover and stabilise, etc ...
[16:47] <alphe> then I could destroy / create that mon
[16:48] <alphe> karcaw_ the only way I know to speed up recovery in ceph is by specifying a private replication network
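There are in fact a few OSD options that influence recovery speed (they come up again later in this log); a hedged sketch of raising them at runtime, with illustrative values only, noting that higher settings trade client I/O latency for faster recovery and that individual OSDs can be targeted if wildcard tell is not supported on your release:

    # allow more concurrent backfill/recovery work per OSD
    ceph tell 'osd.*' injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'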
[16:51] * Guest6443 (~zack@formosa.juno.dreamhost.com) Quit (Quit: leaving)
[16:51] * zackc_ (~zack@formosa.juno.dreamhost.com) has joined #ceph
[16:51] * zackc_ is now known as zackc
[16:52] <sagelap1> zackc: happened again; teuthology-dumpling disappeared this time.
[16:52] <sagelap1> wtf!
[16:52] <Karcaw_> hmm.. it just seems to be working on it slowly, and i've seen it go faster in the past. it's averaging 2-4 objects recovered per second
[16:53] <zackc> sagelap1: what the crap
[16:54] * itamar_ (~itamar@82.166.185.149) Quit (Remote host closed the connection)
[16:54] <alphe> sagelap1 is there a way to monitor who removed it on git ?
[16:55] <sagelap1> might be a clue in /a/worker_logs... sort by time and grep for plana and find the first one that errored out
[16:55] <sagelap1> alphe: it's a checkout that's disappearing on the qa machine for some reason
[16:56] <sagelap1> fwiw i restarted the plana workers last night; the previous crop had all died around 5pm
[16:56] <zackc> i'm looking at them now
[16:56] <sagelap1> cool. gotta run!
[16:56] * markbby (~Adium@168.94.245.4) has joined #ceph
[16:58] * mancdaz_ (~darren.bi@94.236.7.190) has joined #ceph
[17:00] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[17:01] * mancdaz (~darren.bi@94.236.7.190) Quit (Ping timeout: 480 seconds)
[17:01] * mancdaz_ is now known as mancdaz
[17:02] * alram (~alram@ip-64-134-147-141.public.wayport.net) has joined #ceph
[17:04] * sagelap1 (~sage@2600:1010:b007:5bac:e5fc:33ad:d879:cc84) Quit (Ping timeout: 480 seconds)
[17:05] * gucki (~smuxi@84-73-190-65.dclient.hispeed.ch) Quit (Remote host closed the connection)
[17:08] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[17:09] * foosinn (~stefan@office.unitedcolo.de) Quit (Quit: Leaving)
[17:13] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[17:14] * thb (~me@port-17003.pppoe.wtnet.de) has joined #ceph
[17:16] * terje- (~root@135.109.220.9) Quit (Ping timeout: 480 seconds)
[17:19] * sjm (~sjm@64.34.151.178) has joined #ceph
[17:19] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[17:20] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[17:24] * grepory (~Adium@8.25.24.2) has joined #ceph
[17:24] * todin (tuxadero@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[17:27] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[17:33] * jlogan1 (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[17:38] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[17:43] * claenjoy (~leggenda@37.157.33.36) has joined #ceph
[17:47] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:49] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[17:50] * Vjarjadian (~IceChat77@05453253.skybroadband.com) Quit (Quit: Always try to be modest, and be proud about it!)
[17:51] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Remote host closed the connection)
[17:52] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[17:53] <JayBox> Hi.. I got a question..! I got a server with 4 Spinners of 2 Tb Each - Is it ok to use 1 Spinner for OS and Journal and other 3 as OSD's ?
[17:53] <JayBox> any better config... ?
[17:54] * yehuda_hm (~yehuda@99-48-177-65.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[17:55] * yehuda_hm (~yehuda@2602:306:330b:1410:8178:aace:6e68:e9f2) has joined #ceph
[17:56] <janos> i'd probably maintain isolation of concerns and go ahead and do up 3 spinners for that situation - each with an osd and its journal
[17:56] <janos> you're bottlenecking yourself on a single spinner (shared by OS) with the journal
[17:57] <janos> the only benefit you get from a separate journal disk is when the journal is something like an SSD and discernibly faster than the osd
[18:04] <janos> (and not oversubscribed)
[18:13] <pieter__> I mounted cephfs, and did an rsync to it...causing my system's load to be 100 :(
[18:13] <pieter__> is it still that unstable?
[18:14] <JayBox> So janos, you're saying it's better to install the journal on the same OSD spinner disks?
[18:15] * mancdaz (~darren.bi@94.236.7.190) Quit (Quit: mancdaz)
[18:15] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[18:17] <janos> in that scenario i would
[18:17] * mattt (~mattt@92.52.76.140) Quit (Read error: Connection reset by peer)
[18:17] <janos> JayBox ^
[18:18] <janos> let's say your spinners can write at 120MB/s
[18:18] <janos> you could have 3 doing that, each for its own OSD's journal. or one disk that's handling writes not only for the OS, but for 3 other OSD's journals
[18:18] <janos> the second option is not so good
[18:19] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[18:19] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[18:23] <JayBox> Otherwise we should have an SSD for the journals of the 3 OSD's, right?
[18:23] <JayBox> Also another question: how much replication is good enough for the data.. 2 or 3?
[18:23] <JayBox> we're not using RAID.
[18:23] <janos> 2 is pretty standard
[18:24] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[18:24] <janos> if you do decide to go with an SSD journal, keep in mind how many OSD's it's handling. like you wouldn't load up one SSD journal with the load of 12 OSD's, for example
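As an illustration of the two layouts janos compares, using the ceph-deploy HOST:DISK[:JOURNAL] syntax of this era (host and device names are placeholders):

    # journal colocated on the same spinner as the OSD (janos's suggestion here)
    ceph-deploy osd create node1:/dev/sdb
    # journal on a separate, faster device such as an SSD partition
    ceph-deploy osd create node1:/dev/sdb:/dev/sdf1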
[18:25] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:25] <JayBox> If it's 2 then great :)
[18:28] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[18:29] <JayBox> Another question...! if it's a Ceph storage cluster, there is no single point of failure, right? But how does getting an object work? We're planning to take some dedicated servers and use them to create a ceph cluster with 3 separate monitor servers. Each of the 5 servers we planned for OSD's has 100mbps unmetered outgoing.
[18:31] <JayBox> Does it utilize the bandwidth of each of the servers, or...?
[18:35] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:36] * dmick (~dmick@2607:f298:a:607:d8d5:ca8e:728f:e4c9) Quit (Ping timeout: 480 seconds)
[18:40] * mschiff (~mschiff@port-49377.pppoe.wtnet.de) Quit (Remote host closed the connection)
[18:40] * vbellur (~vijay@122.166.159.63) Quit (Ping timeout: 480 seconds)
[18:40] * kiorky (~kiorky@cryptelium.net) has joined #ceph
[18:41] <kiorky> hi, we are trying openstack, and as part of a development cluster, is it possible to use a loopback filesystem with a ceph osd ?
[18:42] <claenjoy> kiorky maybe it is better to divide the disk into 2 partitions, one for the OS and one for the DATA
[18:43] <claenjoy> let's wait for the experts
[18:43] <kiorky> claenjoy: well, the "data" partition is already used, so my idea was to use a huge file in that partition to contain the ceph partition
[18:44] <claenjoy> I'm not sure, I just split my partition in 2 with the LVM tools
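For a throwaway development setup, a file-backed loop device can stand in for a dedicated disk; a rough sketch under the assumption that this is strictly for testing (sizes and paths are placeholders):

    # create a sparse 10 GB file and expose it as a block device
    truncate -s 10G /srv/ceph-osd0.img
    losetup /dev/loop0 /srv/ceph-osd0.img
    mkfs.xfs /dev/loop0
    # mount it where the OSD data directory is expected, then prepare the OSD on it
    mkdir -p /var/lib/ceph/osd/ceph-0
    mount /dev/loop0 /var/lib/ceph/osd/ceph-0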
[18:44] * dmick (~dmick@2607:f298:a:607:c42d:b36e:f045:4105) has joined #ceph
[18:44] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[18:46] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) has joined #ceph
[18:46] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) Quit ()
[18:47] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[18:51] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[18:52] * vbellur (~vijay@122.166.165.112) has joined #ceph
[18:53] * pieter__ (~pieter@105-236-213-164.access.mtnbusiness.co.za) Quit (Ping timeout: 480 seconds)
[18:54] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[18:55] * grepory (~Adium@8.25.24.2) has joined #ceph
[18:57] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) has joined #ceph
[18:57] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) Quit (Remote host closed the connection)
[18:57] * Tamil (~tamil@38.122.20.226) has joined #ceph
[18:57] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[18:57] * WarrenUsui (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) has joined #ceph
[19:06] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) Quit (Quit: Leaving.)
[19:09] * thb (~me@port-17003.pppoe.wtnet.de) Quit (Quit: Leaving.)
[19:11] <paravoid> yehuda_hm: what's the rgw cache bug that you mentioned?
[19:11] <yehuda_hm> paravoid: there's some O(n) call there that shouldn't be
[19:12] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[19:12] <yehuda_hm> paravoid: you can try disabling the cache, see how it affects your performance
[19:13] * Clabbe (~oftc-webi@alv-global.tietoenator.com) Quit (Remote host closed the connection)
[19:13] <yehuda_hm> paravoid: I'm not too sure how many buckets + users you use, if it's a large number then there's a possibility that this may help
[19:13] <paravoid> 37k buckets, 1 account
[19:14] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[19:14] <yehuda_hm> yeah, so you should definitely try it
[19:15] <yehuda_hm> paravoid: what version are you currently running?
[19:15] * thomnico (~thomnico@64.34.151.178) Quit (Read error: No route to host)
[19:15] <paravoid> 0.67.2
[19:16] <paravoid> I haven't done anything since we last talked
[19:16] <yehuda_hm> I can push the fix to a branch on top of recent dumpling for you to test
[19:17] <paravoid> I don't think I can provide meaningful feedback atm
[19:17] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[19:17] <paravoid> but thanks for the offer :)
[19:17] <ircolle> paravoid - large performance gain in 67.3
[19:18] * nwat (~nwat@eduroam-237-79.ucsc.edu) has joined #ceph
[19:18] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[19:19] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) has joined #ceph
[19:19] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) Quit ()
[19:20] <yehuda_hm> paravoid: I pushed the fix to wip-6286-dumpling in case you'd want to do anything with it
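The cache toggle yehuda_hm suggests is a radosgw config option; a sketch of disabling it for a test, assuming a conventional [client.radosgw.gateway] section name:

    # append to ceph.conf on the radosgw host, then restart radosgw
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client.radosgw.gateway]
        rgw cache enabled = false
    EOF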
[19:21] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[19:21] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[19:24] * mschiff (~mschiff@port-49377.pppoe.wtnet.de) has joined #ceph
[19:24] * grepory (~Adium@8.25.24.2) has joined #ceph
[19:32] * angdraug (~angdraug@c-98-248-39-148.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[19:32] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:32] * carif (~mcarifio@pool-96-233-32-122.bstnma.fios.verizon.net) Quit (Quit: Ex-Chat)
[19:33] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[19:34] * dmick (~dmick@2607:f298:a:607:c42d:b36e:f045:4105) Quit (Ping timeout: 480 seconds)
[19:37] * thomnico (~thomnico@64.34.151.178) has joined #ceph
[19:43] * dmick (~dmick@2607:f298:a:607:c42d:b36e:f045:4105) has joined #ceph
[19:44] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[19:47] * ntranger (~ntranger@proxy2.wolfram.com) Quit (Ping timeout: 480 seconds)
[19:49] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[19:53] * alram (~alram@ip-64-134-147-141.public.wayport.net) Quit (Ping timeout: 480 seconds)
[19:54] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[19:54] * vbellur (~vijay@122.166.165.112) Quit (Read error: Operation timed out)
[19:56] * angdraug (~angdraug@204.11.231.50.static.etheric.net) has joined #ceph
[19:57] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:58] * grepory (~Adium@8.25.24.2) has joined #ceph
[20:06] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) has joined #ceph
[20:07] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) Quit ()
[20:08] * vbellur (~vijay@122.172.196.110) has joined #ceph
[20:10] * mschiff (~mschiff@port-49377.pppoe.wtnet.de) Quit (Remote host closed the connection)
[20:15] * alram (~alram@ip-64-134-147-141.public.wayport.net) has joined #ceph
[20:16] * mschiff (~mschiff@port-49377.pppoe.wtnet.de) has joined #ceph
[20:18] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[20:22] * mancdaz (~darren.bi@94-195-16-87.zone9.bethere.co.uk) has joined #ceph
[20:27] * alram (~alram@ip-64-134-147-141.public.wayport.net) Quit (Ping timeout: 480 seconds)
[20:27] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[20:32] * clayb (~kvirc@proxy-ny2.bloomberg.com) has joined #ceph
[20:33] * Snow- (~snow@sputnik.teardrop.org) has joined #ceph
[20:33] * alram (~alram@ip-64-134-147-141.public.wayport.net) has joined #ceph
[20:34] * mancdaz (~darren.bi@94-195-16-87.zone9.bethere.co.uk) Quit (Quit: mancdaz)
[20:36] * markbby (~Adium@168.94.245.1) has joined #ceph
[20:38] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) has joined #ceph
[20:39] * sagelap (~sage@2600:1010:b021:92ae:3424:4060:73c3:2dac) has joined #ceph
[20:44] <mikedawson> davidzlap: Is there a gitbuilder wip with the fix for 6291 on top of dumpling that I could test?
[20:45] <davidzlap> mikedawson: no, not yet.
[20:46] <mikedawson> davidzlap: ok. if one emerges, could you let me know?
[20:48] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) has joined #ceph
[20:50] <davidzlap> mikedawson: sure
[20:52] <sjustlaptop> nwat: let me know if you've got more questions
[20:52] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) Quit (Quit: Leaving.)
[20:52] <nwat> sjustlaptop: sure thing. i'm gonna go think about it some more and come back with some specific questions :) thanks!
[20:52] <sjustlaptop> k
[20:54] * sagelap (~sage@2600:1010:b021:92ae:3424:4060:73c3:2dac) Quit (Read error: Connection reset by peer)
[20:56] <mikedawson> davidzlap: looks like the quick work-around is to make osd_recovery_max_single_start be as low as osd_recovery_max_active, right?
[20:56] * terje- (~root@135.109.220.9) has joined #ceph
[20:57] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[20:59] <mikedawson> davidzlap: Is the ratio of osd_recovery_op_priority vs. osd_client_op_priority expected to prevent client i/o issues? Seems like a more elegant solution to this problem than forcing thread count, but it hasn't worked for me thus far.
[20:59] * thomnico (~thomnico@64.34.151.178) Quit (Ping timeout: 480 seconds)
[21:01] <davidzlap> mikedawson: Setting osd_recovery_max_single_start to 0 would prevent starting more than osd_recovery_max_active ops at any given time.
[21:01] <kiorky> :b '"
[21:01] <davidzlap> Don't forget to set it back once you have installed the fix.
[21:04] <mikedawson> davidzlap: yes, thx. Any idea if the op_priority ratios are fully implemented / expected to do anything useful (like make my rbd guests keep working during recovery)?
[21:05] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[21:06] <Kioob> Is there a way to have usage stats (reads per sec & writes per sec), per RBD device ?
[21:07] <Kioob> Or should I use dedicated pools per client ?
[21:07] <mikedawson> Kioob: yes, enable rbd admin sockets for each rbd client, then do a perf dump
[21:07] <davidzlap> mikedawson: Don't know. FYI, the default value of osd_client_op_priority is 63 and osd_recovery_op_priority is 10. According to code comments they should be between 1 and 63. So at best you could lower osd_recovery_op_priority a little, but not sure if that is a good idea.
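A sketch of applying the settings discussed here at runtime; the values are illustrative only (see the bug discussion above before copying them), and the same options can be persisted under [osd] in ceph.conf instead:

    # lower recovery priority relative to client ops and apply the
    # max-single-start workaround mentioned above
    ceph tell 'osd.*' injectargs '--osd-recovery-op-priority 1 --osd-recovery-max-single-start 0'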
[21:07] <Kioob> thanks mikedawson, I will look at that
[21:09] <Kioob> ok, it's available with qemu client
[21:09] * thomnico (~thomnico@64.34.151.178) has joined #ceph
[21:09] <Kioob> and with kernel client I need to use the standard kernel interface
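A sketch of mikedawson's suggestion for librbd/qemu clients: enable a per-client admin socket, then dump the perf counters. The section name and socket path are assumptions, and the actual .asok filename below is a placeholder that has to be read from /var/run/ceph on the hypervisor:

    # ceph.conf excerpt on the hypervisor:
    #   [client]
    #   admin socket = /var/run/ceph/$name.$pid.asok
    # then, against a running client's socket:
    ceph --admin-daemon /var/run/ceph/client.volumes.12345.asok perf dump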
[21:09] * kislotniq (~kislotniq@193.93.77.54) Quit (Read error: Operation timed out)
[21:09] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[21:11] <mikedawson> davidzlap: I've tried osd_recovery_op_priority at 1 and osd_client_op_priority defaulting to 63, but I couldn't keep some guests running during recovery. Reads get backed up when recovery creates spindle contention. Writes seem better, perhaps helped by rbd writeback caching.
[21:14] * kislotniq (~kislotniq@193.93.77.54) has joined #ceph
[21:15] <davidzlap> mikedawson: I wonder if lowering the osd_recovery_op_priority created a situation in which a client read ran into a missing object and had to wait for an even lower prio recovery op to get that object from another osd.
[21:15] <mikedawson> davidzlap: aren't those reads remapped at that point?
[21:19] * sjm_ (~sjm@64.34.151.178) has joined #ceph
[21:20] * aliguori (~anthony@72.183.121.38) Quit (Remote host closed the connection)
[21:21] * erice_ (~erice@50.240.86.181) has joined #ceph
[21:21] * erice (~erice@50.240.86.181) Quit (Read error: Connection reset by peer)
[21:24] * sjm_ (~sjm@64.34.151.178) has left #ceph
[21:30] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) has joined #ceph
[21:32] * grepory (~Adium@8.25.24.2) has joined #ceph
[21:39] <sjustlaptop> davidzlap: it uses client priority when doing a push/pull required to fulfill a client op
[21:40] <davidzlap> mikedawson, sjustlaptop: cool
[21:40] <sjustlaptop> mikedawson: do you use snapshots?
[21:41] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[21:41] * penguinLord (~penguinLo@14.139.82.8) Quit (Remote host closed the connection)
[21:41] <mikedawson> sjustlaptop: not intentionally, but we do use openstack/cinder/rbd copy on write - does that make use of snapshots?
[21:42] <sjustlaptop> mm, it might
[21:42] <sjustlaptop> what version are you running?
[21:42] <sjustlaptop> 67.3?
[21:42] <dmsimard> Can someone clarify my understanding of using ceph with Openstack for Object storage ? From what I understand, you use swift but use a ceph cluster for your storage backend as opposed to a swift object store backend ?
[21:42] <mikedawson> yes
[21:43] <sjustlaptop> mikedawson: ok, it's not the thing I was thinking of then
[21:43] <mikedawson> sjustlaptop: haven't tried recovery or scrubbing with 67.3, but we've had these issues from pre-cuttlefish through 67.2
[21:43] <sjustlaptop> yeah
[21:43] * ntranger (~ntranger@proxy2.wolfram.com) has joined #ceph
[21:43] <sjustlaptop> dumpling with the fix david mentioned may help
[21:44] <sjustlaptop> it should land in dumpling in a few days
[21:45] <mikedawson> sjustlaptop: yeah, that bug caused us to always have 5 recovery threads, despite trying to turn them down. I'll test the workaround (osd_recovery_max_single_start to 0) sometime soon.
[21:46] <sjustlaptop> oh, that workaround will recover cuttlefish behavior
[21:46] <sjustlaptop> the bug was in a mechanism which itself should improve recovery
[21:46] <sjustlaptop> osd_recovery_max_single_start that is
[21:46] <mikedawson> sjustlaptop: ahh, I know cuttlefish behavior wasn't working
[21:47] <sjustlaptop> so once the actual fix lands, osd_recovery_max_single_start should actually help
[21:47] * erice_ (~erice@50.240.86.181) Quit (Ping timeout: 480 seconds)
[21:49] * erice (~erice@50.240.86.181) has joined #ceph
[21:50] <ntranger> i'm trying to do a cephfs mount on my linux box to test the ceph cluster that I got set up last night, and I'm following the instructions on the ceph site, and when I try to run ceph-fuse on my machine, it's telling me the command isn't found. I think the documentation might be missing something.
[21:50] <ntranger> http://ceph.com/docs/master/cephfs/fuse/
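"command not found" usually just means the ceph-fuse package is not installed on the client; it ships separately from the base ceph package on most distros. A sketch, with the monitor address and mount point as placeholders:

    # Debian/Ubuntu shown; RHEL/CentOS would use: yum install ceph-fuse
    apt-get install ceph-fuse
    mkdir -p /mnt/cephfs
    ceph-fuse -m mon-host:6789 /mnt/cephfs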
[21:50] <mikedawson> sjustlaptop: I can tell for sure that when spindle contention becomes an issue, my VMs suffer (as expected). I just want to find the magic config/patch where Ceph doesn't create the problematic spindle contention. Under normal load, my spinners are between 10% and 20%.
[21:54] <dmsimard> xarses: ping
[21:56] <mikedawson> sjustlaptop: I run with noscrub and nodeep-scrub. If I re-enable either, spindle contention shoots from under 20% to 100% and client i/o gets hosed on some of my guests. If I stop the scrubs/deep-scrubs, everyone is happy immediately.
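For reference, the flags mikedawson runs with are cluster-wide OSD flags:

    # disable scrubbing while investigating spindle contention
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # re-enable it later
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub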
[22:00] * mancdaz (~darren.bi@94-195-16-87.zone9.bethere.co.uk) has joined #ceph
[22:02] * mancdaz (~darren.bi@94-195-16-87.zone9.bethere.co.uk) Quit ()
[22:03] <xarses> dmsimard: pong
[22:04] <dmsimard> xarses: Looking into ceph for object storage in Openstack .. Don't have my hands in that just yet but I can't seem to find my answer in the docs..
[22:05] <dmsimard> If you use ceph for object storage in Openstack, are you really using Swift and somewhat pointing it to a ceph cluster instead of a swift object store ?
[22:05] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[22:05] <xarses> dmsimard, I assume you are referring to using it as a s3/swift api and not glance/cinder?
[22:06] <dmsimard> Well, ultimately I might use ceph as well for glance and cinder - but I was interested in using ceph for the object storage too. Your name came to mind while I was reading http://www.mirantis.com/blog/object-storage-openstack-cloud-swift-ceph/
[22:07] <dmsimard> But see, for billing/metering with ceilometer for example. Is ceph an abstraction ? Would ceilometer talk to "swift" or essentially radosgw ?
[22:08] <dmsimard> Hope what I'm asking makes sense, hard to put succinctly into words
[22:10] <xarses> well, ya it will work for the object storage interfaces
[22:10] <xarses> we would register it in keystone as a 'swift' provider
[22:10] * tobru_ (~quassel@217-162-50-53.dynamic.hispeed.ch) has joined #ceph
[22:11] <ntranger> hey xarses, you have a second to help me out with ceph-fuse? I'm sure I've completely missed something, but not 100% sure.
[22:11] <dmsimard> Okay, so it doesn't ultimately matter if it's really swift or ceph - especially now that ceph has keystone integration
[22:12] <xarses> we just tell keystone that its a swift provider
[22:12] <xarses> so anywhere that swift worked, ceph can stand in
[22:12] <dmsimard> Ok, makes sense, I will do some experimentation in that direction. Thanks
[22:13] <xarses> dmsimard, there is some code in https://github.com/xarses/fuel/tree/ceph-fuel-1/deployment/puppet/ceph/manifests (keystone.pp/radosgw.pp) that someone had used previously in our organization to set up radosgw and add it into keystone
[22:14] <xarses> i haven't refactored it to work in the current iteration of the puppet ceph module yet
[22:14] <xarses> but if you can read puppet it should give you some context about what was done
[22:15] * xmltok (~xmltok@pool101.bizrate.com) Quit (Quit: Bye!)
[22:17] <xarses> it mostly looks like, after you have radosgw set up,
[22:17] <xarses> you just define the keystone service "swift"
[22:17] <xarses> and then the keystone endpoint "RegionOne/swift" pointing to the radosgw
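A sketch of what xarses describes, using the keystone CLI of this era; the service description, region, and radosgw URL are placeholders:

    # register radosgw as the object-store ("swift") provider in keystone
    keystone service-create --name=swift --type=object-store --description="Ceph RADOS Gateway"
    keystone endpoint-create --region RegionOne --service-id=<id from service-create> \
        --publicurl=http://radosgw.example.com/swift/v1 \
        --internalurl=http://radosgw.example.com/swift/v1 \
        --adminurl=http://radosgw.example.com/swift/v1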
[22:17] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[22:18] <xarses> ntranger: i can try, but i haven't used any of the mds parts of ceph
[22:19] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[22:19] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) Quit (Ping timeout: 480 seconds)
[22:19] <dmsimard> Doesn't sound too complicated, i'll keep that in mind.
[22:22] <xarses> let me know, I should be working on that code by early next week so I'll have better answers soon
[22:22] * Vjarjadian (~IceChat77@05453253.skybroadband.com) has joined #ceph
[22:25] * yasu` (~yasu`@99.23.160.231) has joined #ceph
[22:29] * sglwlb (~sglwlb@221.12.27.202) Quit (Read error: Connection reset by peer)
[22:29] * madkiss (~madkiss@2001:6f8:12c3:f00f:c153:d4ce:d913:33cd) has joined #ceph
[22:30] * sglwlb (~sglwlb@221.12.27.202) has joined #ceph
[22:31] * claenjoy (~leggenda@37.157.33.36) Quit (Quit: Leaving.)
[22:32] <mikedawson> is loadavg from an osd perf dump equal to linux load * 100?
[22:34] * berant (~blemmenes@gw01.ussignalcom.com) Quit (Quit: berant)
[22:34] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:bc93:8b74:49cd:c096) Quit (Ping timeout: 480 seconds)
[22:37] <mikedawson> Also, is osd_scrub_load_threshold (which defaults to 0.5) used in relation to loadavg? I get loadavg of ~150 on a 16 core machine (Linux shows a load of ~1.5). Does that mean loadavg would need to dip to under 50 (0.5 *100) to trigger a scrub?
[22:38] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[22:38] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[22:40] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[22:41] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[22:41] * ChanServ sets mode +v andreask
[22:44] * darkfader (~floh@88.79.251.60) Quit (Quit: tadaaa)
[22:44] * darkfader (~floh@88.79.251.60) has joined #ceph
[22:44] * darkfader (~floh@88.79.251.60) Quit ()
[22:46] <davidzlap> mikedawson: Ceph uses the library function getloadavg() to get the load average during the last 1 minute. It gives the same as the uptime command's first load average number.
[22:48] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Remote host closed the connection)
[22:49] <mikedawson> davidzlap: so it should be equivalent to the output of cat /proc/loadavg | awk '{ print $1 }'?
[22:50] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[22:52] <davidzlap> mikedawson: looks like it
[22:52] * darkfader (~floh@88.79.251.60) has joined #ceph
[22:52] <mikedawson> davidzlap: but my perf dumps report loadavg at 100x that value. I'm wondering if my PGs won't get scrubbed because they are always suppressed by a load average of 1.7, which is greater than osd_scrub_load_threshold's default of 0.5.
[22:53] <mikedawson> davidzlap: so the "normal" scrub distribution is suppressed by load, then they are forced to scrub when osd_scrub_max_interval gets hit
[22:54] * mancdaz (~darren.bi@94-195-16-87.zone9.bethere.co.uk) has joined #ceph
[22:54] <mikedawson> davidzlap: it's like osd_scrub_load_threshold should be 0.5 * the number of cores in the system instead of 0.5, perhaps
[22:55] <mikedawson> davidzlap: so it would scrub at any load < 8 on a 16 core system for instance
[22:55] <davidzlap> mikedawson: yes, but by default the "normal" scrub will occur daily, unless loads don't allow it. After a week a deep-scrub will occur because osd_scrub_max_interval == osd_deep_scrub_interval.
[22:56] * diegows (~diegows@200.68.116.185) has joined #ceph
[22:58] * mancdaz (~darren.bi@94-195-16-87.zone9.bethere.co.uk) Quit ()
[22:58] * nhm (~nhm@63.110.51.11) has joined #ceph
[22:59] * darkfaded (~floh@88.79.251.60) has joined #ceph
[22:59] * darkfader (~floh@88.79.251.60) Quit (Read error: Connection reset by peer)
[23:00] * dmsimard (~Adium@MTRLPQ02-1177996309.sdsl.bell.ca) Quit (Quit: Leaving.)
[23:01] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:01] <davidzlap> mikedawson: I think you have a good point about osd_scrub_load_threshold * # of cores. Maybe you should file a bug report. In the meantime, if you have uniform 16-core hardware, set the threshold to 8.0.
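A sketch of davidzlap's suggestion; the 8.0 value assumes uniform 16-core OSD hosts as discussed above:

    # raise the scrub load threshold so a host load of ~1.5 no longer blocks scrubs
    ceph tell 'osd.*' injectargs '--osd-scrub-load-threshold 8.0'
    # or persist it in ceph.conf under [osd]:
    #   osd scrub load threshold = 8.0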
[23:02] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[23:03] * darkfaded (~floh@88.79.251.60) Quit ()
[23:04] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[23:04] <mikedawson> davidzlap: will do. thanks for talking it through with me
[23:08] * tobru_ (~quassel@217-162-50-53.dynamic.hispeed.ch) Quit (Remote host closed the connection)
[23:11] * darkfader (~floh@88.79.251.60) has joined #ceph
[23:15] * thomnico (~thomnico@64.34.151.178) Quit (Ping timeout: 480 seconds)
[23:15] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[23:19] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:22] * nwat (~nwat@eduroam-237-79.ucsc.edu) Quit (Read error: Operation timed out)
[23:24] <mikedawson> davidzlap: http://tracker.ceph.com/issues/6296
[23:29] * miniyo (~miniyo@0001b53b.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:29] * jeff-YF (~jeffyf@67.23.117.122) Quit (Ping timeout: 480 seconds)
[23:31] * jmlowe1 (~Adium@c-50-172-105-141.hsd1.in.comcast.net) has joined #ceph
[23:31] <wrencsok> since updating to dumpling (currently 67.3) i have a /var/log issue. every node of my cluster creates a log file for every daemon in the cluster. each node only populates the log files for the daemons running on it, but empty logs are created for all the other osd/mon daemons that are not local to that node. is that a known issue?
[23:35] * shang (~ShangWu@64.34.151.178) Quit (Read error: Operation timed out)
[23:35] * ntranger_ (~ntranger@proxy2.wolfram.com) has joined #ceph
[23:36] * jmlowe (~Adium@2601:d:a800:511:514c:1d73:94ce:641f) Quit (Ping timeout: 480 seconds)
[23:39] * KevinPerks (~Adium@64.34.151.178) Quit (Quit: Leaving.)
[23:41] * ntranger (~ntranger@proxy2.wolfram.com) Quit (Ping timeout: 480 seconds)
[23:42] <mikedawson> wrencsok: did you originally deploy with mkcephfs? My guess is you have all OSDs specified in your ceph.conf on each node. I have the same issue, not sure how to transition from the mkcephfs world to the ceph-deploy world
[23:44] <wrencsok> yeah, we do it the old way. it's too big of a beast and our config paths are not defaults, so i either need to mod ceph-deploy or redo a lot of paths across 50+ nodes.
[23:47] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[23:48] <wrencsok> but it's becoming quite the cthulhu, and in a few weeks, after some re-design of the hardware and software, i should be an order of magnitude or two faster than amazon, rackspace, google and others. I am already about 5 times faster on most of their workflows. still have tuning to do for the db-centric ones. then we'll bring in some outside help for finer tuning once i finish fixing the kludges of previous inexperience.
[23:49] * thomnico (~thomnico@207.96.227.9) has joined #ceph
[23:56] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[23:56] <wrencsok> those extra log files are annoying my tidy-centric brain.
[23:57] <wrencsok> i plan on integrating with logstash and that will be an issue. i'd rather not have to fix it with an extra script.
[23:57] * alram (~alram@ip-64-134-147-141.public.wayport.net) Quit (Quit: leaving)
[23:57] * miniyo (~miniyo@0001b53b.user.oftc.net) has joined #ceph
[23:57] <dmick> wrencsok: that doesn't make any sense to me. the log files are created by the daemons themselves.
[23:58] <wrencsok> that's what i thought. somehow every single node has a log for each daemon.
[23:58] <wrencsok> it's odd.
[23:58] <wrencsok> if the daemon isn't local it's empty... but it's still there.
[23:58] <dmick> maybe you could use inotify to figure out who's creating them
[23:59] <wrencsok> i'll look into that.
[23:59] <dmick> inotify{wait,watch} are cmdline tools
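A sketch of dmick's suggestion, assuming the inotify-tools package and the default log directory; it reports creation events (not the creating PID), so pairing the timestamps with daemon restarts narrows down the culprit:

    # watch /var/log/ceph and print every file-creation event as it happens
    inotifywait -m -e create /var/log/ceph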
[23:59] <mikedawson> dmick: I've seen the same thing since 0.67
[23:59] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) Quit (Quit: Leaving)
[23:59] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[23:59] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[23:59] <dmick> that's really weird.

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.