#ceph IRC Log

IRC Log for 2013-07-01

Timestamps are in GMT/BST.

[0:05] * andrei (~andrei@host86-155-31-94.range86-155.btcentralplus.com) has joined #ceph
[0:08] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:15] * haomaiwang (~haomaiwan@li565-182.members.linode.com) has joined #ceph
[0:23] * haomaiwang (~haomaiwan@li565-182.members.linode.com) Quit (Ping timeout: 480 seconds)
[0:25] * joao (~JL@89.181.151.112) Quit (Ping timeout: 480 seconds)
[0:26] * joao (~JL@89-181-151-112.net.novis.pt) has joined #ceph
[0:26] * ChanServ sets mode +o joao
[0:35] * joshd1 (~jdurgin@2602:306:c5db:310:c813:2dd2:ee73:9c35) Quit (Quit: Leaving.)
[0:43] * LeaChim (~LeaChim@90.221.247.164) Quit (Ping timeout: 480 seconds)
[0:48] * sebastiandeutsch (~sebastian@p5DE839AD.dip0.t-ipconnect.de) has joined #ceph
[1:02] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:16] * haomaiwang (~haomaiwan@117.79.232.249) has joined #ceph
[1:17] * tnt (~tnt@109.130.72.62) Quit (Ping timeout: 480 seconds)
[1:24] * haomaiwang (~haomaiwan@117.79.232.249) Quit (Ping timeout: 480 seconds)
[1:24] * missing (~lotreck@c-71-62-97-150.hsd1.va.comcast.net) Quit (Quit: missing)
[1:58] * jebba (~aleph@72.19.178.3) Quit (Quit: Leaving.)
[2:14] * mikedawson (~chatzilla@c-68-58-243-29.hsd1.sc.comcast.net) Quit (Quit: ChatZilla 0.9.90 [Firefox 22.0/20130618035212])
[2:17] * haomaiwang (~haomaiwan@211.155.113.201) has joined #ceph
[2:25] * haomaiwang (~haomaiwan@211.155.113.201) Quit (Ping timeout: 480 seconds)
[2:32] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[2:43] * DarkAce-Z (~BillyMays@50.107.52.142) has joined #ceph
[2:46] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Remote host closed the connection)
[2:48] * DarkAceZ (~BillyMays@50.107.52.142) Quit (Ping timeout: 480 seconds)
[2:59] * DarkAce-Z is now known as DarkAceZ
[3:07] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[3:17] * haomaiwang (~haomaiwan@117.79.232.249) has joined #ceph
[3:18] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Read error: Connection reset by peer)
[3:25] * haomaiwang (~haomaiwan@117.79.232.249) Quit (Ping timeout: 480 seconds)
[3:26] * jebba (~aleph@2601:1:a300:8f:f2de:f1ff:fe69:6672) has joined #ceph
[3:27] * haomaiwang (~haomaiwan@117.79.232.249) has joined #ceph
[3:38] * andrei (~andrei@host86-155-31-94.range86-155.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:39] * sebastiandeutsch (~sebastian@p5DE839AD.dip0.t-ipconnect.de) Quit (Quit: sebastiandeutsch)
[3:48] * sebastiandeutsch (~sebastian@p5DE80AEB.dip0.t-ipconnect.de) has joined #ceph
[3:53] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit ()
[3:53] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[4:10] * portante|afk is now known as portante
[4:17] * miniyo (~miniyo@0001b53b.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:25] <Psi-jack> Hmmm.
[4:27] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[4:29] <Psi-jack> Well, anyway.. If anyone can comment on http://ceph.com/docs/master/install/os-recommendations/ -- Regarding CentOS 6.3, why Note 3 isn't flagged on CentOS/RHEL 6.3 due to both kernel 2.6.32 (<2.6.39) and glibc 2.12 (<2.14), I'd be appreciative.
[4:55] * Yen (~Yen@2a00:f10:103:201:ba27:ebff:fefb:350a) Quit (Quit: Exit.)
[5:01] * fireD1 (~fireD@93-142-205-151.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD (~fireD@93-139-183-222.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:12] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[5:13] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[5:13] * ChanServ sets mode +o elder
[5:14] * zhangbo (~zhangbo@221.226.39.82) has joined #ceph
[5:30] * zhangjf_zz2 (~zjfhappy@222.128.1.105) has joined #ceph
[5:58] * sebastiandeutsch_ (~sebastian@p57A07A27.dip0.t-ipconnect.de) has joined #ceph
[6:01] * sebastiandeutsch (~sebastian@p5DE80AEB.dip0.t-ipconnect.de) Quit (Read error: Operation timed out)
[6:01] * sebastiandeutsch_ is now known as sebastiandeutsch
[6:01] * zhangjf_zz2 (~zjfhappy@222.128.1.105) Quit (Read error: Connection reset by peer)
[6:02] * zhangjf_zz2 (~zjfhappy@222.128.1.105) has joined #ceph
[6:02] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[6:02] * ChanServ sets mode +v andreask
[6:03] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has left #ceph
[6:15] * sebastiandeutsch (~sebastian@p57A07A27.dip0.t-ipconnect.de) Quit (Quit: sebastiandeutsch)
[6:40] * topro_ (~tobi@ip-109-43-141-0.web.vodafone.de) Quit (Quit: Konversation terminated!)
[6:47] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[6:49] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[6:55] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[7:14] <topro> again, can't get my MDS to serve requests even though "ceph -s" reports "health ok". seems like MDS hangs in some cache trimming endless loop on startup, any ideas, please?
[7:17] <topro> http://pastebin.com/nrCGVrHQ thats what it keeps repeating in its log (about once every 2 seconds) with log level 9.
[7:19] * BillK (~BillK@220-253-132-55.dyn.iinet.net.au) Quit (Quit: ZNC - http://znc.in)
[7:21] * BillK (~BillK@220-253-132-55.dyn.iinet.net.au) has joined #ceph
[7:21] * BillK (~BillK@220-253-132-55.dyn.iinet.net.au) Quit ()
[7:21] * BillK (~BillK@220-253-132-55.dyn.iinet.net.au) has joined #ceph
[7:37] * BillK (~BillK@220-253-132-55.dyn.iinet.net.au) Quit (Quit: ZNC - http://znc.in)
[7:46] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[7:48] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[7:49] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[8:06] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[8:11] * tnt (~tnt@109.130.72.62) has joined #ceph
[8:12] * BillK (~BillK@124-169-221-120.dyn.iinet.net.au) has joined #ceph
[8:12] * fridudad (~oftc-webi@fw-office.allied-internet.ag) has joined #ceph
[8:14] * hujifeng (~hujifeng@221.226.39.82) has joined #ceph
[8:14] <hujifeng> hello
[8:15] <hujifeng> anyone use chef deploy ceph cluster?
[8:15] <hujifeng> hello
[8:15] <hujifeng> hello
[8:35] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[8:43] * miniyo (~miniyo@0001b53b.user.oftc.net) has joined #ceph
[8:47] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[8:49] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Quit: If at first you don't succeed, skydiving is not for you)
[8:49] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit ()
[8:50] * DarkAceZ (~BillyMays@50.107.52.142) Quit (Read error: Operation timed out)
[9:07] * tnt (~tnt@109.130.72.62) Quit (Ping timeout: 480 seconds)
[9:10] * Macmonac (~opera@194.199.107.6) has left #ceph
[9:10] * Macmonac (~opera@194.199.107.6) has joined #ceph
[9:12] <ccourtaut> morning
[9:16] <Macmonac> morning
[9:19] <Macmonac> ccourtaut: do you have an idea where I can find a solution for my problem of placement across all osds (I have one osd out of 18 which is 95% full)
[9:19] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[9:20] <ccourtaut> Macmonac, nope sorry, if you had been using librados directly the objects wouldn't stripe, but then it should be nb_replica osds that are 95% full, and as i remember you told me you use rbd
[9:21] * hujifeng (~hujifeng@221.226.39.82) Quit (Read error: Connection reset by peer)
[9:21] * hujifeng (~hujifeng@221.226.39.82) has joined #ceph
[9:22] <Macmonac> ccourtaut: i don't know if it's rbd but i use ceph in filesystem mode
[9:23] <ccourtaut> yes true, sorry, don't know much about cephfs, only that it is still experimental
[9:23] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:23] * horsey (~horsey@203.92.58.165) has joined #ceph
[9:26] <ccourtaut> Macmonac, did you read this : http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/12960
[9:27] <ccourtaut> seems quite similar, maybe it could help
[9:27] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:27] * wogri (~wolf@nix.wogri.at) Quit (Remote host closed the connection)
[9:29] <ccourtaut> loicd, http://highscalability.com/blog/2013/6/27/paper-xoring-elephants-novel-erasure-codes-for-big-data.html
[9:31] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[9:34] * wogri (~wolf@nix.wogri.at) has joined #ceph
[9:35] * leseb (~Adium@83.167.43.235) has joined #ceph
[9:35] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[9:36] <Macmonac> ccourtaut: thanx
[9:37] <Macmonac> so the solution seems to be to change the weight of the osd.
[9:38] <Macmonac> maybe i should make a cron script that makes this change periodically?
[9:40] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:42] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[9:44] * n3c8-35575 (~mhattersl@pix.office.vaioni.com) has joined #ceph
[9:45] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[9:49] * bergerx_ (~bekir@78.188.101.175) has joined #ceph
[10:01] * sebastiandeutsch (~sebastian@p5DE82B7B.dip0.t-ipconnect.de) has joined #ceph
[10:03] <loicd> ccourtaut: thanks, good blog entry indeed
[10:08] <ccourtaut> Macmonac, i don't think that's the way you should fix it, but it might work as a temporary fix
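
A minimal sketch of the reweighting being discussed (the osd id 7 and the weight values are placeholders; on cuttlefish this is an override rather than a real fix for uneven placement):

    # lower the override weight of the over-full OSD so CRUSH sends less data to it (range 0.0-1.0)
    ceph osd reweight 7 0.85
    # or adjust the CRUSH weight itself, which persists in the crushmap
    ceph osd crush reweight osd.7 0.90
    # watch the data move and the utilisation even out
    ceph osd tree
    ceph -w
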
[10:08] <ccourtaut> loicd, great! :)
[10:12] * X3NQ (~X3NQ@195.191.107.205) has joined #ceph
[10:12] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[10:29] * LeaChim (~LeaChim@90.221.247.164) has joined #ceph
[10:36] * ScOut3R_ (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[10:37] <hujifeng> anyone use this cookbook to deploy ceph?
[10:37] <hujifeng> https://github.com/ceph/ceph-cookbooks
[10:41] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[11:02] * mschiff (~mschiff@tmo-097-98.customers.d1-online.com) has joined #ceph
[11:03] <madkiss> hujifeng: are you experiencing any problems with it? :-)
[11:03] * zhangjf_zz2 (~zjfhappy@222.128.1.105) Quit (Read error: Connection reset by peer)
[11:05] * zhangjf_zz2 (~zjfhappy@222.128.1.105) has joined #ceph
[11:15] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[11:38] <sebastiandeutsch> I've been playing with the Rados S3 API right now. When I transfer large files (but <2 gb) I sometimes get a warning of a wrong signature: WARNING: MD5 signatures do not match: computed=b022f5c6897f4dfffbf4482e06eb9433, received="1df3d7bf282e98bad9fbfb2d7ec3548c-148" - but the downloaded file looks ok. Any idea where this warning comes from?
[11:39] <madkiss> hm. i fear only a developer might have the exact technical background
[11:43] <sebastiandeutsch> nevermind: I think s3cmd is the culprit
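
For reference, the received= value with its "-148" suffix has the shape of an S3 multipart-upload ETag rather than a plain MD5, which is why s3cmd's comparison fails even though the object is intact; a quick local check of the download (the filename is a placeholder) is simply:

    # compare against the "computed=" value from the warning
    md5sum downloaded-object.bin
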
[11:52] * mschiff (~mschiff@tmo-097-98.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[12:00] * mschiff (~mschiff@tmo-097-98.customers.d1-online.com) has joined #ceph
[12:05] * vipr (~vipr@78-21-229-169.access.telenet.be) Quit (Remote host closed the connection)
[12:16] * leseb (~Adium@83.167.43.235) Quit (Quit: Leaving.)
[12:16] * vipr (~vipr@78-21-229-169.access.telenet.be) has joined #ceph
[12:20] * mschiff (~mschiff@tmo-097-98.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[12:22] * andrei (~andrei@host217-46-236-49.in-addr.btopenworld.com) has joined #ceph
[12:23] * guppy (~quassel@guppy.xxx) Quit (Remote host closed the connection)
[12:23] * guppy (~quassel@guppy.xxx) has joined #ceph
[12:25] * mschiff (~mschiff@tmo-097-98.customers.d1-online.com) has joined #ceph
[12:26] <andrei> hello guys
[12:27] <andrei> could someone please help me with determining what is causing slow requests?
[12:27] <andrei> i am having a rather large number of slow requests
[12:27] <andrei> which happen occasionally, but they tend to happen on a large number of osds
[12:28] <andrei> and they are happening when the cluster is not under load
[12:28] <andrei> for instance, the last time slow requests happened on the 30th in the morning. They lasted for about 40 minutes
[12:29] <andrei> i am looking at the ceph.log file and i can see that there is not a great deal of activity prior to slow requests
[12:29] * BillK (~BillK@124-169-221-120.dyn.iinet.net.au) Quit (Quit: ZNC - http://znc.in)
[12:29] * zhangjf_zz2 (~zjfhappy@222.128.1.105) Quit (Remote host closed the connection)
[12:29] <andrei> like under 1MB/s writes
[12:29] * BillK (~BillK@124-169-221-120.dyn.iinet.net.au) has joined #ceph
[12:30] <andrei> but from what I can see, about 2 minutes before slow requests happened ceph started doing scrubbing on 2 pgs
[12:30] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[12:30] <andrei> from the ceph.log i see this: 2 active+clean+scrubbing+deep;
[12:31] <andrei> which changed to 4 active+clean+scrubbing+deep
[12:32] <andrei> after several minutes it became: 1 peering, 2 remapped+peering, 2 active+clean+scrubbing+deep;
[12:33] <andrei> followed by the first slow request about 40 seconds later
[12:33] <andrei> not sure if this relates somehow
[12:33] <andrei> about 10 seconds after the first slow request i can see: 21 active+recovery_wait, 201 peering, 1 active+clean+scrubbing+deep, 2 active+recovering;
[12:34] <andrei> at about the same time my virtual machines have started panicking as they couldn't write to the disk
[12:34] <andrei> and all crashed (((
[12:35] <andrei> i've checked physical hard disks and I do not see any problems
[12:35] <andrei> i've done some smart tests and also read all osds with dd
[12:35] <andrei> no issues
[12:36] <andrei> networking hasn't dropped and there are no errors on the network interface
[12:37] <fridudad> andrei: which version?
[12:37] <andrei> 0.61.4
[12:37] <andrei> very very basic setup
[12:37] <andrei> 2 servers
[12:37] <andrei> 16 osds
[12:37] <fridudad> andrei: ok strange i had the same with 0.61.3 but it was fixed with 0.61.4
[12:37] <andrei> ipoib network with low latency
[12:38] <andrei> at 10gbit/s+
[12:38] <andrei> are you using ubuntu 12.04?
[12:38] <fridudad> andrei: no debian wheezy
[12:38] <fridudad> but shouldn't matter
[12:39] <andrei> what kernel version do you use?
[12:40] <fridudad> andrei: custom 3.8.13 - do you use btrfs or xfs?
[12:41] * zhangbo (~zhangbo@221.226.39.82) Quit (Remote host closed the connection)
[12:43] <andrei> i am on 3.8.0, the one that comes with ubuntu
[12:43] <andrei> using xfs
[12:46] * leseb (~Adium@83.167.43.235) has joined #ceph
[12:47] * diegows (~diegows@190.190.2.126) has joined #ceph
[12:51] <fridudad> andrei: that sounds good. the problem is the peering in your case. I just know that there was a bug in 0.61 - 0.61.3, no idea about 0.61.4, sorry
[12:51] <andrei> do you know what is happening with peering,
[12:51] <andrei> in terms of what it is and how come i have an issue with it?
[12:57] * leseb (~Adium@83.167.43.235) Quit (Read error: Connection reset by peer)
[12:59] <fridudad> andrei: sorry no can't help with that.
[12:59] * leseb1 (~Adium@83.167.43.235) has joined #ceph
[13:00] <fridudad> andrei: you should wait until the ceph team is up (mostly at 5pm GMT)
[13:01] <andrei> thanks
[13:06] * stacker666 (~stacker66@213.229.187.105) has joined #ceph
[13:06] <stacker666> hi all
[13:08] <stacker666> does somebody have a solution for kernel panics?
[13:09] <stacker666> i have tested different versions and different kernels but the kernel panic still appears
[13:11] * Psi-jack peeks in, and notices no response yet for his question over the weekend. :)
[13:12] * AfC (~andrew@2001:44b8:31cb:d400:4db8:c983:f0e5:48f8) has joined #ceph
[13:13] <andrei> stacker666: are you having kernel panics on the storage servers or on virtual machines stored on ceph?
[13:13] <stacker666> storage servers
[13:13] <stacker666> andrei: storage servers
[13:14] <andrei> ah, not seen that much
[13:14] <andrei> which os are you on?
[13:14] <stacker666> andrei: i have 1 mon server and 2 mon,osd servers
[13:14] <stacker666> andrei: ubuntu 12.04
[13:14] <andrei> same here
[13:14] <stacker666> andrei: 3.2.0-40-lowlatency
[13:15] <stacker666> andrei: it seems that bug
[13:15] <andrei> is your network infiniband?
[13:15] <stacker666> andrei: http://tracker.ceph.com/issues/3204
[13:15] <andrei> i have seen some stability issues with the low latency kernels
[13:15] <stacker666> andrei: ethernet with bondings 2Gb/s
[13:15] <stacker666> andrei: RR
[13:15] <andrei> and switched to the 13.04 kernel, which is called slightly differently
[13:15] <Psi-jack> If anyone can comment on http://ceph.com/docs/master/install/os-recommendations/ -- Regarding CentOS 6.3, why Note 3 isn't flagged on CentOS/RHEL 6.3 due to both kernel 2.6.32 (<2.6.39) and glibc 2.12 (<2.14), I'd be appreciative.
[13:16] <andrei> i am also using ubuntu 12.04 but i've apt-get kernel from raring
[13:16] <andrei> try their 3.8 kernel branch, which is what i am using
[13:16] <andrei> i've not seen any panics yet
[13:17] <stacker666> andrei: ok i try it thanks
[13:17] <stacker666> andrei: kernel from ceph repositories?
[13:17] <andrei> do you need to use the lowlatency kernel?
[13:17] <andrei> stacker666: i am not aware of ceph kernel repos
[13:17] <andrei> are there any?
[13:17] <andrei> could you send me a link?
[13:17] <stacker666> andrei: yes, wait
[13:17] <Psi-jack> andrei: Heh. 3.8 kernel branch? On Ubuntu 12.04 LTS?
[13:18] <andrei> psi-jack: yeah
[13:18] <Psi-jack> Wouldn't that be the raring kernel backport?
[13:18] <andrei> linux-image-generic-lts-raring
[13:18] <andrei> that's the name of the package
[13:18] <Psi-jack> Yeah. That's what I thought.
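
A sketch of pulling in that backported kernel on a stock Ubuntu 12.04 box, as andrei describes (package names as shipped in precise-updates; a reboot into the new kernel is needed afterwards):

    sudo apt-get update
    sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring
    sudo reboot
    uname -r    # should now report a 3.8.x kernel
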
[13:18] <stacker666> andrei: http://gitbuilder.ceph.com/
[13:18] <stacker666> andrei: here
[13:19] <Psi-jack> andrei: Heh, That's why I'm wondering, regarding my curious question on the os-recommendations, why CentOS isn't flagged for its kernel/glibc, because I know they're both behind the recommended versions.
[13:19] <Psi-jack> hehe
[13:20] <andrei> psi-jack: sorry, not really sure. I am a user, not from the ceph team
[13:20] <andrei> ))
[13:20] <Psi-jack> Yeah, I know. :)
[13:21] <andrei> stacker666: thanks.
[13:21] <Psi-jack> I'm hoping elder or joao or someone else come around sometime soon to see my question. :)
[13:21] <andrei> do they have ubuntu kernels there?
[13:21] <stacker666> andrei: thanks to you
[13:22] <Psi-jack> I know that CentOS's team is considering, if not already working on, a spin-release, or whatever they call it, for Ceph specifically, like they've done for Xen. (bleh, Xen!)
[13:23] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[13:23] * hujifeng (~hujifeng@221.226.39.82) Quit (Read error: Operation timed out)
[13:24] * hujifeng (~hujifeng@221.226.39.82) has joined #ceph
[13:24] <stacker666> linux-image-generic-lts-raring <- this kernel works fine?
[13:25] <Psi-jack> stacker666: One would hope so. raring's kernel is 3.8.0, which is definitely reasonably sufficient for good Ceph support.
[13:26] <andrei> this is what i am using
[13:26] <stacker666> psi-jack: ok, thanks!!
[13:28] <Psi-jack> I just run Xubuntu 13.04, so could easily tell which kernel it was. My Ceph servers are still on Arch, until I can schedule time to start the conversion process to Ubuntu 12.04, or more preferably, CentOS 6.4. :)
[13:29] <andrei> stacker666: the URL which you've sent, does it include the kernels created specifically for ceph?
[13:29] <stacker666> andrei: yes
[13:29] <andrei> psi-jack: i've got several client servers which are running centos 6.4
[13:30] <andrei> seems to work okay, but i've not done any extensive testing yet
[13:30] <Psi-jack> andrei: Yeah, me too. Using the elrepo kernel-ml.
[13:30] <andrei> i am on their stock kernel
[13:30] <Psi-jack> But, I'm talking about converting my Ceph servers from Arch to CentOS, also using the kernel-ml kernel.
[13:30] <stacker666> andrei: i have tested a few of them but the kernel panic appears. probably testing kernels
[13:30] <Psi-jack> andrei: Eww, that means you're using the fuse client then.
[13:31] <andrei> nope, i am not using fuse
[13:31] <andrei> i use rbd
[13:31] <Psi-jack> At least, for cephfs, if you use that. RBD may be a different story.
[13:31] <Psi-jack> Ahh gotcha. :)
[13:31] <andrei> via librados i believe
[13:31] <andrei> with kvm integration
[13:31] <Psi-jack> Then it sounds more like you're using qemu-rbd.
[13:31] <andrei> yeah
[13:32] <Psi-jack> Yeah, that's different. Not kernel based at all. :)
[13:32] <Psi-jack> I have a mail server, and two webservers using cephfs; kernel 2.6.32 didn't support ceph at all, so it was either the fuse client, or kernel-ml from elrepo, which is the route I took. :)
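
For context, the two CephFS client options being weighed here look roughly like this (the monitor address, mountpoint and secret file path are placeholders):

    # kernel client - needs a ceph-aware kernel, hence the kernel-ml / raring discussion
    sudo mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # FUSE client - works on a stock EL6 2.6.32 kernel
    sudo ceph-fuse -m 192.168.0.10:6789 /mnt/cephfs
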
[13:33] <Psi-jack> Since I did that, I've been considering the idea of using CentOS for my ceph storage servers, instead of Ubuntu, despite inktank more specifically supporting Ubuntu.
[13:34] <Psi-jack> Hence... This week, I'm trying to finish researching, and communicating with the ceph guys about it, and seeing if there's any pitfalls they might know about, like the whole syncfs(2) system call issue.
[13:37] <Psi-jack> Interesting. It actually kind of looks like syncfs is at least defined in /usr/include on CentOS 6.4.
[13:38] <Psi-jack> So... May actually have it, even in stock kernel.
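
A rough way to check what Psi-jack is describing, i.e. whether the installed glibc and kernel headers expose syncfs(2) (paths are the usual Linux locations; syncfs went upstream in kernel 2.6.39):

    # does this glibc declare a syncfs() wrapper?
    grep -n 'syncfs' /usr/include/unistd.h
    # is the syscall number defined in this system's headers?
    grep -rn 'syncfs' /usr/include/bits/syscall.h /usr/include/asm*/unistd*.h 2>/dev/null
    # running kernel version, for comparison
    uname -r
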
[13:39] * hujifeng (~hujifeng@221.226.39.82) Quit (Read error: Operation timed out)
[13:40] * markit (~marco@151.78.74.112) has joined #ceph
[13:41] <Psi-jack> Looks like it's more specifically in CentOS 6.4's kernel, not CentOS 6.3's.
[13:41] <markit> hi Psi-jack :)
[13:41] <Psi-jack> Moin, markit
[13:42] <markit> btw, I'm trying to investigate ceph, sounds really good, I just have problems finding the right hw that is reliable but not too expensive, i.e. dell sells ssd at a HUGE price
[13:42] <wogri_risc> you don't need ssd's to make ceph work.
[13:42] <Psi-jack> markit: You don't necessarily need all SSD.. A decent setup is using 1 SSD per 3~4 spinners with Ceph.
[13:43] <markit> anyone here has a suggestion for a storage node that can hold 6 drives (one ssd for journal, one hd for boot, 3 for storage) and possibly be expanded with 10GB ethernet (so pci-express 8x)?
[13:43] * Yen (~Yen@2a00:f10:103:201:ba27:ebff:fefb:350a) has joined #ceph
[13:43] <markit> Psi-jack: yes, but a 100GB ssd is priced 380euros!
[13:43] <Psi-jack> Using the SSD for ceph-journal, and even optionally xfs-logdev (XFS's journal device)
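
A minimal sketch of that layout, with the ceph journal and the XFS log both on SSD partitions (all device names and the osd id are placeholders, not a sizing recommendation):

    # data on the spinner, XFS log on an SSD partition
    mkfs.xfs -f -l logdev=/dev/sda5 /dev/sdb1
    mount -o logdev=/dev/sda5 /dev/sdb1 /var/lib/ceph/osd/ceph-0

    # ceph.conf: point the osd journal at another SSD partition
    [osd.0]
        osd journal = /dev/sda6
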
[13:43] <wogri_risc> I believe supermicro is a good brand for what you're looking at.
[13:44] <Psi-jack> markit: Yikes! Road kill!
[13:44] <markit> wogri_risc: thanks, hope there is an italian reseller
[13:44] <markit> I've wasted all the morning flipping from dell to fujitsu site
[13:44] <markit> you start cheap, but as soon as you add what YOU need you discover: a) it's at a stellar price b) it's not supported/available
[13:45] <Psi-jack> markit: What about OCZ SSD's? I've tested the hell out of those and find them quite reliable. Unlike other brands I've tested, such as Intel.
[13:45] * AfC (~andrew@2001:44b8:31cb:d400:4db8:c983:f0e5:48f8) Quit (Quit: Leaving.)
[13:45] <Psi-jack> Intel and Crucial, both horrible SSD drives.
[13:45] <andrei> psi-jack: really, intel is not good?
[13:46] <andrei> i am using intel at the moment
[13:46] <andrei> their 520 series
[13:46] <maswan> How much space does one need for the journals, really?
[13:46] <markit> Psi-jack: the problem is that if you set up a branded server/PC, and you add your own device, you lose the warranty on the whole PC
[13:46] <Psi-jack> andrei: Yeah, really. I destroyed literally hundreds of their SSD drives in 2 weeks.
[13:46] <andrei> really?
[13:46] <Psi-jack> Really/
[13:46] <andrei> damn
[13:46] <andrei> what series were you using?
[13:47] <markit> and what Crucial are you using ?
[13:47] <markit> (just to play safe ;P)
[13:47] <Psi-jack> And I'm not exaggerating. I did several model series, and kept RMAing.. Finally Intel gave up sending me replacements, because they kept getting sent back in T-minus 2 weeks. :)
[13:47] <Psi-jack> markit: The m4.
[13:47] <Psi-jack> That was the only Crucial I tested.
[13:48] <markit> Psi-jack: sorry, what OCZ are you using?
[13:48] <Psi-jack> I've tested and currently use the OCZ Agility 3, and Vertex 3 MAXIOPS
[13:49] <Psi-jack> I haven't yet tried any of the 4's yet, that use their Indilinx or whatever they call it. :)
[13:49] <markit> if 1gbit is not enough, what do people use to connect ceph nodes? Any suggestion about "cheap" but reliable (and compatible) 10GbE adapters and switches?
[13:50] <markit> I would stay on Intel nics, but they are about 380euros
[13:51] <Psi-jack> I would more push on Infiniband or FiberChannel, personally.
[13:52] <fridudad> psi-jack: how did you destroy the intel ssds? I'm running around 280 of them from series 160 to series 520. The oldest is around 3-4 years. I haven't seen a single one dying.
[13:53] <markit> Psi-jack: Infiniband or FiberChannel, and I've no time to become a guru of that technology
[13:54] <markit> ehm, the message was "never used/seen Infiniband or FiberChannel..."
[13:54] * DarkAceZ (~BillyMays@50.107.54.174) has joined #ceph
[13:55] <maswan> FC I'd probably consider both slower and more expensive than 10GigE. IB is cheaper and faster though.
[13:56] <Yen> markit: Ethernet tends to be expensive above 1GbE.
[13:57] <markit> Yen: yes, you could only use bonded interfaces, but max 2-3 I suppose
[13:57] <ofu> Intel NICs and Arista-Switches with twinax cabling
[13:57] <markit> do I have to set up network redundancy among nodes? i.e. 2 nics bonded and going to 2 different switches? what happens if a switch is turned off for some time?
[13:57] <ofu> we use active-passive bonding
[13:58] <markit> ofu: so only one working at any time and the other as 'backup', right?
[13:58] <Psi-jack> fridudad: I wrote a script that basically constantly hammered it with writes, reads, and deletes non-stop until it was specifically stopped, or it could no longer continue due to total failure.
[13:58] <markit> wouldn't balance-rr be better?
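
For reference, an active-backup bond of the kind ofu describes looks roughly like this on Debian/Ubuntu with the ifenslave package (interface names and addresses are placeholders; swapping bond-mode to balance-rr gives the round-robin variant markit asks about, at the cost of possible packet reordering):

    # /etc/network/interfaces
    auto bond0
    iface bond0 inet static
        address 192.168.0.11
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode active-backup
        bond-miimon 100
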
[13:58] <fridudad> psi-jack: but that would kill any ssd...
[13:59] <Psi-jack> maswan: Yeah, FC6 is slower than 10GbE, by a little bit, depending on the disks themselves that they're attached to.
[13:59] <fridudad> psi-jack: what i've read about ocz was not good at all, that they put in old half-baked chips and so on
[14:00] <markit> btw, what is "an active + clean state" ?
[14:00] <Psi-jack> fridudad: Sure, but in 2 weeks? Not usually.
[14:02] <ofu> markit: bandwidth is not the problem and we like redundancy
[14:02] <andrei> i am using infiniband and pretty happy with it
[14:03] <ofu> only one link active -> only 10gbit of bandwidth per node
[14:03] <andrei> fast and low latency
[14:03] <andrei> plus it was way cheaper compared with the 10G
[14:04] <Psi-jack> fridudad: Heh, I haven't seen any proof of that, that's for sure. I mean they bought out the chips they used to use, making it their own, used in the Vertex 4 now.
[14:04] <Psi-jack> andrei: Yeah, that's why I keep suggesting Infiniband to markit.
[14:05] <andrei> do you use IB?
[14:05] <Psi-jack> It's actually /cheaper/ than 10GbE.
[14:05] <Psi-jack> I have used it.
[14:05] <andrei> yeah
[14:05] <Yen> Psi-jack: Intel SSDs should be reliable.
[14:05] <andrei> especially if you get it second hand
[14:05] <Psi-jack> I'm in the process of buying IB equipment for home. :)
[14:05] <ofu> cheaper and better or only cheaper?
[14:05] <andrei> from ebay
[14:05] <Psi-jack> Yen: *ducks from the grenade!*
[14:05] <andrei> ofu: better? depends on what
[14:06] <andrei> ethernet is widely used and easier to setup
[14:06] <andrei> IB is pretty easy as well if everything is working
[14:06] <markit> ofu: ah, you use 10Gb, I understood you were using 2 x 1Gb in bonding
[14:06] <ofu> and much easier to troubleshoot in my case
[14:06] <andrei> if there are issues, it is not that easy to find help
[14:07] <andrei> plus there are a lot of nice features with IB
[14:07] <andrei> like virtual nics, etc
[14:07] <andrei> if you are looking for a commercial IB vendor, I would suggest looking at Xsigo
[14:08] <markit> ofu: do you remember what model of intel nic you are using? with what OS? Model of the switch? Just to have a look of availability here and prices
[14:08] <andrei> they do a lot of great stuff
[14:08] <ofu> markit: X520SR2 aka 01:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
[14:08] <Psi-jack> ofu: Ahh nice equipment. :)
[14:09] <ofu> right now we use 7124SX from Arista, but when the ceph cluster goes live, we will probably use 7050S-64
[14:10] <markit> ofu: thanks, I'm reading http://en.wikipedia.org/wiki/Twinaxial_cabling and I never heard of such a thing!
[14:10] <ofu> yes, twinax is cool as you do not need as much power as 10gig via rj45
[14:11] <ofu> and it's cheaper than glass fiber
[14:12] <markit> ofu: oh, 7050S-64 -> 24K $, more than my planned entire setup lol
[14:13] <ofu> yes, the 24port switches are less than half of it
[14:14] <markit> a netgear 10-port 10GbE copper switch is about 1,700 $
[14:14] <markit> that's the price I have in mind :)
[14:16] <markit> I have (well, the company I work for has) to make an offer for a 2-node proxmox cluster, and I was thinking about sheepdog at the beginning, but now I would love to have a separate storage cluster with ceph
[14:16] <markit> I've no money to buy different hardware and do tests, so I'm a little puzzled
[14:16] <markit> i.e. are sata 7200 drives + SSD for journal good, or do I need WD velociraptor or sas?
[14:16] <markit> is gbit (maybe bonded) enough or do I need 10GbE?
[14:17] <markit> usually we have single servers with RAID10 and writeback cache, that performance is what I want to achieve
[14:17] <markit> any tips/suggestions from the "real life" world? :)
[14:18] * sebastiandeutsch (~sebastian@p5DE82B7B.dip0.t-ipconnect.de) Quit (Quit: sebastiandeutsch)
[14:23] * markbby (~Adium@168.94.245.1) has joined #ceph
[14:31] * sebastiandeutsch (~sebastian@p5DE82B7B.dip0.t-ipconnect.de) has joined #ceph
[14:42] <joao> Psi-jack, what question were you hoping to get an answer to?
[14:42] * mschiff_ (~mschiff@tmo-107-63.customers.d1-online.com) has joined #ceph
[14:42] <joao> read the backlog and nothing popped up
[14:42] <Psi-jack> joao: Ahh, cool.
[14:42] <Psi-jack> http://ceph.com/docs/master/install/os-recommendations/ -- Regarding CentOS 6.3, why Note 3 isn't flagged on CentOS/RHEL 6.3 due to both kernel 2.6.32 (<2.6.39) and glibc 2.12 (<2.14).
[14:43] <Psi-jack> Is it because syncfs(2) was backported to CentOS's kernel, and since 0.55+ doesn't need glibc support for it, it's fine?
[14:43] * mschiff (~mschiff@tmo-097-98.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[14:43] <markit> Psi-jack: I know the "official" recommendations, would love to know what people do in real life ;)
[14:43] <joao> Psi-jack, I'm afraid I am unable to give you an answer
[14:44] <markit> and is a question open to everyone, not Psi-jack only
[14:44] <joao> Gary Lowell might know better; or maybe the list?
[14:44] <Psi-jack> joao: Happen to know who might be able to? This week I'm investigating it so I can come up with a solution for this weekend to migrate my Arch-based Ceph cluster to CentOS 6.4 or Ubuntu 12.04.
[14:44] <Psi-jack> markit: I'm a Linux Systems Engineer with ~21 years hardcore experience. :)
[14:45] <markit> Psi-jack: I remember it, I've a long experience too but not in this field... I mean, ceph is younger so maybe different people have experienced different scenarios
[14:45] <Psi-jack> joao: Hmm, Gary? Is he on this channel ? ;)
[14:46] <markit> you already told me about your setup some days ago, I took note :)
[14:46] <Psi-jack> hehe
[14:46] <joao> Psi-jack, glowell, but he's not online just yet
[14:46] * Psi-jack nods.
[14:46] <joao> should be around later today
[14:46] <Psi-jack> joao: Thanks!
[14:46] <joao> sage might know as well; sage knows everything
[14:47] <Psi-jack> So far, Arch's been running bobtail since January, but I'm obviously trying to run as far away from Arch as possible. :)
[14:47] <Psi-jack> Yeah, true, sage does tend to know everything. :)
[14:47] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[14:56] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[14:56] * ChanServ sets mode +v andreask
[14:56] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has left #ceph
[14:56] * markit wants to meet mr. sage
[14:57] <Psi-jack> Heh, sage is pretty cool. He's the creator of Ceph. Basically the father of Ceph. :)
[14:58] * jebba (~aleph@2601:1:a300:8f:f2de:f1ff:fe69:6672) Quit (Quit: Leaving.)
[15:11] * jebba (~aleph@70-90-113-25-co.denver.hfc.comcastbusiness.net) has joined #ceph
[15:18] <stacker666> psi-jack: if i install this raring-lts kernel, the iscsi module doesn't work. have you found a solution for this?
[15:18] <Psi-jack> Why would you use iSCSI with Ceph? That's stepping backwards.
[15:18] <Psi-jack> That, and I don't use Ubuntu's raring-lts kernel anywhere.
[15:20] <stacker666> psi-jack: I use it to create a datastore for esx
[15:20] <stacker666> psi-jack: the only way is to export that with iscsi
[15:21] <stacker666> psi-jack: raring-lts is not recommended?
[15:21] <markit> Psi-jack: on ceph site, I've read that the only full tested distro is Precise 12.04 (probably with updated kernel from lts branch), why don't you use it?
[15:21] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Remote host closed the connection)
[15:22] <Psi-jack> Ahh... Yeah... Sorry. No clue for ya there. Is that ietd or open-iscsi?
[15:22] <Psi-jack> Or LIO?
[15:22] <stacker666> psi-jack: ietd
[15:22] <stacker666> psi-jack: iscsitarget
[15:22] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[15:22] * markbby (~Adium@168.94.245.1) has joined #ceph
[15:22] <Psi-jack> stacker666: What issue are you specifically hitting?
[15:23] <stacker666> psi-jack: with a generic 3.2 kernel i have kernel panic
[15:23] <Psi-jack> markit: As mentioned before, my Ceph cluster is currently on Arch. I used to maintain the Ceph AUR package for Arch, until I found out the nightmare that Arch actually is.
[15:24] * andrei (~andrei@host217-46-236-49.in-addr.btopenworld.com) Quit (Ping timeout: 480 seconds)
[15:24] <Psi-jack> 3.2 kernel?
[15:24] <Psi-jack> Oh, that's what comes with 12.04.
[15:24] <stacker666> psi-jack: yes
[15:24] <markit> Psi-jack: yes, I know, but you mentioned to go for CentOS probably
[15:24] <Psi-jack> markit: Yep. Because I prefer CentOS to Ubuntu.
[15:24] <markit> I see
[15:25] <Psi-jack> Used to be the opposite, but ever since I started delving into CentOS hardcore, I've been appreciating it a lot more, since CentOS 6.4. ;)
[15:25] <stacker666> psi-jack: yes, but at this moment i cannot change the OS. i want to find a solution with ubuntu
[15:25] <Psi-jack> stacker666: Hmm, What about with raring?
[15:25] <stacker666> psi-jack: i cannot compile the dkms module
[15:25] <stacker666> psi-jack: to use the iscsi module
[15:26] <Psi-jack> Hmmm..
[15:26] <Psi-jack> stacker666: Have you tried open-iscsi?
[15:26] <stacker666> psi-jack: this is the client i think
[15:27] <stacker666> psi-jack: i use iscsitarget
[15:27] * rektide (~rektide@192.73.236.68) has joined #ceph
[15:27] <stacker666> psi-jack: but it need iscsitarget-dkms
[15:28] <stacker666> psi-jack: this package fails when it compiles in raring-lts kernel
[15:31] * mschiff_ (~mschiff@tmo-107-63.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[15:33] * redeemed (~quassel@static-71-170-33-24.dllstx.fios.verizon.net) has joined #ceph
[15:33] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[15:34] <stacker666> psi-jack: mmmm i can try with tgt
[15:35] * sebastiandeutsch (~sebastian@p5DE82B7B.dip0.t-ipconnect.de) Quit (Quit: sebastiandeutsch)
[15:35] <stacker666> psi-jack: it doesn't need this module to work
[15:35] <stacker666> psi-jack: :)
[15:40] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:45] <horsey> Hello, I am looking at a storage solution for a web backend that will store git repositories. I just came across ceph. Is ceph a good idea for storing a large number of git repositories?
[15:46] <Psi-jack> stacker666: Oh yeah tgt, aka LIO
[15:46] <stacker666> psi-jack: it works :)
[15:46] <Psi-jack> stacker666: I was trying to remember what it was called, tgt uses the LIO implementation. :)
[15:47] <joelio> got a strange issue with live migrations on libvirt. Doing a virsh migrate gives me the error - could not open disk image rbd:one/one-24-478-0:auth_supported=cephx: No such file or directory
[15:47] <joelio> however, if I run that command directly on that system, it's ok
[15:50] <Psi-jack> joelio: Sounds like... a libvirt issue.
[15:52] <joelio> well, sure, just wondering if anyone else has had this issue
[15:56] * drokita (~drokita@199.255.228.128) has joined #ceph
[15:59] * mschiff (~mschiff@tmo-110-15.customers.d1-online.com) has joined #ceph
[16:01] <Psi-jack> I.. Don't, and will never use libvirt for servers ever again. :)
[16:02] * BillK (~BillK@124-169-221-120.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:02] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[16:05] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[16:08] <joelio> Psi-jack: I have no choice, plus it's actually not that bad imho
[16:09] * portante is now known as portante|afk
[16:13] * portante|afk is now known as portante
[16:15] * portante is now known as portante|afk
[16:26] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:37] <Psi-jack> joelio: Eh, it's not that great, either.
[16:37] <Psi-jack> The Pacemaker integration is reasonable, so far, but libvirt itself is just so breakable it's not even funny. :)
[16:38] <joelio> I've never had any issues with it up until using it with Ceph, to be fair.
[16:38] * stacker666 (~stacker66@213.229.187.105) Quit (Ping timeout: 480 seconds)
[16:39] * sebastiandeutsch (~sebastian@p5DE82B7B.dip0.t-ipconnect.de) has joined #ceph
[16:39] <Psi-jack> Heh, I used libvirt for several months before I went back to using Proxmox VE which had vastly improved with the 2.x series at the time. :)
[16:41] <Psi-jack> joelio: It's "not bad" for generalized use, IMHO, but it's not production-quality yet, either. That's where I'd draw the line.
[16:42] <joelio> umm, it's been around for 7-8 years?
[16:42] <Psi-jack> Or at least, wasn't, :)
[16:42] <joelio> it's in use in many productiion areas
[16:42] <joelio> much more than Proxmox, I can guarantee you ;)
[16:42] <Psi-jack> I dunno about that. I hear much much MUCH more about Proxmox VE than I do libvirt. :)
[16:43] <joelio> How about every Redhat Virtual Suite?
[16:43] <Psi-jack> I rarely ever hear about RHVS in-use much.
[16:44] <joelio> OpenNebula (what I use - hence this issue)
[16:44] <joelio> OpenStack?
[16:45] <Psi-jack> OpenStack's gaining some popularity, primarily on a hosting-services level.
[16:45] <Psi-jack> For in-house virtualization, it's not that great given its design model.
[16:45] <joelio> I guess it depends how competent you are
[16:45] <joelio> proxmox is easy to install, I used it for Sahanafoundation.org
[16:45] <Psi-jack> I'm quite competent. :)
[16:46] <joelio> for my $WORK now, I use OpenNebula, much more tunable
[16:48] * joelio is speaking at OpenNebula conf, be good to catch up with any Ceph users if going?
[16:48] <darkfaded> <- will be there
[16:50] <joelio> should be fun
[16:50] <Psi-jack> heh
[16:50] <Psi-jack> Bleh, having issue with my workstation's xfce4-session crashing, randomly, again. :/
[16:51] * markbby (~Adium@168.94.245.2) has joined #ceph
[16:52] * joelio fixes libvirt issue with some apparmor abstraction masking
[16:53] <joelio> eat it RBAC
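
One common shape of the AppArmor tweak joelio is alluding to, on stock Ubuntu libvirt, is to let qemu read the ceph client config; treat this as a sketch, since the exact rules needed vary by setup:

    # append to /etc/apparmor.d/abstractions/libvirt-qemu
      /etc/ceph/ceph.conf r,
      /etc/ceph/** r,

    # then reload the profiles
    sudo service apparmor reload
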
[16:55] * gaveen (~gaveen@175.157.192.219) has joined #ceph
[17:05] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[17:06] * jlogan1 (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[17:08] * horsey (~horsey@203.92.58.165) Quit (Ping timeout: 480 seconds)
[17:18] * yehudasa_ (~yehudasa@2602:306:330b:1410:a1cf:686:e9:7ff1) Quit (Ping timeout: 480 seconds)
[17:23] <joelio> am I right in thinking that rbd caching needs to be explicitly put in the libvirt XML config too - not just in ceph.conf?
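
For what it's worth, the usual pairing is an rbd cache setting on the client side of ceph.conf plus a matching cache mode in the libvirt disk element; a sketch, reusing the image name from the earlier migration error (the cephx auth/secret part is omitted here):

    # ceph.conf on the hypervisor
    [client]
        rbd cache = true

    <!-- libvirt domain XML -->
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='one/one-24-478-0'/>
      <target dev='vda' bus='virtio'/>
    </disk>
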
[17:27] <niklas> What do I do if an osd does not come up, using ceph-deploy?
[17:27] <niklas> I tried "ceph-deploy osd create host:sdu" and it does not complain
[17:27] <niklas> but on the host, sdu is not mounted
[17:28] <joelio> what does it say with the -v flag set?
[17:28] <niklas> on the osd host three processes like this " /usr/bin/python /usr/sbin/ceph-disk activate /dev/sdu1" are running
[17:30] <joelio> is there disk activity? (iostat -x 1)
[17:30] <joelio> otherwise the key creation may not be right
[17:30] <niklas> but other osds on the same host work…
[17:31] <niklas> no disk activity
[17:31] <niklas> just zapped sdu, waiting for ceph-deploy -v create … to finish
[17:32] <joelio> yea, I wouldn't run tasks in parallel. Just wait for one to finish first. If it doesn't start to take a look at the logs
[17:32] * n3c8-35575 (~mhattersl@pix.office.vaioni.com) Quit (Read error: No route to host)
[17:32] * n3c8-35575 (~mhattersl@pix.office.vaioni.com) has joined #ceph
[17:33] <niklas> ok, I killed any task related to sdu
[17:34] <markit> Psi-jack: is it possible to use the journal ssd to install the OS? Having 1 hd for os, 1 ssd and 3x storage = 5 drives, and most PCs only have 4!
[17:34] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:34] <niklas> and ran ceph-deploy -v osd create host:sdu again
[17:35] <Psi-jack> markit: Heck, I install the OS directly onto the SSD, and put the ceph journals on partitions of the SSD.
[17:35] <Psi-jack> Because the OS itself is barely ever using the disk itself.
[17:35] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Remote host closed the connection)
[17:35] <markit> Psi-jack: no problems doing so? OS swap partition on it too?
[17:35] <Psi-jack> markit: Yep.
[17:35] <markit> great!
[17:36] <niklas> how long should ceph-deploy osd create take?
[17:37] <joelio> niklas: not long, about 30 seconds per OSD iirc
[17:37] <niklas> hmm, it's been running for 5 minutes now, seems stuck:
[17:38] <joelio> check the logs
[17:38] <niklas> Preparing cluster ceph disks cs-bigfoot05:/dev/sdu:
[17:38] <niklas> Deploying osd to cs-bigfoot05
[17:38] <niklas> Host cs-bigfoot05 is now ready for osd use.
[17:38] <niklas> Preparing host cs-bigfoot05 disk /dev/sdu journal None activate True
[17:38] <niklas> no log entries
[17:39] <joelio> and it's not showing up in #ceph osd tree
[17:40] <joelio> both marked in and part of the cluster? 1 1
[17:40] <niklas> nope
[17:40] <niklas> I'm currently setting the cluster up
[17:40] <Psi-jack> niklas: yeah, roughly... Seconds.. 5~30.
[17:41] <niklas> so it has never been in
[17:42] <niklas> Psi-jack: started it 7 Minutes ago, and cpu/ram should not be the problem…
[17:42] <Psi-jack> Hmm yeah.. Sounds like a problem.
[17:42] <niklas> the osd host shows two processes:
[17:42] <niklas> /bin/sh /usr/sbin/ceph-disk-prepare -- /dev/sdu
[17:43] <niklas> /usr/bin/python /usr/sbin/ceph-disk prepare -- /dev/sdu
[17:43] <niklas> both of them idle
[17:43] <joelio> they're the same thing
[17:43] <joelio> one is just instantiation
[17:43] <niklas> ok
[17:43] <niklas> should they be running for osds that are already in?
[17:44] <joelio> no, the will already be prepared
[17:44] <joelio> and active I assume
[17:44] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:44] <niklas> yeah, I guess something went really wrong here
[17:44] <niklas> I only prepared the osds, but still they got up
[17:44] <niklas> Maybe I'll start over
[17:44] <joelio> again, nothing in /var/log/ceph/ceph-osd*log?
[17:45] <niklas> I only have log files for thoose osds that are already up
[17:45] <niklas> *those
[17:49] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[17:51] <joelio> niklas: have you moved directory or anything from where you ran the osd commands before? ceph-deploy reads from current working dir (don't like this at all, personally)
[17:51] <niklas> I just destroyed the cluster and will start over
[17:51] <joelio> one sec, I will gist up my install doc
[17:52] <niklas> When preparing the osds, I tried to do it in parallel and I think the sshd on the osd-host did not like that many connections at the same time…
[17:52] <joelio> I will post my script
[17:52] <joelio> hang on
[17:52] * tnt (~tnt@109.130.72.62) has joined #ceph
[17:55] * oddomatik (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[17:56] <joelio> niklas: https://gist.github.com/anonymous/c1d6df5b99302d1c53d6
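
For readers without the gist, a generic ceph-deploy sequence of the sort being discussed goes roughly like this (hostnames and the disk are placeholders; it is not necessarily identical to joelio's document):

    ceph-deploy new mon1 mon2 mon3
    ceph-deploy install mon1 mon2 mon3 osd1
    ceph-deploy mon create mon1 mon2 mon3
    ceph-deploy gatherkeys mon1
    ceph-deploy disk zap osd1:sdb
    ceph-deploy osd create osd1:sdb
    ceph osd tree    # the new osd should show up and be marked in
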
[17:57] * nwat (~Adium@eduroam-251-132.ucsc.edu) has joined #ceph
[17:57] * LPG (~kvirc@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[17:57] * portante|afk is now known as portante
[18:01] * ScOut3R_ (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:02] * alram (~alram@38.122.20.226) has joined #ceph
[18:02] * markit (~marco@151.78.74.112) Quit (Quit: Konversation terminated!)
[18:02] * LPG (~kvirc@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[18:03] * LPG (~kvirc@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[18:03] * portante is now known as portante|afk
[18:04] <niklas> joelio: thank you
[18:04] <niklas> I'll report back tomorrow
[18:05] <joelio> n/p - hope it all makes sense, if you need clarification, just ask
[18:06] * LPG (~kvirc@c-76-104-197-224.hsd1.wa.comcast.net) Quit ()
[18:07] * LPG (~kvirc@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[18:07] * gary (~gary@217.33.61.67) has joined #ceph
[18:09] * LPG|2 (~kvirc@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[18:09] * LPG|2 (~kvirc@c-76-104-197-224.hsd1.wa.comcast.net) has left #ceph
[18:10] <gary> hi, I'm setting up a 3-node configuration - 1 admin node and 2 server nodes. Do I follow the Object storage quick start instructions (e.g. install Apache/FastCGI, etc) for both server nodes?
[18:13] <joelio> gary: the apache/fastcgi instructions are for RADOS g/w - do you need s3?
[18:13] <joelio> otherwise, follow the normal deployment guide
[18:14] <joelio> if you need RADOS gateway, that can be stood up anywhere with access to the cluster
[18:15] <gary> joelio, yes I do. So I was wondering if I only need one gateway - e.g. on servernode1
[18:15] <gary> ok, cool thanks
[18:15] <joelio> well, it's more of a service then, just approach it like any other apache-style service - think about availability etc.
[18:16] <joelio> could create 2 instances and load balance, or whatever :)
[18:19] <gary> I see. For now, I'll create 1 instance as I'm researching it at the moment. Later on, I may create the second instance and LB
[18:19] <joelio> yea, best bet.. keep it simple for starters
[18:21] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[18:24] * mschiff (~mschiff@tmo-110-15.customers.d1-online.com) Quit (Remote host closed the connection)
[18:27] * tziOm (~bjornar@ti0099a340-dhcp0745.bb.online.no) has joined #ceph
[18:27] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[18:33] <joelio> is there a performance hit for using v2 rbd snapshots? just rolled 100 VMs out using the v2 images and snapshots (thin provisioned), but performance sucks initially. It *seems* to be getting a bit faster, mind
[18:34] <joelio> looking at ceph metrics doesn't seem that much going on
[18:34] <joelio> initially get about 30,000op/s - that settles down. Start doing some tests inside the vm and performance is pretty woeful (10Mb/s)
[18:35] * leseb1 (~Adium@83.167.43.235) Quit (Quit: Leaving.)
[18:35] <joelio> wondering if there's some quiescing of state going on for the snapshots
[18:36] * mattbenjamin (~matt@aa2.linuxbox.com) Quit (Quit: Leaving.)
[18:39] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) has joined #ceph
[18:43] <topro> some CephFS / MDS specialists online? I'm seeing strange MDS cache cleanup behaviour on MDS restart which leads to an endless loop preventing MDS from ever serving clients. the only way to bring cephfs back online was to increase MDS cache size from 100000(default) to 600000 as with that it didn't get into the cleanup endless lopp. known problem?
[18:43] <topro> s/lopp/loop/
[18:44] <gregaf> topro: yeah, known issue; I think I referenced it on the mailing list recently
[18:44] <topro> ^^ with cuttlefish 0.61.4 that is
[18:44] * horsey (~horsey@122.166.181.42) has joined #ceph
[18:44] <gregaf> there's some unknown issue with trimming stale/deleted dentries (plus some known ones)
[18:44] <topro> gregaf: hi, well that finally seems to be the issue with my MDS, remember?
[18:44] <gregaf> yeah
[18:44] <topro> is there a workaround for the moment?
[18:45] <topro> anything I can do when starting MDS?
[18:45] <gregaf> up the cache size, like you did
[18:45] <topro> will I have to increase size even further for every new start?
[18:46] <gregaf> I don't know :(
[18:46] <topro> hmm, I'll let you know ;)
[18:46] <topro> I assume increased cache size means increased memory footprint, right?
[18:47] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) Quit (Read error: Connection reset by peer)
[18:47] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[18:47] <gregaf> yeah, although if you need it then increasing the cache size is going to increase performance anyway
[18:48] <topro> gregaf: well that's strange, the cluster "feels" a lot slower since then
[18:48] * LPG (~kvirc@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Ping timeout: 480 seconds)
[18:49] <topro> its not swapping (yet)
[18:49] <gregaf> hmm, maybe look into the process' memory usage and how the host is doing
[18:49] <gregaf> in a meeting now, bbl
[18:49] <topro> kk, thanks
[18:55] <Psi-jack> Hmmm. Well, I'm finally on the ceph-users mailing list. (was just on -dev)
[18:55] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[18:55] * sagelap (~sage@2600:1012:b007:2e00:59ef:e19:c22c:ec80) has joined #ceph
[18:55] <Psi-jack> Ahh, speaking of sage!
[19:00] * yehudasa_ (~yehudasa@2607:f298:a:607:ea03:9aff:fe98:e8ff) has joined #ceph
[19:06] * LPG (~kvirc@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[19:06] * LPG|2 (~kvirc@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[19:07] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[19:11] * mschiff (~mschiff@81.92.22.210) has joined #ceph
[19:15] * rturk-away is now known as rturk
[19:22] * sebastiandeutsch (~sebastian@p5DE82B7B.dip0.t-ipconnect.de) Quit (Quit: sebastiandeutsch)
[19:23] <vipr> joelio: i'm doing the exact same thing, haven't really tested speeds yet but feels fast. How are you testing?
[19:23] * mk1 (~kr0t@178.172.139.240) has joined #ceph
[19:25] <joelio> vipr: just some dd's inside the VM for now, they're only 1G each, so dd'ing 5GB in each VM
[19:29] <paravoid> sagelap: hey
[19:29] <paravoid> sagelap: #5460 front/back cuttlefish->0.65 issue still unfixed
[19:29] * joshd1 (~joshd@2602:306:c5db:310:6996:4df7:648d:7b25) has joined #ceph
[19:29] <paravoid> and my cluster is not a happy camper :)
[19:29] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[19:30] <vipr> dding to and from /dev/null?
[19:30] <joelio> vipr: no, dd'ing from /dev/null won't give you much to look at ;)
[19:30] * oddomatik (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[19:30] <joelio> zero && nul
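
The sort of in-VM dd test being described, as a sketch (file path and sizes are arbitrary; direct I/O keeps the guest page cache from inflating the numbers):

    # write 1 GB
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
    # read it back
    dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct
    rm /tmp/ddtest
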
[19:31] <vipr> Are you using qemu?
[19:31] * Psi-jack tilts his head.
[19:36] <joelio> vipr: qemu-system-x86_64
[19:37] * xmltok (~xmltok@pool101.bizrate.com) Quit (Remote host closed the connection)
[19:38] * xmltok (~xmltok@relay.els4.ticketmaster.com) has joined #ceph
[19:41] * oddomatik (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[19:41] * xmltok_ (~xmltok@pool101.bizrate.com) has joined #ceph
[19:43] * xmltok (~xmltok@relay.els4.ticketmaster.com) Quit (Read error: Operation timed out)
[19:44] * portante|afk is now known as portante
[19:45] * topro_ (~tobi@ip-109-43-141-0.web.vodafone.de) has joined #ceph
[19:46] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[19:51] * sagelap (~sage@2600:1012:b007:2e00:59ef:e19:c22c:ec80) Quit (Ping timeout: 480 seconds)
[19:51] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[19:53] * mk1 is now known as kr0t
[19:53] * sagelap (~sage@69.sub-70-197-76.myvzw.com) has joined #ceph
[19:54] * kr0t (~kr0t@178.172.139.240) has left #ceph
[19:55] * sebastiandeutsch (~sebastian@p5DE82B7B.dip0.t-ipconnect.de) has joined #ceph
[19:56] * grepory1 (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[19:56] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Read error: Connection reset by peer)
[19:57] * kr0t (~kr0t@178.172.139.240) has joined #ceph
[19:58] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Quit: You think I'm not online. But I'm always here. Even if I'm not typing. I'm here. Reading. Judging.)
[20:01] * John (~john@astound-64-85-225-33.ca.astound.net) has joined #ceph
[20:02] <gary> for the command "sudo radosgw-admin key create --subuser=xxxyyyzzz:swift --key-type=swift", this does not return a key. Should it?
[20:03] <gary> this is returned - "swift_keys": [{ "user": "xxxyyyxxx:swift","secret_key": ""}],
[20:04] * kr0t (~kr0t@178.172.139.240) Quit (Quit: Leaving.)
[20:04] * kr0t (~kr0t@178.172.139.240) has joined #ceph
[20:07] * kr0t (~kr0t@178.172.139.240) has left #ceph
[20:07] <sebastiandeutsch> gary: do you create the subuser for a user?
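
The usual sequence is to create the subuser first and then generate its swift secret; a sketch using gary's uid from above (--gen-secret asks radosgw-admin to generate the key instead of leaving it empty):

    radosgw-admin subuser create --uid=xxxyyyzzz --subuser=xxxyyyzzz:swift --access=full
    radosgw-admin key create --subuser=xxxyyyzzz:swift --key-type=swift --gen-secret
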
[20:10] * kr0t_ (~oftc-webi@178.172.139.240) has joined #ceph
[20:10] * kr0t_ (~oftc-webi@178.172.139.240) has left #ceph
[20:10] * kr0t_ (~oftc-webi@178.172.139.240) has joined #ceph
[20:12] * sebastiandeutsch (~sebastian@p5DE82B7B.dip0.t-ipconnect.de) Quit (Quit: sebastiandeutsch)
[20:14] * bergerx_ (~bekir@78.188.101.175) Quit (Quit: Leaving.)
[20:17] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[20:25] <davidz> paravoid: What version are the MONs running? What version(s) are the OSDs running?
[20:27] <paravoid> mons are on 0.65
[20:27] <paravoid> osds 0.61.3, besides the new ones that I already reported
[20:28] * jluis (~JL@89-181-149-236.net.novis.pt) has joined #ceph
[20:34] * joao (~JL@89-181-151-112.net.novis.pt) Quit (Ping timeout: 480 seconds)
[20:39] <sagelap> sjust: did you look at wip-5470?
[20:40] * xmltok_ (~xmltok@pool101.bizrate.com) Quit (Quit: Bye!)
[20:42] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[20:44] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[20:44] * ChanServ sets mode +v andreask
[20:44] * andreask (~andreask@h081217068225.dyn.cm.kabsi.at) has left #ceph
[20:44] * xmltok (~xmltok@pool101.bizrate.com) Quit ()
[20:45] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[20:46] * kr0t_ (~oftc-webi@178.172.139.240) Quit (Quit: Page closed)
[20:48] * sagelap (~sage@69.sub-70-197-76.myvzw.com) Quit (Ping timeout: 480 seconds)
[20:50] <gary> sebastiandeutsch - yes, the subuser is xxxyyyzzz:swift
[20:50] * xmltok (~xmltok@pool101.bizrate.com) Quit ()
[20:52] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[20:53] * xmltok (~xmltok@pool101.bizrate.com) Quit ()
[20:54] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[20:55] * xmltok (~xmltok@pool101.bizrate.com) Quit ()
[20:56] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[20:57] * horsey (~horsey@122.166.181.42) Quit (Ping timeout: 480 seconds)
[20:57] * kr0t (~kr0t@178.172.139.240) has joined #ceph
[21:02] <gary> I'm trying to connect to the RADOS g/w with Cyberduck and get a pop-up "I/O Error" window saying "Connection failed. Unrecognised SSL message, plaintext connection?" Any thoughts?
[21:03] * scuttlemonkey_ is now known as scuttlemonkey
[21:04] <gary> scuttlemonkey, can you help
[21:05] <rturk> gary: I've only had success using Cyberduck with radosgw via ssl
[21:05] <rturk> have you tried running through http://ceph.com/docs/master/start/quick-rgw/#enable-ssl ?
[21:06] <gary> rturk: I have enabled ssl
[21:06] * diegows (~diegows@200.68.116.185) has joined #ceph
[21:07] <scuttlemonkey> hey Gary, was just looking through your email
[21:07] <scuttlemonkey> your cluster set up correctly now?
[21:07] <scuttlemonkey> ceph -s returning 'health ok'
[21:10] <gary> monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[21:10] <gary> ceph_tool_common_init failed.
[21:10] <scuttlemonkey> hmm
[21:11] <scuttlemonkey> did you ever make it successfully through the quickstart guide for a small test setup? Or did you jump right in to this?
[21:12] <gary> jumped to this. But it should be no problem going from 2 to 3 nodes?
[21:13] <gary> I think the issue is elsewhere!
[21:13] <scuttlemonkey> well the gateway isn't going to function properly w/o a ceph cluster
[21:13] <scuttlemonkey> so we need to iron that wrinkle first
[21:14] <gary> agreed
[21:15] <gary> I'll email you the steps I took to create the cluster
[21:15] <scuttlemonkey> can you send to 'community@inktank.com' ?
[21:15] <rturk> or paste bin them and post the link here
[21:16] <rturk> so all of us can take a look :)
[21:16] <scuttlemonkey> or that
[21:16] <gary> ok
[21:16] <scuttlemonkey> the rest of my day is pretty well spoken for, but I'll take a look as soon as humanly possible
[21:16] <gary> thanks
[21:26] * sagelap (~sage@120.sub-70-197-74.myvzw.com) has joined #ceph
[21:26] * markl (~mark@tpsit.com) Quit (Ping timeout: 480 seconds)
[21:27] <loicd> sjust: https://github.com/ceph/ceph/blob/master/src/osd/osd_types.h#L1858 is this the ObjectContext you suggest I write tests for ?
[21:29] <kr0t> hi. i have a cluster deployed with mkcephfs. i read this article http://ceph.com/docs/master/rados/deployment/ceph-deploy-transition/ and my setup looks compatible with ceph-deploy. But when i try gatherkeys from the mon0 node i see: ceph_deploy.gatherkeys WARNING Unable to find /var/lib/ceph/mon/ceph-{hostname}/keyring on ['mon0']. Could anybody explain where my mistake is or what's wrong?
[21:29] <sjust> loicd: basically
[21:29] <sjust> the interesting part is probably the ObjectContext registry in ReplicatedPG
[21:29] <sjust> in particular, a good first step would be to eliminate the manual refcounting
[21:33] <joelio> kr0t: Do you have a standard default path for the ceph setup (i.e. osds mounted in /var/lib/ceph/osd etc.)? I think the tool only reads standard places and/or may not read the ceph config (I'm not entirely sure). I guess you could use 'ceph auth list' to get the right keyring for the mon and put it in the expected path.
[21:34] <joelio> try and pull the config first too?
[21:34] <joelio> ceph-deploy -v config pull host
[21:34] <joelio> (I think)
[21:35] <joelio> the interactions inside the tool aren't too clear to me, really need to grok the source
[21:35] * loicd looking
[21:35] <gary> rturk/all - http://pastebin.com/kcW6CJ0N
[21:36] <kr0t> yes i have:
[21:36] <kr0t> osd like /var/lib/ceph/osd/ceph-4
[21:36] <kr0t> mon like /var/lib/ceph/mon/ceph-b
[21:37] <joelio> kr0t: is there a keyring in there though?
[21:37] <kr0t> joelio: when i pull i get cluster config
[21:37] <joelio> yea, the tool reads from current working dir
[21:37] <joelio> afaik
[21:41] <kr0t> joelio: i see: ceph.bootstrap-osd.keyring, ceph.bootstrap-mds.keyring, ceph.client.admin.keyring
[21:42] <joelio> just try the gatherkeys again now?
[21:43] <kr0t> Checking mon0 for /var/lib/ceph/mon/ceph-{hostname}/keyring, Unable to find /var/lib/ceph/mon/ceph-{hostname}/keyring on ['mon0'], Have ceph.bootstrap-osd.keyring, Have ceph.bootstrap-mds.keyring
[21:44] <joelio> right, just get the key for the mon then and place it in the path expected
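A hedged sketch of what that could look like, assuming (from the discussion) the mon id is 'a' and the mon host is 'mon0', and copying rather than moving since the original keyring belongs to the monitor:

    # on mon0: copy the existing mon keyring to the path ceph-deploy is checking
    sudo mkdir -p /var/lib/ceph/mon/ceph-mon0
    sudo cp /var/lib/ceph/mon/ceph-a/keyring /var/lib/ceph/mon/ceph-mon0/keyring
    # then, from the admin node
    ceph-deploy gatherkeys mon0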
[21:46] * dmick (~dmick@2607:f298:a:607:240e:be71:36a7:b401) has joined #ceph
[21:46] <rturk> gary: if you run "ceph -k my-cluster/ceph.client.admin.keyring -s" does it still give you the missing keyring error?
[21:47] <rturk> (run from the host you ran ceph-deploy on)
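Spelled out, a sketch of that command run from the ceph-deploy working directory (directory name taken from the discussion; the -c is an added assumption, useful if there is no /etc/ceph/ceph.conf on that host):

    cd ~/my-cluster
    ceph -c ceph.conf -k ceph.client.admin.keyring -s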
[21:50] <paravoid> davidz: anything else I can do to help?
[21:51] <kr0t> joelio: can i remove /var/lib/ceph/mon/ceph-a/keyring after putting it in the new place?
[21:51] <joelio> is that not the file it's after?
[21:52] <joelio> maybe it's a bug then, looking at the error - it should be (I figure) doing the {hostname} variable substitution
[21:52] <dmick> kr0t: no, that's part of the monitor
[21:54] <joelio> kr0t: If you don't *need* to use ceph-deploy, I'd hold off until it matures a while
[21:54] * sagelap1 (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[21:54] <joelio> if it is a bug though, report it :)
[21:54] * topro_ (~tobi@ip-109-43-141-0.web.vodafone.de) Quit (Quit: Konversation terminated!)
[21:56] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (Quit: leaving)
[21:57] <gary> rturk: ceph is not installed on my admin node, so I cannot run ceph. Should it be?
[21:57] <paravoid> is it safe to downgrade from 0.65 to 0.64?
[21:57] <paravoid> downgrade osds that is
[21:57] <kr0t> joelio: i'm not sure if it's a bug or not, i just don't understand why ceph-deploy uses {hostname} in the path ...
[21:57] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[21:57] * sagelap (~sage@120.sub-70-197-74.myvzw.com) Quit (Ping timeout: 480 seconds)
[21:58] <kr0t> thanks for the help
[21:59] <joelio> kr0t: yea, I'm not massively au fait with it. My guess is that it's not reading the cluster ceph.conf properly and/or the implementation's not right. Perhaps that should be /var/lib/ceph/mon/ceph-{id}/keyring on ['mon0']?
[22:00] <kr0t> joelio: right, as the article says: http://ceph.com/docs/master/rados/deployment/ceph-deploy-transition/
[22:01] * yehudasa_ (~yehudasa@2607:f298:a:607:ea03:9aff:fe98:e8ff) Quit (Ping timeout: 480 seconds)
[22:01] <joelio> I think it's just some python scripts, so it may be possible to monkey-patch it to test.
[22:06] <rturk> gary: You can also run "ceph-deploy admin [node]" to copy keys to one of your nodes, then run the command from there (but with /etc/ceph/ceph.client.admin.keyring)
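A sketch of that route, with a hypothetical node name:

    # pushes ceph.conf and the admin keyring into /etc/ceph on the target
    ceph-deploy admin cephserver1
    # then, on cephserver1
    sudo ceph -s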
[22:07] <gary> rturk: /etc/ceph/ceph.client.admin.keyring exists in the server node and ~/my-cluster/ceph.client.admin.keyring exists in the admin node. And the contents are the same.
[22:07] <rturk> if you use that key to run "ceph -s", does it show a healthy cluster?
[22:08] * vhasi_ is now known as vhasi
[22:12] <davidz> paravoid: no I'm going to try to reproduce
[22:12] <gary> ceph -k /etc/ceph/ceph.client.admin.keyring -s on the server node gives monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication and ceph_tool_common_init failed.
[22:17] <joelio> gary: running as root, or permissions ok for your user on the keyring?
[22:20] <rturk> gary: yeah, ceph-deploy leaves the keyring owned by root and mode 600
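A quick way to confirm and work around that, assuming the default path (loosening the mode makes the admin key readable by every local user, so plain sudo is the safer habit):

    ls -l /etc/ceph/ceph.client.admin.keyring            # typically root:root, mode 600 after ceph-deploy
    sudo ceph -s                                         # simplest: run as root
    sudo chmod 644 /etc/ceph/ceph.client.admin.keyring   # or loosen the mode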
[22:21] <gary> Ah! Now on the server it returns ... health HEALTH_WARN clock skew detected on mon.cephserver2
[22:21] <gary> monmap e1: 2 mons at {xyzserver1=xx.xx.xx.xx:6789/0,xyzserver2=x.xx.xx.xx:6789/0}, election epoch 4, quorum 0,1 xyzserver1,xyzserver2
[22:21] <gary> osdmap e38: 6 osds: 6 up, 6 in
[22:21] <gary> pgmap v152: 248 pgs: 248 active+clean; 9921 bytes data, 207 MB used, 11158 GB / 11158 GB avail
[22:21] <gary> mdsmap e4: 1/1/1 up {0=xyzserver1=up:active}
[22:23] * mikedawson (~chatzilla@c-68-58-243-29.hsd1.sc.comcast.net) has joined #ceph
[22:29] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[22:30] <rturk> gary: cool! using ntp on your hosts?
[22:33] <gary> i thought I was - what's the best way to enforce it?
[22:33] <rturk> dunno off the top of my head, just noticing that ceph detected clock skew
[22:34] <gary> yeah, clocks are way off current time
[22:34] <rturk> I dunno if that would cause your radosgw issue
[22:34] <rturk> but it's worth looking into anyway :)
[22:35] <gary> sudo apt-get install ntp doesn't have any effect
[22:35] <janos> did you turn it on?
[22:36] <janos> that's just installing it
[22:36] <rturk> for your radosgw thing, can you pastebin the relevant parts of /var/log/ceph/radosgw.log and your apache error log?
[22:36] <rturk> that might contain some clues
[22:36] <janos> also, you may need to issue an ntpdate -u {some time server here} before turning on ntpd
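A minimal sequence along those lines for Ubuntu 12.04 (the time server is a placeholder):

    sudo apt-get install ntp
    sudo service ntp stop
    sudo ntpdate -u pool.ntp.org   # step the clock once while ntpd is stopped
    sudo service ntp start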
[22:44] <rturk> gary: I'm going to get a bite to eat, back in a bit :)
[22:46] * fridudad_ (~oftc-webi@p5B09C867.dip0.t-ipconnect.de) has joined #ceph
[22:47] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:49] <gary> The install started the ntpd daemons
[22:51] <gary> the clocks are pretty close now but sudo ceph -s is still showing the skew warning
[22:53] <janos> yikes the install started it?
[22:53] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[22:53] <janos> scary
[22:54] <janos> i don't like installs doing anything but installing
[22:54] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[23:00] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[23:01] * andrei (~andrei@host86-155-31-94.range86-155.btcentralplus.com) has joined #ceph
[23:01] <andrei> hello guys
[23:01] <gary> radosgw.log is empty, and nothing of note in the Apache2 log. Need to call it a day now.
[23:01] <andrei> was wondering if anyone from inktank is still here?
[23:01] * gary (~gary@217.33.61.67) Quit ()
[23:02] <andrei> i wanted to ask about ubuntu kernels that you package in your repository
[23:02] * mattbenjamin (~matt@aa2.linuxbox.com) has left #ceph
[23:03] <andrei> is anyone using the kernel packages supplied by the ceph repo in production?
[23:10] <davidz> andrei: Don't know of any, but maybe someone monitoring here might chime in.
[23:11] * jebba (~aleph@70-90-113-25-co.denver.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[23:13] * diegows (~diegows@200.68.116.185) Quit (Read error: Operation timed out)
[23:14] <andrei> davidz: thanks
[23:14] <andrei> the second question: could someone help me determine the cause of the slow requests that I am experiencing on a regular basis?
[23:14] <andrei> this happens on the majority of osds
[23:15] <andrei> and I can't blame it on networking issues as there are no errors on the interface
[23:15] <andrei> it's an ipoib network 10gbit/s
[23:16] <andrei> the last time slow requests lasted for about 40 minutes
[23:16] <andrei> and they started right after the deep scrub
[23:17] <andrei> all virtual machines running on the ceph cluster were not able to write to their drives and panicked (((
[23:19] * markl (~mark@tpsit.com) has joined #ceph
[23:21] <davidz> andrei: I would check your memory/CPU utilization. Do you have the minimum recommended RAM for the number of OSDs you are running? How much memory are the current OSDs using? During the slow requests was the CPU utilization high? If you manually initiate a deep-scrub can you reproduce the behavior?
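A sketch of those checks (the osd id is a placeholder):

    free -m && top -bn1 | head     # memory and CPU on the osd host during the slow requests
    ceph osd deep-scrub 0          # manually kick off a deep scrub on one osd
    ceph -w                        # watch for slow request messages while it runs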
[23:22] * fridudad_ (~oftc-webi@p5B09C867.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[23:23] <andrei> davidz: the servers have 8 osds each and 24gb of ram
[23:23] <andrei> the servers were not utilised at all
[23:24] <andrei> the ceph activity before this happened was below 1mb/s
[23:24] <andrei> i've seen this happening without scrubbing
[23:24] <andrei> ceph is under testing at the moment, so, there is not a great deal of activity at all
[23:24] <andrei> i've got 6 vms which are pretty much idle
[23:25] <andrei> i am running 0.61.4
[23:25] <andrei> on ubuntu 12.04 with raring kernel, which is 3.8 branch
[23:26] * jebba (~aleph@2601:1:a300:8f:f2de:f1ff:fe69:6672) has joined #ceph
[23:32] * Tamil (~Adium@cpe-108-184-66-69.socal.res.rr.com) has joined #ceph
[23:35] <sagewk> jluis: ping me when you have a patch to review :)
[23:36] <jluis> sagewk, compiling it now
[23:36] <jluis> I should've gotten ccache working again
[23:37] <jluis> shifting between branches makes my life a living hell (compile-wise)
[23:37] <sagewk> i keep separate trees checked out, for master/next, cuttlefish, bobtail
[23:37] <sagewk> reduces the pain
[23:38] <jluis> I tried that once
[23:38] <jluis> ended up with a patch set in multiple trees
[23:38] <jluis> sagewk, just going to run it through vstart and a couple of proposals to make sure it's okay :)
[23:38] <andrei> can I use ubuntu kernel that comes with ceph repo for production? Is this recommended over the stock ubuntu 12.04 kernel?
[23:39] <sagewk> andrei: our kernel is compiled with a random hardware selection that covers our qa lab and a zillion debug options enabled.. not recommended for production
[23:40] * nwat (~Adium@eduroam-251-132.ucsc.edu) has left #ceph
[23:41] * danieagle (~Daniel@186.214.56.159) has joined #ceph
[23:42] <andrei> sagewk: thanks
[23:42] <sagewk> np
[23:42] <andrei> what is the recommended kernel that I should use?
[23:42] <sagewk> for kernel rbd i assume?
[23:42] <andrei> is ubuntu's stock kernel okay for running ceph?
[23:42] <andrei> rbd
[23:42] <andrei> yeah
[23:42] <andrei> with kvm
[23:42] <sagewk> if you're using kvm + librbd, no need for any special kernel
[23:43] <sagewk> you only need recent kernels for the kernel rbd driver (/dev/rbd...)
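Roughly, the two paths look like this (pool/image names are hypothetical, and the qemu invocation is only a sketch):

    # librbd: qemu/kvm opens the image directly, no rbd kernel module involved
    qemu-system-x86_64 -drive file=rbd:rbd/vm1,format=raw ...
    # kernel rbd: needs a recent kernel and exposes a block device
    sudo rbd map rbd/vm1           # creates /dev/rbd0 (or similar)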
[23:43] <jebba> andrei: does ubuntu have something like backports? On debian wheezy, I used debian backports to get linux-image-3.9-0.bpo.1-amd64
[23:43] <andrei> sagewk: okay, so my problem is not kernel related i guess
[23:43] <andrei> jebba: i am using backported kernel 3.8
[23:44] <andrei> not sure why, but my cluster stops working when i stop one of the osd servers
[23:44] <andrei> even though my failure domain is set to host
[23:44] <andrei> i can see the following status: health HEALTH_WARN 1480 pgs degraded; 312 pgs peering; 294 pgs stuck inactive; 1657 pgs stuck unclean; recovery 377112/904350 degraded (41.700%); 8/16 in osds are down; noout flag(s) set
[23:45] <sagewk> ceph health detail | head to get a pgid that is peering, then ceph pg <pgid> query to see why
[23:45] <andrei> thanks, will try
[23:46] <andrei> sagewk: the last command just hangs
[23:46] <sagewk> ceph pg map <pgid>
[23:46] <sagewk> and see if the first acting osd is up
[23:46] <sagewk> it may be that daemon is stuck, or they are all down
[23:48] <andrei> osdmap e7451 pg 3.61e (3.61e) -> up [7] acting [7]
[23:48] <sagewk> does 'ceph tell osd.7 version' succeed?
[23:48] <andrei> does that mean that osd.7 is responsible for this pg?
[23:48] <sagewk> yeah
[23:48] <jluis> sagewk, wip-5484
[23:49] <sagewk> and query will normally tell you what it is doing with that pg..
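Put together, the sequence being walked through here (pgid and osd id are the ones from the paste above):

    ceph health detail | head      # pick a pgid that is stuck peering
    ceph pg map 3.61e              # shows the up/acting sets, e.g. acting [7]
    ceph pg 3.61e query            # asks the primary osd; hangs if that daemon is stuck
    ceph tell osd.7 version        # cheap liveness check of the primary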
[23:49] <jluis> I honestly don't fancy the solution, but it's the least intrusive I found
[23:49] <andrei> the command hangs
[23:49] <andrei> the log file shows:
[23:49] <andrei> 2013-07-01 22:49:04.960876 7f54e2405700 1 heartbeat_map reset_timeout 'OSD::op_tp thread 0x7f54e2405700' had timed out after 15
[23:49] <jluis> the one that would satisfy me would involve changing the handling on the peons
[23:49] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[23:50] <andrei> osd.7 process is running
[23:51] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[23:52] <sagewk> jluis: i think the problem is that we are dealing with bufferlists and not the transactions here. maybe we change propose_new_value and the Proposal structs to take the transaction, and encode at the last minute
[23:52] <sagewk> ?
[23:52] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[23:52] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[23:52] <andrei> sagewk: all other live osds return the version
[23:52] <andrei> apart from osd.7
[23:53] <sagewk> restart that daemon, or attach with gdb so we can see what it is stuck doing
[23:53] <jluis> sagewk, that could work too; I'm not seeing any impediment to that
[23:53] <sagewk> bufferlists are efficient to copy (ref counted) but txns aren't, sort of annoying.
[23:54] <jluis> well, there's that, but we shouldn't keep them in many places anyway
[23:54] <Psi-jack> sagewk: Still handy? ;)
[23:54] <sagewk> jluis: blah
[23:55] <sagewk> let's just move your decode/reencode into the if statement so we only do it on the very first commit
[23:55] <jluis> sagewk, we could also add reference counting to the transactions, but that feels a bit over the top
[23:56] <jluis> sagewk, the last_committed put should go away then
[23:56] <jluis> iirc, we're doing that all the same during commit() and store_state()
[23:57] <andrei> sagewk: http://ur1.ca/ei13i
[23:57] <andrei> this is the gdb attach to the osd.7 process
[23:57] <sagewk> jluis: well, for last_committed we'd need to decode/reencode all txns here
[23:57] <jluis> sagewk, we could consider sending a tx instead of a bl
[23:57] <andrei> does it tell you anything useful?
[23:57] <sagewk> this way we only do the first one
[23:57] <Psi-jack> Hmmm. sage looks busy. :)
[23:58] <dmick> andrei: thread apply all bt
[23:58] <dmick> which will be long, prepare yourself
[23:58] <andrei> dmick: can i dump the output to a text file?
[23:58] <andrei> with gdb?
[23:59] <sagewk> jluis: that's a bigger change, let's just decode/reencode for the last_committed == 1 case.
[23:59] <jluis> sagewk, yeah; besides, the last_committed is already updated on both store_state() and commit() -- just checked
[23:59] <sagewk> yeah
[23:59] <jluis> so it's quite redundant anyway
[23:59] <jluis> also, agree it's a big change
[23:59] <dmick> andrei: I think so; output, maybe?..
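One way to capture that with gdb's logging support (the output path is a placeholder):

    (gdb) set logging file /tmp/osd7-backtrace.txt
    (gdb) set logging on
    (gdb) thread apply all bt
    (gdb) set logging off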

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.