#ceph IRC Log

IRC Log for 2014-08-26

Timestamps are in GMT/BST.

[0:00] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[0:01] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[0:02] * dmsimard is now known as dmsimard_away
[0:05] * andrew_ (~oftc-webi@32.97.110.56) Quit (Remote host closed the connection)
[0:08] * joef (~Adium@2620:79:0:8207:e9d1:3116:be96:3a6) Quit (Quit: Leaving.)
[0:09] * ScOut3R (~ScOut3R@254C46CC.nat.pool.telekom.hu) Quit (Quit: Leaving...)
[0:09] * reed (~reed@209.163.164.50) Quit (Ping timeout: 480 seconds)
[0:10] * garphy is now known as garphy`aw
[0:14] * jobewan (~jobewan@snapp.centurylink.net) Quit (Quit: Leaving)
[0:17] * sprachgenerator (~sprachgen@130.202.135.20) Quit (Quit: sprachgenerator)
[0:17] * llpamies (~oftc-webi@pat.hitachigst.com) Quit (Quit: Page closed)
[0:18] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[0:27] * rendar (~I@host189-176-dynamic.37-79-r.retail.telecomitalia.it) Quit ()
[0:32] * steki (~steki@net146-179-245-109.mbb.telenor.rs) Quit (Ping timeout: 480 seconds)
[0:36] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[0:42] * sjusthm (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[0:43] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:44] * tremon (~aschuring@d594e6a3.dsl.concepts.nl) Quit (Quit: getting boxed in)
[0:47] * xarses (~andreww@12.164.168.117) has joined #ceph
[0:53] <Sysadmin88> http://www.zdnet.com/most-popular-open-source-cloud-projects-of-2014-7000032856/
[0:53] <Sysadmin88> :)
[0:53] <Sysadmin88> ceph is on there
[0:54] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[0:58] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:02] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[1:03] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:04] <carmstrong> does anyone know if host names are a hard requirement? or can I always specify IP addresses?
[1:04] <carmstrong> (for OSDs and mons)
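As far as the cluster configuration itself goes, the monmap stores monitor addresses as plain IPs, so hostnames are not a hard requirement there. A minimal ceph.conf sketch using only IP addresses (the fsid and addresses below are placeholders, not from this log):

```ini
[global]
fsid = 00000000-0000-0000-0000-000000000000
# Monitors listed by IP; the monmap records IPs, not names.
mon_host = 192.0.2.11,192.0.2.12,192.0.2.13

[mon.a]
mon addr = 192.0.2.11:6789
```

Tools layered on top (ceph-deploy's ssh handling, for instance) may still want resolvable hostnames even when the cluster config uses IPs.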
[1:11] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[1:13] * rweeks (~rweeks@pat.hitachigst.com) Quit (Quit: Leaving)
[1:15] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:25] * oms101 (~oms101@p20030057EA00B400C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:26] * joef (~Adium@2601:9:280:f2e:e844:8bdc:dcaf:96a4) has joined #ceph
[1:30] * rwheeler (~rwheeler@182.48.222.242) has joined #ceph
[1:30] * Eco (~Eco@184-208-89-37.pools.spcsdns.net) Quit (Remote host closed the connection)
[1:33] * sjusthm (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) has joined #ceph
[1:33] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[1:34] * oms101 (~oms101@p20030057EA003A00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:35] * Eco (~Eco@107.36.128.74) has joined #ceph
[1:35] * rmoe (~quassel@12.164.168.117) Quit (Read error: Operation timed out)
[1:36] <grepory> it looks like ceph-deploy requires you to setup secure apt if you are deploying calamari with a local repository for your minions. is that the case? i should create a pgp key and sign the release file?
[1:38] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:bd73:b992:e8cd:b5b1) has joined #ceph
[1:40] * elder (~elder@50.250.6.142) Quit (Quit: Leaving)
[1:40] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[1:42] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[1:43] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[1:43] * KevinPerks (~Adium@2606:a000:80a1:1b00:70bd:13f5:e659:2a18) Quit (Ping timeout: 480 seconds)
[1:45] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[1:46] * ikrstic (~ikrstic@109-93-112-236.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[1:47] * Sysadmin88 (~IceChat77@054533bc.skybroadband.com) Quit (Quit: Some folks are wise, and some otherwise.)
[1:47] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[1:47] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[1:57] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:58] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[1:59] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:59] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[2:02] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Quit: Leaving...)
[2:07] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[2:08] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:09] * zack_dolby (~textual@e0109-114-22-13-4.uqwimax.jp) has joined #ceph
[2:12] * Eco (~Eco@107.36.128.74) Quit (Quit: Leaving)
[2:14] * ircolle is now known as ircolle-afk
[2:16] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[2:32] <dmick> grepory: are you talking about ceph-deploy calamari connect?
[2:32] <grepory> dmick: yes.
[2:33] <grepory> dmick: it just keeps telling me that there is no release.asc... i'd rather just not use a signed repository, as this is just a "hey what is calamari" that i'm doing
[2:35] <qhartman> trying to install firefly from the debian/ubuntu repos and I'm getting a 403 from apt on the package list, but I can grab it with wget just fine. Any ideas?
[2:38] * JC (~JC@46.189.28.185) Quit (Ping timeout: 480 seconds)
[2:38] <qhartman> looks like something wonky with the proxy config or a redirect or something that's making apt sad
[2:39] * angdraug (~angdraug@131.252.204.134) Quit (Ping timeout: 480 seconds)
[2:40] * JC (~JC@46.189.28.228) has joined #ceph
[2:41] <dmick> grepory: it seems like the behavior is controlled by cephdeploy.conf, the way I read the code
[2:41] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[2:41] <dmick> baseurl and gpgkey
[2:42] <dmick> I imagine you can tweak those and get key-less behavior. Our installation/testing has likely always used signed repos
[2:42] * Pedras (~Adium@216.207.42.129) Quit (Ping timeout: 480 seconds)
[2:42] <dmick> qhartman: proxies are evil
[2:42] <grepory> dmick: it is. i think i'm going to try gpgkey=<nothing> and see what happens
[2:42] <dmick> but yeah I'd probably diagnose what the issue is
[2:43] <dmick> grepory: or just remove it, the default is ''
[2:43] <qhartman> dmick, indeed, but I'm not using one on my end that I'm aware of
[2:43] <grepory> dmick: i tried removing it
[2:43] <grepory> dmick: no matter what i do it just says "Running command: sudo apt-key add release.asc"
[2:43] <grepory> so i'm trying to figure out what triggers that
[2:44] <dmick> qhartman: what's the actual error, and why do you think it's proxy-related?
[2:44] * angdraug (~angdraug@131.252.204.134) has joined #ceph
[2:44] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[2:44] <qhartman> dmick, because wget works
[2:45] <grepory> dmick: ahhh.... i see. if gpgkey isn't set, the debian repo installer expects release.asc to be a file in cwd on the remote host.
[2:45] <dmick> grepory: yeah, maybe the hosts/debian/install.py isn't prepared
[2:45] <grepory> dmick: exactly. :(
[2:45] <qhartman> dmick, any time I've seen this in the past and wget works, it's because some proxy / web load balancer/ something is sending a response that apt doesn't like.
[2:45] <dmick> qhartman: ok. proxy is certainly one of a lot of reasons
[2:46] <qhartman> Since I dropped in here I've gotten it to work on another machine, so it's definitely something with my ceph box's network path
[2:46] <qhartman> iow - it's my problem
[2:46] <qhartman> so, whee
[2:46] <qhartman> :D
[2:47] <grepory> dmick: as long as it can download a legit gpg key, it's happy. so i'm just going to point it at a key.
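The cephdeploy.conf approach dmick and grepory settle on above might look like the following; only the baseurl/gpgkey option names come from the discussion, while the section name and URLs are hypothetical:

```ini
# Hypothetical repo section in ~/.cephdeploy.conf. Pointing gpgkey at a
# reachable key avoids the local release.asc lookup that was failing above.
[calamari-minion]
baseurl = http://repo.example.com/calamari/minion
gpgkey = http://repo.example.com/release.asc
default = true
```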
[2:47] * joef (~Adium@2601:9:280:f2e:e844:8bdc:dcaf:96a4) Quit (Quit: Leaving.)
[2:47] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (Read error: Operation timed out)
[2:49] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[2:51] <dmick> k
[2:55] <qhartman> Aha, one of the other admins put an apt proxy config on this box and didn't tell me
[2:55] <qhartman> whee
[2:56] <qhartman> thanks for playing rubber ducky
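A quick way to find a surprise apt proxy like the one above, since apt honours config that wget ignores (standard Debian/Ubuntu apt config paths; the grep pattern is just a sketch):

```shell
# Dump apt's effective configuration and look for proxy directives,
# then check the config files themselves for Acquire::*::Proxy lines.
apt-config dump 2>/dev/null | grep -i proxy
grep -ri 'Acquire::.*Proxy' /etc/apt/apt.conf /etc/apt/apt.conf.d/ 2>/dev/null
```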
[2:58] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[2:58] * diegows (~diegows@190.190.5.238) has joined #ceph
[2:59] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[2:59] <dmick> arf arf
[2:59] <qhartman> heh
[3:00] <dmick> good instincts tho!
[3:00] <dmick> (you only get thrown from the same bull so many times before you start to change your stance)
[3:00] * infernixx (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[3:01] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Thanks for everything! :-) see you later :-))
[3:01] <qhartman> indeed so
[3:01] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Read error: Connection reset by peer)
[3:01] * infernixx is now known as infernix
[3:04] <qhartman> ok, now I have what I think is a more legit question
[3:05] <qhartman> I'm trying to upgrade from .79 to the latest .80.x (80.5 I think) and I seem to have gotten into a race condition wherein both the ceph and ceph-common packages think they own a file
[3:05] <qhartman> so neither one wants to upgrade
[3:06] <qhartman> I could easily resolve this by removing one or the other, but that would presumably stop the mon and OSDs that are on this machine
[3:06] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[3:06] <dmick> there's been changes there but the pkg directives are supposed to stop that
[3:06] <dmick> which 80.x, and which file(s)?
[3:06] <qhartman> dpkg: error processing archive /var/cache/apt/archives/ceph-common_0.80.5-1trusty_amd64.deb (--unpack):
[3:06] <qhartman> trying to overwrite '/etc/ceph/rbdmap', which is also in package ceph 0.79-0ubuntu1
[3:07] <dmick> looking
[3:07] <qhartman> and when I try to upgrade ceph, I get the inverse
[3:08] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[3:08] <qhartman> oh, when upgrading ceph, it's a different file it's fighting over:
[3:08] <qhartman> dpkg: error processing archive /var/cache/apt/archives/ceph_0.80.5-1trusty_amd64.deb (--unpack):
[3:08] <qhartman> trying to overwrite '/usr/bin/ceph-rest-api', which is also in package ceph-common 0.79-0ubuntu1
[3:10] <dmick> ceph 0.79-0ubuntu1 is interesting; it seems like that's a distro package?
[3:10] <qhartman> shouldn't be, I installed it using ceph-install script
[3:10] <dmick> (what distro/version is this?)
[3:10] <qhartman> I don't think ceph exists in ubuntu mainline
[3:10] <qhartman> ubuntu trusty (14.04)
[3:10] <dmick> oh yeah, it does
[3:10] <qhartman> hm, k
[3:10] <dmick> so on my trusty machine
[3:10] <dmick> there is a trusty update available
[3:11] <dmick> 0.80.1-0ubuntu1.1
[3:11] <qhartman> ah
[3:11] <dmick> I think that transition was handled in the ceph line at about 0.78-500
[3:11] <qhartman> go to that, then can probably switch
[3:11] * dmsimard_away is now known as dmsimard
[3:11] <dmick> but it could be that the ubuntu 0.79 didn't get that fix somehow
[3:12] <qhartman> yeah, makes sense
[3:12] * angdraug (~angdraug@131.252.204.134) Quit (Quit: Leaving)
[3:12] <qhartman> I'm surprised that the install script thing used the distro packages, it created the sources.list.d entry
[3:12] <qhartman> oh, but the entry in there was for dumpling, which would have been older
[3:13] <qhartman> derp
[3:13] <qhartman> that's it
[3:14] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (Quit: shimo)
[3:17] <qhartman> yeah, went back to the official packages and got up .80.1 just fine
[3:17] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[3:17] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[3:18] * dmsimard is now known as dmsimard_away
[3:18] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[3:19] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit ()
[3:20] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit ()
[3:21] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[3:25] * dmsimard_away is now known as dmsimard
[3:27] * sjusthm (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[3:29] * dmsimard is now known as dmsimard_away
[3:29] <dmick> cool
[3:31] <qhartman> alright, now to try re-adding the new packages and upgrading to .80.5
[3:31] <qhartman> (I'm trying to get past a bug that was supposedly fixed in .80.4)
[3:32] <qhartman> noooooooo
[3:32] <qhartman> same error
[3:32] <qhartman> womp womp
[3:32] <kraken> http://www.youtube.com/watch?v=_-GaXa8tSBE
[3:33] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:33] * Concubidated (~Adium@66-87-67-237.pools.spcsdns.net) Quit (Read error: No route to host)
[3:36] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[3:37] * Concubidated (~Adium@66-87-67-237.pools.spcsdns.net) has joined #ceph
[3:40] * dmick idly wonders who transmogrified that from "wah wahhhhh" to "womp womp". I see the latter all over the place and I don't know why
[3:40] <dmick> anyway
[3:40] <qhartman> It's an Archer reference
[3:40] <dmick> so this is ubuntu's 80.1 up to ceph 80.5 that's failing now?
[3:41] <qhartman> yeah
[3:41] <dmick> archer? really? I love that show
[3:41] <qhartman> they say "womp womp" in that all the time
[3:41] <kraken> http://www.sadtrombone.com/?play=true
[3:43] * tinklebear (~tinklebea@cpe-066-057-253-171.nc.res.rr.com) Quit (Quit: Nettalk6 - www.ntalk.de)
[3:44] <dmick> I guess you can always --force-something
[3:44] <qhartman> yeah, that's what I'm looking into
[3:44] <dmick> but I'm surprised the ubuntu versions are the old contents
[3:44] <qhartman> it seems that file is effectively empty anyway
[3:44] <qhartman> (it's all comments)
[3:44] <dmick> yeah
[3:45] * cok (~chk@46.30.211.29) Quit (Quit: Leaving.)
[3:46] <dmick> oh.
[3:46] <dmick> that's a newer change than I was thinking
[3:46] <dmick> and it looks like it didn't include a change to the Obsoletes/Breaks
[3:46] <dmick> <sadface>
[3:47] <qhartman> how's that go? womp womp?
[3:47] <kraken> http://www.youtube.com/watch?v=_-GaXa8tSBE
[3:47] <qhartman> :-\
[3:47] <qhartman> :D
[3:47] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[3:47] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:48] <qhartman> so after attempting to install
[3:48] <qhartman> a "dpkg -i --force-overwrite /var/cache/apt/archives/ceph-common_0.80.5-1trusty_amd64.deb"
[3:48] <dmick> dpkg....living dangerously...
[3:49] <qhartman> will get it going, then an "aptitude install ceph" will also work
[3:49] <qhartman> needs must
[3:51] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[3:52] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:53] * zhaochao (~zhaochao@111.204.252.1) has joined #ceph
[3:55] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:56] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[3:57] <qhartman> welp, mons are rolling .80.5 (supposedly)
[3:57] <qhartman> on to the osds....
[4:00] * nhm (~nhm@107-1-123-195-ip-static.hfc.comcastbusiness.net) has joined #ceph
[4:00] * ChanServ sets mode +o nhm
[4:03] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[4:03] <dmick> qhartman: http://tracker.ceph.com/issues/9233
[4:04] <qhartman> hooray!
[4:04] <qhartman> my pain served a purpose!
[4:04] <qhartman> :D
[4:04] <qhartman> you're the best dmick
[4:04] <dmick> if you're going to go through that, and drag me with you, *some*thing's gotta come out of it :)
[4:04] <lurbs> As far as I'm aware he's the only dmick.
[4:05] <dmick> www.instantrimshot.com
[4:05] <qhartman> heh
[4:05] <dmick> I'm not tho
[4:05] <qhartman> I know a mick-d
[4:05] <qhartman> but that's different
[4:05] <dmick> http://www.danmick.com/ <- not me
[4:09] <qhartman> alright, well, presumably that's done
[4:09] <qhartman> and I no longer will have that xfs bug corrupting my pgs
[4:10] <qhartman> wheee
[4:10] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:11] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[4:11] <qhartman> thanks again for the assistance dmick, and for making sure my suffering will help the world in some small way
[4:11] <dmick> yw
[4:11] <qhartman> All in all, I'm pretty happy with this upgrade process
[4:11] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[4:11] <qhartman> aside from the packaging gotchas, it was remarkably smooth
[4:11] <qhartman> makes me think I must have done something wrong
[4:11] <qhartman> >_>
[4:11] <dmick> we do actually test it
[4:11] <qhartman> <_<
[4:17] * haomaiwa_ (~haomaiwan@203.69.59.199) has joined #ceph
[4:17] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[4:18] * haomaiwang (~haomaiwan@124.248.208.2) has joined #ceph
[4:25] * haomaiwa_ (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[4:25] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[4:29] * apolloJess (~Thunderbi@202.60.8.252) has joined #ceph
[4:31] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[4:34] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has joined #ceph
[4:39] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[4:39] * haomaiwang (~haomaiwan@124.248.208.2) Quit (Ping timeout: 480 seconds)
[4:42] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[4:42] * Concubidated (~Adium@66-87-67-237.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[4:47] * zhangdongmao__ (~zhangdong@203.192.156.9) has joined #ceph
[4:49] * djh-work is now known as Guest464
[4:49] * Guest464 (~daniel@141.52.73.152) Quit (Read error: Connection reset by peer)
[4:49] * djh-work (~daniel@141.52.73.152) has joined #ceph
[4:52] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[5:01] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[5:08] * michalefty (~micha@ip250461f1.dynamic.kabel-deutschland.de) has joined #ceph
[5:08] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[5:08] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[5:17] * shang (~ShangWu@175.41.48.77) has joined #ceph
[5:18] * rwheeler (~rwheeler@182.48.222.242) Quit (Quit: Leaving)
[5:21] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[5:21] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[5:28] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[5:33] * adamcrume (~quassel@2601:9:6680:47:b48f:1c4e:beb1:4aa6) Quit (Remote host closed the connection)
[5:45] * Eco (~Eco@107.36.128.74) has joined #ceph
[5:56] * nhm (~nhm@107-1-123-195-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[5:56] * dis is now known as Guest471
[5:57] * dis (~dis@109.110.67.120) has joined #ceph
[5:57] * michalefty (~micha@ip250461f1.dynamic.kabel-deutschland.de) has left #ceph
[5:58] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:58] * Guest471 (~dis@109.110.66.165) Quit (Ping timeout: 480 seconds)
[5:59] * vovo_ (~vovo@88.130.193.115) has joined #ceph
[6:00] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[6:00] * rwheeler (~rwheeler@209.132.188.8) has joined #ceph
[6:05] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:06] * vovo (~vovo@i59F7A45B.versanet.de) Quit (Ping timeout: 480 seconds)
[6:08] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[6:12] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[6:13] * Nats_ (~natscogs@2001:8000:200c:0:c11d:117a:c167:16df) has joined #ceph
[6:14] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[6:17] * vbellur (~vijay@122.167.169.180) has joined #ceph
[6:21] * Nats (~natscogs@2001:8000:200c:0:8dd6:feef:e0d5:bf65) Quit (Ping timeout: 480 seconds)
[6:23] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:bd73:b992:e8cd:b5b1) Quit (Quit: Leaving.)
[6:27] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[6:27] * ashishchandra (~ashish@49.32.0.170) has joined #ceph
[6:31] * vbellur (~vijay@122.167.169.180) Quit (Quit: Leaving.)
[6:36] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[6:43] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[6:50] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[6:53] * drankis (~drankis__@89.111.13.198) Quit (Ping timeout: 480 seconds)
[6:55] * Concubidated (~Adium@66.87.131.234) has joined #ceph
[6:57] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:00] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[7:01] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit ()
[7:13] * anticw_ (~anticw@c-24-5-80-188.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[7:13] * john_ (~john@2601:9:6c80:7df:f085:dd1d:cb5a:dbbd) Quit (Ping timeout: 480 seconds)
[7:18] * anticw (~anticw@c-24-5-80-188.hsd1.ca.comcast.net) has joined #ceph
[7:20] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:25] * michalefty (~micha@p20030071CE0BE099D516D557AC2661B6.dip0.t-ipconnect.de) has joined #ceph
[7:26] * michalefty (~micha@p20030071CE0BE099D516D557AC2661B6.dip0.t-ipconnect.de) has left #ceph
[7:27] * rdas (~rdas@121.244.87.115) has joined #ceph
[7:31] * john_ (~john@2601:9:6c80:7df:305d:2753:8f8a:6e8) has joined #ceph
[7:40] * blackmen (~Ajit@121.244.87.115) has joined #ceph
[7:47] * JC (~JC@46.189.28.228) Quit (Quit: Leaving.)
[7:57] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[7:59] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[8:00] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[8:09] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[8:10] * steki (~steki@93-87-139-17.dynamic.isp.telekom.rs) has joined #ceph
[8:10] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[8:10] * BManojlovic (~steki@net96-176-245-109.mbb.telenor.rs) has joined #ceph
[8:11] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[8:12] * BManojlovic (~steki@net96-176-245-109.mbb.telenor.rs) Quit ()
[8:12] * BManojlovic (~steki@93-87-139-17.dynamic.isp.telekom.rs) has joined #ceph
[8:12] * steki (~steki@93-87-139-17.dynamic.isp.telekom.rs) Quit (Read error: Connection reset by peer)
[8:12] * zhangdongmao__ (~zhangdong@203.192.156.9) Quit (Remote host closed the connection)
[8:12] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[8:13] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[8:15] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Operation timed out)
[8:24] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[8:24] * yanzheng (~zhyan@171.221.137.238) Quit (Ping timeout: 480 seconds)
[8:24] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[8:25] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:27] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[8:27] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:27] * BManojlovic (~steki@93-87-139-17.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[8:40] * haomaiwa_ (~haomaiwan@203.69.59.199) has joined #ceph
[8:40] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[8:41] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:44] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:44] * bandrus (~oddo@216.57.72.205) Quit (Read error: Connection reset by peer)
[8:45] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[8:45] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[8:48] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[8:49] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[8:52] * haomaiwa_ (~haomaiwan@203.69.59.199) Quit (Read error: Operation timed out)
[8:53] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[8:53] * ChanServ sets mode +v andreask
[9:01] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:05] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[9:08] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:08] * Sysadmin88 (~IceChat77@054533bc.skybroadband.com) has joined #ceph
[9:09] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Remote host closed the connection)
[9:11] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[9:11] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[9:12] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[9:18] * analbeard (~shw@support.memset.com) has joined #ceph
[9:21] * phoenix (~phoenix@vpn1.safedata.ru) has joined #ceph
[9:22] * vbellur (~vijay@209.132.188.8) has joined #ceph
[9:23] <phoenix> hi, i have a little problem. need help. ceph -s:
[9:23] <phoenix> cluster 24f5894b-b8d8-4454-be0e-1034153fb077
[9:23] <phoenix> health HEALTH_WARN 249 pgs degraded; 422 pgs stuck unclean; recovery 52733/4887324 objects degraded (1.079%)
[9:23] <phoenix> monmap e1: 2 mons at {a=10.1.9.51:6789/0,b=10.1.9.52:6789/0}, election epoch 28, quorum 0,1 a,b
[9:23] <phoenix> mdsmap e49: 1/1/1 up {0=a=up:active}, 1 up:standby
[9:23] <phoenix> osdmap e18455: 38 osds: 25 up, 25 in
[9:23] <phoenix> pgmap v1186675: 7296 pgs, 3 pools, 4301 GB data, 1590 kobjects
[9:23] <phoenix> 12862 GB used, 68132 GB / 80995 GB avail
[9:23] <phoenix> 52733/4887324 objects degraded (1.079%)
[9:23] <phoenix> 249 active+degraded
[9:23] <phoenix> 173 active+remapped
[9:23] <longguang> have you succeeded in building rbd.ko on centos6.*
[9:23] <phoenix> 6874 active+clean
[9:23] <phoenix> client io 17611 kB/s rd, 1 op/s
[9:23] <phoenix> any ideas?
[9:23] <longguang> have you succeeded in building rbd.ko on centos6.*
[9:23] <phoenix> how can i repair the ceph cluster?
[9:26] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[9:26] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[9:26] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[9:27] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[9:28] * oro (~oro@2001:620:20:16:3196:689e:5894:cae9) has joined #ceph
[9:28] * oro_ (~oro@2001:620:20:16:3196:689e:5894:cae9) has joined #ceph
[9:30] <tnt> phoenix: well, you seem to be missing 13 OSDs ....
[9:31] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[9:31] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[9:32] * DV_ (~veillard@veillard.com) has joined #ceph
[9:33] <longguang> from which version of centos does it ship with ceph.ko and rbd.ko?
[9:33] * mourgaya (~kvirc@80.124.164.139) has joined #ceph
[9:35] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[9:36] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[9:38] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (Ping timeout: 480 seconds)
[9:38] * zack_dolby (~textual@e0109-114-22-13-4.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:39] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[9:40] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[9:40] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:42] * andreask (~andreask@zid-vpnn022.uibk.ac.at) has joined #ceph
[9:42] * ChanServ sets mode +v andreask
[9:43] * andreask (~andreask@zid-vpnn022.uibk.ac.at) has left #ceph
[9:44] * Concubidated (~Adium@66.87.131.234) Quit (Quit: Leaving.)
[9:44] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: Lost terminal)
[9:46] * jtang_ (~jtang@80.111.83.231) Quit (Ping timeout: 480 seconds)
[9:52] * zack_dolby (~textual@e0109-114-22-13-4.uqwimax.jp) has joined #ceph
[9:55] * ikrstic (~ikrstic@109-93-112-236.dynamic.isp.telekom.rs) has joined #ceph
[10:00] * vbellur (~vijay@209.132.188.8) Quit (Read error: Operation timed out)
[10:00] <phoenix> tnt i know, the server is broken and all its osds are down. but the degraded % is not going down.
[10:05] <tnt> phoenix: pastebin ceph osd tree
[10:06] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[10:06] * linjan (~linjan@176.195.82.123) has joined #ceph
[10:06] * jordanP (~jordan@78.193.36.209) has joined #ceph
[10:06] <phoenix> ceph osd tree
[10:06] <phoenix> # id weight type name up/down reweight
[10:06] <phoenix> -1 1 root default
[10:06] <phoenix> -3 1 rack unknownrack
[10:06] <phoenix> -2 1 host datanode1
[10:06] <phoenix> 0 1 osd.0 up 1
[10:06] <phoenix> 1 1 osd.1 up 1
[10:07] <phoenix> 2 1 osd.2 up 1
[10:07] <phoenix> 3 1 osd.3 up 1
[10:07] <phoenix> 4 1 osd.4 up 1
[10:07] <phoenix> 5 1 osd.5 up 1
[10:07] <phoenix> 6 1 osd.6 up 1
[10:07] <phoenix> 7 1 osd.7 up 1
[10:07] <phoenix> 8 1 osd.8 up 1
[10:07] <phoenix> 9 1 osd.9 up 1
[10:07] <phoenix> 10 1 osd.10 up 1
[10:07] <phoenix> 11 1 osd.11 up 1
[10:07] <phoenix> -4 1 host datanode2
[10:07] <phoenix> 12 1 osd.12 down 0
[10:07] <phoenix> 13 1 osd.13 up 1
[10:07] <phoenix> 14 1 osd.14 up 1
[10:07] <phoenix> 15 1 osd.15 up 1
[10:07] <phoenix> 16 1 osd.16 up 1
[10:07] <phoenix> 17 1 osd.17 up 1
[10:07] <phoenix> 18 1 osd.18 up 1
[10:07] <phoenix> -5 1 host datanode3
[10:07] <phoenix> 19 1 osd.19 up 1
[10:07] <phoenix> 20 1 osd.20 up 1
[10:07] <phoenix> 21 1 osd.21 up 1
[10:07] <phoenix> 22 1 osd.22 up 1
[10:07] <phoenix> 23 1 osd.23 up 1
[10:07] <phoenix> 24 1 osd.24 up 1
[10:07] <phoenix> 25 1 osd.25 up 1
[10:07] <phoenix> -6 1 host datanode4
[10:07] <phoenix> 26 1 osd.26 down 0
[10:07] <phoenix> 27 1 osd.27 down 0
[10:07] <phoenix> 28 1 osd.28 down 0
[10:07] <phoenix> 29 1 osd.29 down 0
[10:07] <phoenix> 30 1 osd.30 down 0
[10:07] <phoenix> 31 1 osd.31 down 0
[10:07] <phoenix> 32 1 osd.32 down 0
[10:07] <phoenix> 33 1 osd.33 down 0
[10:07] <phoenix> 34 1 osd.34 down 0
[10:07] <phoenix> 35 1 osd.35 down 0
[10:07] <phoenix> 36 1 osd.36 down 0
[10:07] <phoenix> 37 1 osd.37 down 0
[10:08] <kfei> pastebin please
[10:08] <tnt> I did say _pastebin_ ...
[10:09] <kfei> tnt, didn't see :p
[10:12] * vbellur (~vijay@121.244.87.117) has joined #ceph
[10:13] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[10:13] <tnt> kfei: oh that was meant for phoenix :)
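Given the tree phoenix pasted, the usual next step is to mark the dead OSDs out so CRUSH re-places the degraded PGs on the survivors. A sketch that pulls the down ids out of `ceph osd tree` output (assumes the column layout shown above, and that the remaining hosts have room for the re-placed replicas):

```shell
# Print the OSDs reported down by `ceph osd tree`, then mark each one
# out so recovery can re-replicate the degraded PGs elsewhere.
ceph osd tree | awk '$3 ~ /^osd\./ && $4 == "down" { print $3 }' |
while read -r osd; do
    echo ceph osd out "${osd#osd.}"   # drop `echo` to actually run it
done
```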
[10:14] * bjornar (~bjornar@ns3.uniweb.no) has joined #ceph
[10:18] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[10:20] * Scar3cr0w (~Scar3cr0w@173-13-173-53-sfba.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[10:25] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[10:25] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[10:30] * jordanP (~jordan@78.193.36.209) Quit (Quit: Leaving)
[10:30] * jordanP (~jordan@78.193.36.209) has joined #ceph
[10:31] * zack_dolby (~textual@e0109-114-22-13-4.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:33] * saurabh (~saurabh@209.132.188.8) has joined #ceph
[10:34] * hijacker (~hijacker@213.91.163.5) Quit (Read error: Connection reset by peer)
[10:34] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[10:41] * saurabh (~saurabh@209.132.188.8) Quit (Ping timeout: 480 seconds)
[10:42] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[10:48] * zack_dolby (~textual@e0109-114-22-13-4.uqwimax.jp) has joined #ceph
[10:49] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[10:54] * zack_dolby (~textual@e0109-114-22-13-4.uqwimax.jp) Quit (Read error: Operation timed out)
[10:55] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[10:58] <s3an2> I have an interesting error this morning ' 1/25966427 unfound'
[11:00] * darkling (~hrm@00012bd0.user.oftc.net) has joined #ceph
[11:04] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[11:07] * rendar (~I@host44-179-dynamic.56-79-r.retail.telecomitalia.it) has joined #ceph
[11:08] * cok (~chk@2a02:2350:18:1012:54fc:bc28:ba01:1bf6) has joined #ceph
[11:11] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[11:11] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[11:12] <phoenix> ok i`m back
[11:13] <phoenix> any ideas?
[11:13] * nhm (~nhm@107-1-123-195-ip-static.hfc.comcastbusiness.net) has joined #ceph
[11:13] * ChanServ sets mode +o nhm
[11:16] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[11:16] * ChanServ sets mode +v andreask
[11:16] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[11:17] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[11:19] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:24] * vmx (~vmx@pD955C9BC.dip0.t-ipconnect.de) has joined #ceph
[11:28] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[11:30] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[11:33] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[11:38] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:41] * pressureman (~pressurem@62.217.45.26) Quit (Quit: Ex-Chat)
[11:42] * garphy`aw is now known as garphy
[11:46] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[11:50] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Ping timeout: 480 seconds)
[11:56] * nhm (~nhm@107-1-123-195-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[11:58] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[12:05] * zack_dolby (~textual@e0109-114-22-31-116.uqwimax.jp) has joined #ceph
[12:07] * zack_dolby (~textual@e0109-114-22-31-116.uqwimax.jp) Quit ()
[12:09] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:09] * nhm (~nhm@107-1-123-195-ip-static.hfc.comcastbusiness.net) has joined #ceph
[12:09] * ChanServ sets mode +o nhm
[12:11] * jordanP (~jordan@78.193.36.209) Quit (Quit: Leaving)
[12:12] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[12:13] * thomnico (~thomnico@80-254-69-26.dynamic.monzoon.net) has joined #ceph
[12:15] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[12:17] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[12:23] * Scar3cr0w (~Scar3cr0w@173-13-173-53-sfba.hfc.comcastbusiness.net) has joined #ceph
[12:23] * vbellur (~vijay@209.132.188.8) has joined #ceph
[12:26] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[12:35] * ade (~abradshaw@193.202.255.218) has joined #ceph
[12:40] * rendar (~I@host44-179-dynamic.56-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[12:40] * steveeJ (~junky@client156.amh.kn.studentenwohnheim-bw.de) has joined #ceph
[12:40] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[12:41] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[12:41] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[12:43] * rendar (~I@host44-179-dynamic.56-79-r.retail.telecomitalia.it) has joined #ceph
[12:43] <s3an2> Is there a method to track a missing object back to a pool, RBD, or rados object?
[12:47] * Scar3cr0w (~Scar3cr0w@173-13-173-53-sfba.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[12:53] * thomnico (~thomnico@80-254-69-26.dynamic.monzoon.net) Quit (Ping timeout: 480 seconds)
[12:56] <ashishchandra> phoenix: hey, can you paste the output of "ceph -s" again
[12:56] * Scar3cr0w (~Scar3cr0w@173-13-173-53-sfba.hfc.comcastbusiness.net) has joined #ceph
[13:00] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[13:03] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[13:06] * cok (~chk@2a02:2350:18:1012:54fc:bc28:ba01:1bf6) Quit (Quit: Leaving.)
[13:07] * yanzheng (~zhyan@171.221.137.238) Quit ()
[13:09] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[13:13] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[13:24] * KevinPerks (~Adium@2606:a000:80a1:1b00:119e:eaf3:e2a0:9451) has joined #ceph
[13:24] <tnt> s3an2: with the object name.
[13:30] * drankis (~drankis__@89.111.13.198) has joined #ceph
[13:34] <s3an2> Hi, I have the oid of the object but how does this help me? 'rb.0.4923.238e1f29.000000000a46'
[13:35] * fdmanana (~fdmanana@bl13-150-5.dsl.telepac.pt) Quit (Quit: Leaving)
[13:35] <tnt> "rb.0.4923.238e1f29" identifies which image it is. 000000000a46 is the offset in the image.
[13:36] <tnt> if you do a "rbd info xxxx" for every image you have, you will get the prefix. One of them will match "rb.0.4923.238e1f29"
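tnt's prefix-matching procedure can be sketched as below. Against a live cluster you would loop `rbd info` over `rbd ls <pool>`; here, so the snippet is self-contained, it parses a sample `rbd info` output (the image name and sizes are hypothetical):

```shell
# Map an unfound object id back to its RBD image (format-1 naming:
# "<block_name_prefix>.<offset>"). On a real cluster:
#   for img in $(rbd ls rbd); do rbd info rbd/$img | grep block_name_prefix; done
oid="rb.0.4923.238e1f29.000000000a46"
prefix="${oid%.*}"      # strip the trailing offset -> image prefix

# Hypothetical "rbd info vm-disk-1" output:
sample_info='rbd image '\''vm-disk-1'\'':
        size 40960 MB in 10240 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.4923.238e1f29
        format: 1'

info_prefix=$(printf '%s\n' "$sample_info" | awk '/block_name_prefix/ {print $2}')

if [ "$info_prefix" = "$prefix" ]; then
    echo "match: vm-disk-1 owns $oid"
fi
```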
[13:36] * Xiol (~Xiol@shrike.daneelwell.eu) has joined #ceph
[13:39] * zhaochao (~zhaochao@111.204.252.1) has left #ceph
[13:40] <s3an2> Ok I managed to find the RBD - thanks for your help with that.
[13:43] <s3an2> I am happy to mark that object as lost now that I know where it was lost from, so I'm running 'ceph pg 2.70f mark_unfound_lost revert' - this results in the below, so maybe I am missing something: 'Error EINVAL: pg has 1 unfound objects but we haven't probed all sources, not marking lost'
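That EINVAL usually means some OSD listed as a possible source of the unfound object has not been probed yet, typically because it is still down. `ceph pg <pgid> query` shows the candidates under "might_have_unfound". A minimal sketch, using a hypothetical abridged query fragment so it runs standalone:

```shell
# On a real cluster: ceph pg 2.70f query
# Entries still marked "osd is down" (or "querying") block mark_unfound_lost.
sample_query='"might_have_unfound": [
    { "osd": "12", "status": "already probed" },
    { "osd": "26", "status": "osd is down" }
]'

blockers=$(printf '%s\n' "$sample_query" | grep -c 'osd is down')
echo "unprobed sources: $blockers"
# Once osd.26 is revived, or declared gone with
#   ceph osd lost 26 --yes-i-really-mean-it
# the "ceph pg 2.70f mark_unfound_lost revert" command can proceed.
```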
[13:45] * apolloJess (~Thunderbi@202.60.8.252) Quit (Quit: apolloJess)
[13:46] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[13:48] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:49] * boichev (~boichev@213.169.56.130) Quit (Quit: Nettalk6 - www.ntalk.de)
[13:49] * boichev (~boichev@213.169.56.130) has joined #ceph
[13:50] * jordanP (~jordan@185.23.92.11) has joined #ceph
[13:51] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[13:51] * vbellur (~vijay@209.132.188.8) Quit (Read error: Operation timed out)
[13:53] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[13:54] * zack_dolby (~textual@e0109-49-132-40-132.uqwimax.jp) has joined #ceph
[13:55] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:58] * zack_dolby (~textual@e0109-49-132-40-132.uqwimax.jp) Quit ()
[13:58] * zack_dolby (~textual@e0109-49-132-40-132.uqwimax.jp) has joined #ceph
[14:02] * garphy is now known as garphy`aw
[14:02] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:02] * dmsimard_away is now known as dmsimard
[14:03] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[14:04] * vbellur (~vijay@121.244.87.117) has joined #ceph
[14:04] * karnan (~karnan@121.244.87.117) Quit (Read error: Operation timed out)
[14:06] * zack_dolby (~textual@e0109-49-132-40-132.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[14:13] * garphy`aw is now known as garphy
[14:13] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[14:14] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) has joined #ceph
[14:19] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[14:19] * ChanServ sets mode +v andreask
[14:20] * nhm (~nhm@107-1-123-195-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[14:25] <HauM1> any debian maintainer here?
[14:26] * JC (~JC@195.127.188.220) has joined #ceph
[14:27] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[14:28] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[14:31] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:32] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[14:34] * blackmen (~Ajit@121.244.87.115) Quit (Quit: Leaving)
[14:36] * dneary (~dneary@96.237.180.105) has joined #ceph
[14:36] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[14:41] * fdmanana (~fdmanana@bl13-150-5.dsl.telepac.pt) has joined #ceph
[14:44] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[14:49] * smiley_ (~smiley@pool-173-66-4-176.washdc.fios.verizon.net) Quit (Quit: smiley_)
[14:53] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[14:57] * michalefty (~micha@p20030071CE060320D516D557AC2661B6.dip0.t-ipconnect.de) has joined #ceph
[14:58] * michalefty (~micha@p20030071CE060320D516D557AC2661B6.dip0.t-ipconnect.de) has left #ceph
[14:58] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[15:07] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[15:14] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) Quit (Quit: Leaving.)
[15:14] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) has joined #ceph
[15:16] * dneary (~dneary@96.237.180.105) Quit (Ping timeout: 480 seconds)
[15:18] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:25] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:31] * garphy is now known as garphy`aw
[15:35] * ashishchandra (~ashish@49.32.0.170) Quit (Quit: Leaving)
[15:35] * vbellur (~vijay@209.132.188.8) has joined #ceph
[15:40] * oro_ (~oro@2001:620:20:16:3196:689e:5894:cae9) Quit (Ping timeout: 480 seconds)
[15:41] * oro (~oro@2001:620:20:16:3196:689e:5894:cae9) Quit (Ping timeout: 480 seconds)
[15:41] <flaf> Hi, I have 2 OSDs in my testing cluster. Each OSD has 1 disk 20G for the storage. In my conf, I set "osd pool default size = 2". On a ceph-client, I can create and mount a RADOS block device of 40GB, 80GB etc. Is it normal?
[15:45] <flaf> I thought I'd have an error message with a "size of the RADOS block device" > 20GB. Am I wrong?
[15:47] <singler> flaf: you can create image as large as you want
[15:47] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:49] <singler> for example, create a large image, and add more OSDs to the cluster before running out of space. That way you don't need to resize the image, etc.
[15:49] * linuxkidd_ (~linuxkidd@2001:420:2280:1272:b0bd:57cb:f985:4d95) has joined #ceph
[15:50] <singler> but some file systems may allocate fresh blocks instead of reusing freed ones, so you can run out of space quite fast (discard support can help you with that)
[15:52] <flaf> singler: Ah, ok. And in my case, beyond 20 GB used in my RADOS device, I will have some errors and problems, won't I?
[15:53] <singler> not sure about errors, but eventually you'll have problems (before running out of space, OSDs will block IO operations)
[15:53] <flaf> how can I be warned that I must add OSDs?
[15:53] <singler> disk space monitoring, ceph status monitoring
[15:55] <flaf> Ok, with "ceph -s --cluster my-cluster"? Because in my case, space monitoring doesn't show a warning (because I have a too-large device).
[15:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:56] <flaf> Thank you singler for the help. It's up to me to check the size.
[15:58] <singler> yes, no problem
[15:58] <singler> also there may be situations where data is distributed unevenly, so you may need to manually reweight OSDs (or run ceph osd reweight-by-utilization)
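The overcommit arithmetic behind singler's warning can be sketched for flaf's setup (2 OSDs of 20 GB each, "osd pool default size = 2"); the 0.85 near-full and 0.95 full ratios used below are the usual defaults, at which Ceph warns and then blocks writes:

```shell
# Thin-provisioned RBD images can be created at any size; the real limit
# is raw capacity divided by the replication factor.
raw_gb=40          # 2 OSDs x 20 GB
replicas=2
usable_gb=$((raw_gb / replicas))
nearfull_gb=$((usable_gb * 85 / 100))   # default mon_osd_nearfull_ratio 0.85
full_gb=$((usable_gb * 95 / 100))       # default mon_osd_full_ratio 0.95
echo "usable: ${usable_gb} GB, HEALTH_WARN at ~${nearfull_gb} GB, writes blocked at ~${full_gb} GB"
```

So a 40 GB or 80 GB image mounts fine, but the cluster degrades once roughly 17 GB of data lands on it, well before the filesystem inside the image thinks it is full.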
[16:02] * elder (~elder@50.250.6.142) has joined #ceph
[16:02] * rwheeler (~rwheeler@209.132.188.8) Quit (Quit: Leaving)
[16:03] <flaf> The only thing that worries me is that I'm afraid to be notified too late (when I can see something wrong with "ceph -s", it's too late and my fs in my rbd is already crashed).
[16:05] <flaf> I think it's dangerous to define an RBD image that is too large.
[16:06] <darkling> It sounds like a perfectly normal storage overcommit situation, with perfectly normal storage overcommit problems. :)
[16:08] <flaf> Ah ok. :)
[16:09] <flaf> thx
[16:11] <steveeJ> does anyone know how ceph handles it when one osd is in both the cache tier and the backing tier?
[16:12] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[16:12] <steveeJ> theoretically, it wouldn't have to copy the data again if it's written-back to the same osd
[16:14] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[16:15] * vbellur (~vijay@209.132.188.8) Quit (Quit: Leaving.)
[16:15] * Eco (~Eco@107.36.128.74) Quit (Remote host closed the connection)
[16:20] <loicd> mo-: thanks for helping fix the calculation mistakes in the reliability mail thread ;-)
[16:21] * darkling (~hrm@00012bd0.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:22] <mo-> yw :)
[16:23] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[16:26] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[16:26] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[16:26] * mourgaya (~kvirc@80.124.164.139) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[16:28] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[16:29] * nhm (~nhm@nat-pool-bos-u.redhat.com) has joined #ceph
[16:29] * ChanServ sets mode +o nhm
[16:29] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[16:34] * darkling (~hrm@00012bd0.user.oftc.net) has joined #ceph
[16:34] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[16:34] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[16:35] * stephan (~stephan@62.217.45.26) Quit (Ping timeout: 480 seconds)
[16:37] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Ping timeout: 480 seconds)
[16:44] * yanzheng (~zhyan@171.221.137.238) has joined #ceph
[16:47] * john_ (~john@2601:9:6c80:7df:305d:2753:8f8a:6e8) Quit (Quit: Leaving)
[16:50] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) Quit (Quit: Leaving.)
[16:51] * yanzheng (~zhyan@171.221.137.238) Quit (Quit: This computer has gone to sleep)
[16:52] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:53] * i_m (~ivan.miro@gbibp9ph1--blueice3n2.emea.ibm.com) has joined #ceph
[16:57] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) has joined #ceph
[16:57] * MrBy2 (~MrBy@85.115.23.46) has joined #ceph
[16:57] * Eco (~Eco@99-6-86-41.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[17:01] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[17:02] <runfromnowhere> Hmm, I have a cephfs directory exported via CIFS/samba... after a while CephFS locks up, and only shutting down the mds server and having another one take over allows it to recover.
[17:03] <runfromnowhere> Is this just a "cephFS is not ready for prime time" issue?
[17:03] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) has joined #ceph
[17:04] * MrBy (~MrBy@85.115.23.38) Quit (Ping timeout: 480 seconds)
[17:05] * reed (~reed@209.163.164.50) has joined #ceph
[17:05] * i_m (~ivan.miro@gbibp9ph1--blueice3n2.emea.ibm.com) Quit (Quit: Leaving.)
[17:07] * angdraug (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) has joined #ceph
[17:07] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[17:09] * reed (~reed@209.163.164.50) Quit ()
[17:09] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:09] <sage> runfromnowhere: broadly, sure, but that's not a known bug, so we would love it if you could generate some logs for us of the hang. it's reproducible i take it?
[17:09] <sage> are you using the samba libcephfs plugin, or is samba running on top of a kernel or fuse mount?
[17:10] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) Quit (Ping timeout: 480 seconds)
[17:10] * johntwilkins (~john@2601:9:6c80:7df:305d:2753:8f8a:6e8) has joined #ceph
[17:10] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) has joined #ceph
[17:12] * reed (~reed@209.163.164.50) has joined #ceph
[17:15] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[17:17] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[17:20] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[17:23] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[17:24] * RameshN (~rnachimu@101.222.249.254) has joined #ceph
[17:25] <runfromnowhere> sage: Right now samba is running on top of a kernel mount. Would it maybe perform better using libcephfs?
[17:25] <runfromnowhere> I'm not sure if I can generate logs because I can't predict WHEN it will hang....it just always seems to eventually
[17:25] <runfromnowhere> Spawning tons of smbd processes until death
[17:28] <sage> when it does hang, capturing a dump of the mds cache will be a start
[17:28] <sage> ceph mds tell 0 dumpcache /tmp/foo.txt
[17:28] <sage> and open a tracker.ceph.com bug and attach the dump to it
[17:30] <runfromnowhere> Hmm
[17:30] <runfromnowhere> Does that data persist?
[17:30] <runfromnowhere> I shut down the mds daemon after it hung
[17:30] <runfromnowhere> An mds daemon on another host has picked up the work
[17:30] <runfromnowhere> So it's currently sitting there, shutdown. Any way to get that data from the sleeping giant? Or do I have to wait until it's hung live again?
[17:31] <sage> has to still be live
[17:32] <sage> so, next time it hangs ...
[17:32] <runfromnowhere> Sounds good
[17:32] <sage> what kernel version?
[17:32] <sage> on the client?
[17:32] <runfromnowhere> 3.13.0-32-generic
[17:32] <runfromnowhere> Oh hmm
[17:32] <runfromnowhere> 3.13.0-30-generic
[17:32] <sage> there are lots of bug fixes since then. The latest kernel.org kernel is the best.
[17:33] <runfromnowhere> Hmm
[17:33] <runfromnowhere> Do those bug fixes make it to the FUSE implementation?
[17:33] <runfromnowhere> I'm running Ubuntu 14.04 and I'd rather not step out of line with what the distro packages if it's not absolutely necessary
[17:34] <runfromnowhere> So maybe running the FUSE cephfs implementation would be a good compromise between supported kernel version and bug fixes?
[17:34] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:35] * sprachgenerator (~sprachgen@130.202.135.20) has joined #ceph
[17:35] * gregsfortytwo2 (~Adium@2607:f298:a:607:218c:9514:5a00:f003) has joined #ceph
[17:36] * alram (~alram@38.122.20.226) has joined #ceph
[17:37] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:39] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:40] * gregsfortytwo (~Adium@2607:f298:a:607:a071:6a37:6d39:bde1) Quit (Ping timeout: 480 seconds)
[17:41] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[17:42] * thomnico_ (~thomnico@host-78-64-36-179.homerun.telia.com) has joined #ceph
[17:43] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:44] * Eco (~Eco@99-6-86-41.lightspeed.sntcca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[17:45] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[17:46] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[17:46] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) Quit (Ping timeout: 480 seconds)
[17:47] * DV_ (~veillard@veillard.com) Quit (Ping timeout: 480 seconds)
[17:47] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[17:48] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[17:48] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[17:51] * jmlowe (~Adium@2601:d:a800:511:104b:e8f4:27af:312d) has joined #ceph
[17:53] <darkling> Is the GNOME VFS layer known to do particularly silly things in reading metadata?
[17:53] * joef (~Adium@2620:79:0:131:c9dc:7c4b:11a2:e334) has joined #ceph
[17:54] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Quit: Ex-Chat)
[17:54] <darkling> I've got someone here with a directory on CephFS with 8000 files in it, and it takes a minute or so to list with ls (which is painful, but just about bearable),
[17:54] <darkling> but opening the directory with a GUI viewer just sits there with the spinner going.
[17:56] <jmlowe> Does anybody have opinions about sizing a ceph cluster to back 300 or so openstack nodes?
[17:56] * scuttlemonkey is now known as scuttle|afk
[17:56] <darkling> gvfsd-metadata just sits there at 100% CPU usage on the local machine. There's pretty much zero activity on the MDS.
[17:56] * scuttle|afk is now known as scuttlemonkey
[17:57] * scuttlemonkey is now known as scuttle|afk
[17:57] * scuttle|afk is now known as scuttlemonkey
[17:57] * Pedras (~Adium@216.207.42.140) has joined #ceph
[17:58] * rmoe (~quassel@12.164.168.117) has joined #ceph
[17:58] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[17:59] <jmlowe> darkling: when I was a gnome user I remember it doing dumb things that revolved around finding the type of every file and then attempting to generate some sort of thumbnail image representation of each and every file
[18:00] * linjan (~linjan@176.195.82.123) Quit (Read error: Operation timed out)
[18:00] <darkling> That might also be unhelpful behaviour. Particularly since half the files have a ".img" extension, but I bet it doesn't know much about MRI scans... :)
[18:01] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[18:03] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:11] * bandrus (~oddo@216.57.72.205) has joined #ceph
[18:13] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:13] * vbellur (~vijay@122.167.169.180) has joined #ceph
[18:14] * reed (~reed@209.163.164.50) Quit (Ping timeout: 480 seconds)
[18:14] * thomnico_ (~thomnico@host-78-64-36-179.homerun.telia.com) Quit (Ping timeout: 480 seconds)
[18:15] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[18:18] <lincolnb> darkling: i have similar problems with lots of files in a directory. what kernel are you running out of curiosity?
[18:18] * adamcrume (~quassel@50.247.81.99) Quit (Read error: Connection reset by peer)
[18:18] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[18:19] <jcsp> darkling: I have seen (admittedly long ago) GNOME bugs where it would spin CPU on thumbnailing, even on a local drive.
[18:22] * Eco (~Eco@107.43.84.86) has joined #ceph
[18:22] * angdraug (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[18:23] * JC (~JC@195.127.188.220) Quit (Quit: Leaving.)
[18:25] * linuxkidd_ (~linuxkidd@2001:420:2280:1272:b0bd:57cb:f985:4d95) Quit (Quit: Leaving)
[18:25] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: Lost terminal)
[18:26] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) has joined #ceph
[18:28] <darkling> lincolnb: 3.16
[18:29] <darkling> Turning off thumbnailing does help massively, so I'll suggest that people do that.
[18:29] <steveeJ> darkling: are you running any ceph components on that kernel?
[18:29] * rweeks (~rweeks@pat.hitachigst.com) has joined #ceph
[18:29] <darkling> It's still slow, but it's not insanely slow.
[18:33] * terje (~root@135.109.216.239) has joined #ceph
[18:33] * angdraug (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) has joined #ceph
[18:34] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:34] <darkling> steveeJ: Not on that machine. The OSDs and monitors are on a Centos/RHEL kernel, 3.10.0-123.6.3.el7.x86_64
[18:35] <darkling> (Sorry for being slow to reply, I had a user prod me in real life)
[18:36] <darkling> The MDS is also on that Centos/RHEL kernel.
[18:36] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) Quit (Read error: Connection reset by peer)
[18:36] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[18:36] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) has joined #ceph
[18:38] <darkling> The clients are all Gentoo with 3.16.0.
[18:38] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) Quit (Read error: Connection reset by peer)
[18:39] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[18:39] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) has joined #ceph
[18:40] * JC (~JC@46.189.28.234) has joined #ceph
[18:40] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[18:41] * tremon (~aschuring@d594e6a3.dsl.concepts.nl) has joined #ceph
[18:42] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[18:42] * terje (~root@135.109.216.239) Quit (Read error: Operation timed out)
[18:44] * RameshN (~rnachimu@101.222.249.254) Quit (Remote host closed the connection)
[18:44] <steveeJ> my ceph OSDs are being very memory-hungry right now. they're basically filling up my whole RAM. what could be causing this?
[18:46] <runfromnowhere> sage: When you have a minute I'm particularly interested in the fuse vs kernel client issue - is there anywhere I can see a changelog or something that will let me know how improved the cephfs fuse client is over the kernel?
[18:49] * ircolle-afk is now known as ircolle
[18:49] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) Quit (Read error: Connection reset by peer)
[18:49] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) has joined #ceph
[18:51] <Xiol> Hi gents. We have an unfound object which we're happy to mark as lost - we've been doing fairly aggressive maintenance which probably caused the loss in the first place - however, when trying to mark the PG as lost, we're getting the error "Error EINVAL: pg has 1 unfound objects but we haven't probed all sources, not marking lost", using command `ceph pg 2.70f mark_unfound_lost revert`
[18:51] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Operation timed out)
[18:55] * MACscr (~Adium@c-98-214-170-53.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[18:57] <brad_mssw> how do you determine how much storage space is used by a pool? ceph osd pool stats <name> just appears to give performance stats, and ceph osd lspools doesn't give it
[18:57] * srk (~oftc-webi@32.97.110.56) has joined #ceph
[19:00] * srk (~oftc-webi@32.97.110.56) Quit ()
[19:00] * vmx (~vmx@pD955C9BC.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[19:00] <brad_mssw> rbd -p <name> ls -l seems to give some size information ... however these are VM images; I'm assuming they are sparse images, but I only see the defined image size, not how much data is currently used
[19:03] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[19:03] * darkling (~hrm@00012bd0.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:04] * sadf (~oftc-webi@32.97.110.56) has joined #ceph
[19:04] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[19:05] * Concubidated (~Adium@66-87-131-234.pools.spcsdns.net) has joined #ceph
[19:05] * vbellur (~vijay@122.167.169.180) Quit (Quit: Leaving.)
[19:05] * srk_ (~oftc-webi@32.97.110.56) has joined #ceph
[19:05] * srk_ (~oftc-webi@32.97.110.56) Quit ()
[19:06] * sadf (~oftc-webi@32.97.110.56) Quit ()
[19:06] <steveeJ> brad_mssw: try rados df
[19:07] * ade (~abradshaw@193.202.255.218) Quit (Ping timeout: 480 seconds)
[19:07] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[19:08] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:09] <brad_mssw> steveeJ: thanks, that helps, now I just need to make sure I'm deciphering this properly
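`rados df` (like `ceph df`) reports per-pool usage in KB, but the exact column layout varies between releases, which is what makes deciphering it fiddly. A minimal sketch that converts the KB column of a sample of the plain-text output to GB (pool names and numbers below are made up):

```shell
# Hypothetical abridged "rados df" output; on a real cluster pipe the
# command itself into the awk filter instead.
sample='pool name       category       KB      objects
rbd             -          5242880        1280
images          -          1048576         256'

# Skip the header, print "<pool>: <GB>" (1 GB = 1048576 KB).
printf '%s\n' "$sample" | awk 'NR > 1 { printf "%s: %.1f GB\n", $1, $3 / 1048576 }'
```

Note this is space consumed by stored objects, so sparse RBD images only count their written extents, which is the number brad_mssw was after.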
[19:10] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[19:11] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[19:17] * thomnico (~thomnico@host-78-75-254-5.homerun.telia.com) Quit (Quit: Ex-Chat)
[19:17] * nwat (~textual@eduroam-238-17.ucsc.edu) has joined #ceph
[19:19] * nhm (~nhm@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[19:21] * scuttlemonkey is now known as scuttle|afk
[19:24] * vovo_ is now known as Vacuum
[19:27] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[19:29] * adamcrume (~quassel@2601:9:6680:47:45e8:53c8:8a75:fb5b) has joined #ceph
[19:30] * blackmen (~Ajit@42.104.14.210) has joined #ceph
[19:39] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Quit: sync && halt)
[19:46] * rendar (~I@host44-179-dynamic.56-79-r.retail.telecomitalia.it) Quit (Read error: Operation timed out)
[19:46] * rweeks (~rweeks@pat.hitachigst.com) Quit (Quit: Leaving)
[19:47] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:49] * sleinen1 (~Adium@2001:620:0:68::100) has joined #ceph
[19:49] * alram_ (~alram@38.122.20.226) has joined #ceph
[19:52] <steveeJ> whenever i try to cache-flush-evict-all, my osd crashes
[19:55] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[19:55] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[19:55] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:59] * rendar (~I@host44-179-dynamic.56-79-r.retail.telecomitalia.it) has joined #ceph
[20:00] * angdraug (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[20:03] * xarses (~andreww@12.164.168.117) has joined #ceph
[20:04] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[20:06] * angdraug (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) has joined #ceph
[20:08] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[20:08] * ChanServ sets mode +v andreask
[20:09] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[20:10] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[20:11] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[20:13] * angdraug_ (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) has joined #ceph
[20:13] * angdraug (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[20:16] * angdraug_ (~angdraug@50-196-3-97-static.hfc.comcastbusiness.net) Quit ()
[20:16] * sjustwork (~sam@2607:f298:a:607:7c12:c0ee:7ade:8759) has joined #ceph
[20:19] * rendar (~I@host44-179-dynamic.56-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:22] * rendar (~I@host44-179-dynamic.56-79-r.retail.telecomitalia.it) has joined #ceph
[20:24] * rldleblanc (~rdleblanc@74-220-196-62.unifiedlayer.com) Quit (Quit: Leaving.)
[20:27] * elder (~elder@50.250.6.142) Quit (Quit: Leaving)
[20:29] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has joined #ceph
[20:29] * BManojlovic (~steki@95.180.4.243) Quit (Ping timeout: 480 seconds)
[20:30] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[20:32] * Venturi (~Venturi@93-103-91-169.dynamic.t-2.net) has joined #ceph
[20:36] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has left #ceph
[20:38] * debian112 (~bcolbert@c-24-99-94-44.hsd1.ga.comcast.net) has joined #ceph
[20:40] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[20:41] * elder (~elder@50.250.6.142) has joined #ceph
[20:43] * johntwilkins (~john@2601:9:6c80:7df:305d:2753:8f8a:6e8) Quit (Quit: Leaving)
[20:45] * blackmen (~Ajit@42.104.14.210) Quit (Quit: Leaving)
[20:46] <debian112> hello all
[20:49] <debian112> can anyone recommend any ceph training classes?
[20:50] <jmlowe> <school of hard knocks joke here>
[20:52] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[20:56] <ircolle> debian112 - Hastexo, Inktank (Red Hat) and others offer training classes.
[20:59] * michalefty (~micha@ip250461f1.dynamic.kabel-deutschland.de) has joined #ceph
[21:00] * michalefty (~micha@ip250461f1.dynamic.kabel-deutschland.de) has left #ceph
[21:02] * alram_ (~alram@38.122.20.226) Quit (Read error: Operation timed out)
[21:04] <debian112> ircolle - tried calling, and sending an email and got nothing but crickets
[21:05] <debian112> just a thought. I have ceph going, but I just wanted more formal training, since my company offered to pay
[21:05] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[21:06] <ircolle> debian112 - you should talk to wschulze
[21:07] <wschulze> debian112: This is weird. I would like to get to the bottom of this. Please pm me at wolfgang@redhat.com.
[21:07] <debian112> ok thanks
[21:08] <debian112> wschulze: I just sent you an email
[21:09] * BManojlovic (~steki@net206-137-245-109.mbb.telenor.rs) has joined #ceph
[21:09] <debian112> Subject: Ceph
[21:09] * alram (~alram@38.122.20.226) has joined #ceph
[21:10] <wschulze> debian112: Calling you now… ;-)
[21:16] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[21:19] <hasues> When deploying a ceph cluster, if I have a single admin node and three osds, where should I deploy the monitors? across all the osds? or does that include the admin node?
[21:26] <kitz> hasues: I've got that setup and I put my MONs on my OSD nodes.
[21:26] <kitz> My admin node is just a VM.
[21:27] <hasues> kitz: Okay, I was confused because I thought I read somewhere "don't add mons to osds", so I wasn't sure where to put them. Do production clusters require even more hosts for the monitors?
[21:28] <steveeJ> i just played around with cache tiers. now i have a cache tier that has unevictable objects which can't be deleted
[21:28] <kitz> hasues: I think the trouble comes from not having fast enough system drives. If the MONs and OSDs are putting the system drive in contention due to logging then you get into real trouble.
[21:28] <hasues> kitz: good to know.
[21:28] <kitz> I'm using 2 SSDs in a RAID1 for my OSD system volumes.
[21:29] <hasues> kitz: Seems like I need to locate some good architecture documentation. The installation documentation doesn't cover this.
[21:30] <kitz> http://ceph.com/docs/master/start/hardware-recommendations/
[21:30] <hasues> kitz: Thanks, I'll read over this now.
[21:30] <kitz> hasues: np
[21:32] <kitz> hasues: there is another document (a pdf maybe) which directly correlates number of OSD processes to CPU models, etc. which will also be worth your while to find.
[21:32] <hasues> kitz: excellent, I'll practice Google fu.
[21:32] <kitz> :)
[21:33] * ade (~abradshaw@h31-3-227-203.host.redstation.co.uk) has joined #ceph
[21:33] <hasues> kitz: This one? http://ceph.com/presentations/20121102-ceph-day/20121102-cluster-design-deployment.pdf
[21:35] <kitz> hasues: ... no...
[21:35] <hasues> yeah, this appears to be a presentation, nevermind.
[21:38] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[21:38] <kitz> hasues: http://www.inktank.com/resources/inktank-hardware-configuration-guide/
[21:38] <kitz> that
[21:40] <hasues> kitz: Grabbed it. Thanks again.
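[Editor's note: the layout discussed above (MONs co-located on the three OSD nodes, driven from a separate admin node) can be sketched with ceph-deploy as it worked in this era. `node1`..`node3` and the disk name `sdb` are placeholder names, and this is only an outline under those assumptions, not a tested recipe:]

```shell
# Run from the admin node. node1..node3 are the three OSD hosts
# (hypothetical names), which will also carry the monitors.
ceph-deploy new node1 node2 node3        # declares all three as initial MONs
ceph-deploy install node1 node2 node3    # installs the ceph packages
ceph-deploy mon create-initial           # starts the MONs and gathers keys
ceph-deploy osd create node1:sdb node2:sdb node3:sdb   # one data disk per host
```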
[21:42] <Venturi> Is the number of OSDs that belong to a certain placement group dependent on how much replication we do within that PG?
[21:43] * [fred] (fred@earthli.ng) Quit (Ping timeout: 480 seconds)
[21:45] * steki (~steki@net6-138-245-109.mbb.telenor.rs) has joined #ceph
[21:45] <Venturi> I would also like to ask: is it possible within ceph to assign a PG to certain disk hardware? Let's say I would like to make three replicas within a PG that has all-SSD OSDs.
[21:45] <PerlStalker> Venturi: You can do some magic with the crushmap
[21:47] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[21:48] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[21:48] * scuttle|afk is now known as scuttlemonkey
[21:49] * rendar (~I@host44-179-dynamic.56-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:49] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[21:49] * ChanServ sets mode +v andreask
[21:50] * BManojlovic (~steki@net206-137-245-109.mbb.telenor.rs) Quit (Ping timeout: 480 seconds)
[21:51] * adrian (~abradshaw@80-72-52-54.cmts.powersurf.li) has joined #ceph
[21:52] * ade (~abradshaw@h31-3-227-203.host.redstation.co.uk) Quit (Ping timeout: 480 seconds)
[21:53] * rendar (~I@host30-177-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[21:56] <kitz> Venturi: the replication is set at a per pool level, but yes, that is what controls the number of OSDs.
[21:57] <kitz> Venturi: Yes, you can use CRUSH to specify that a particular pool is mapped to specific hardware.
[21:57] <kitz> http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
[21:58] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[21:59] <Venturi> Thank you. I really wanted a nice picture to visualize PGs, pools and all that stuff, and I found one: http://karan-mj.blogspot.com/2014/01/how-data-is-stored-in-ceph-cluster.html
[22:03] * dmsimard is now known as dmsimard_away
[22:11] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[22:11] * adrian (~abradshaw@80-72-52-54.cmts.powersurf.li) Quit (Ping timeout: 480 seconds)
[22:12] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[22:20] * ade (~abradshaw@80-72-52-54.cmts.powersurf.li) has joined #ceph
[22:21] * ade (~abradshaw@80-72-52-54.cmts.powersurf.li) Quit (Remote host closed the connection)
[22:22] <Venturi> Does Ceph already implement quotas within RadosGW?
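[Editor's note: the question went unanswered in channel. RGW quota support landed around the Firefly (0.80) release, managed through radosgw-admin; a sketch, where the uid and the limits are example values:]

```shell
# Set and enable a per-user quota (--quota-scope can also be "bucket").
radosgw-admin quota set --quota-scope=user --uid=johndoe \
        --max-size-kb=1048576 --max-objects=10000
radosgw-admin quota enable --quota-scope=user --uid=johndoe
radosgw-admin user info --uid=johndoe    # the quota settings show up here
```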
[22:27] * tremon (~aschuring@d594e6a3.dsl.concepts.nl) Quit (Quit: getting boxed in)
[22:52] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[22:57] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[22:59] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[22:59] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[22:59] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[23:00] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[23:03] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[23:04] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[23:05] <bens> is there a string I can send to an OSD via netcat that will give me a status?
[23:05] * angdraug (~angdraug@131.252.204.134) has joined #ceph
[23:05] <bens> an ok-test? I am having connectivity issues and want a generic test I can do from almost anywhere
[23:06] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[23:13] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[23:14] <seapasul1i> I know when you connect to one it outputs a motd
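[Editor's note: as seapasul1i says, a Ceph daemon does greet new connections — the messenger sends a short plaintext banner (of the form `ceph v027`) as soon as the TCP session opens, so a generic reachability test only needs to connect and read a few bytes. A minimal sketch in Python; the `probe_osd` helper name is made up, and host/port would be the OSD's public address:]

```python
import socket

def probe_osd(host, port, timeout=5.0):
    """Open a TCP connection to an OSD's public port and return
    whatever greeting the peer sends.  A reachable Ceph daemon's
    messenger sends a short ASCII banner on accept."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.settimeout(timeout)
        try:
            return conn.recv(16)   # enough for the banner
        except socket.timeout:
            return b""             # connected, but nothing said
```

For a richer check from the OSD host itself, the admin socket avoids the network entirely, e.g. `ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok version`.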
[23:18] <steveeJ> i have a couple rules to control which host has the primary osd for the applied pool. when i switch rulesets to flip my logical host-as-primary-osd value, how can i make sure the osds chosen inside the hosts are the ones which already contain the data?
[23:21] <steveeJ> i tried increasing chooseleaf_vary_r to 4 but it doesn't help either. from my understanding, there should be no data movement necessary at all in this scenario
[23:22] <steveeJ> has anyone ever done something similar, or is everyone using cache tiering for this?
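[Editor's note: one alternative to encoding "this host holds the primary" in a ruleset is primary affinity, added in Firefly, which biases the primary election without moving any data. A sketch, with example osd ids and values:]

```shell
# Let the monitors accept affinity changes, then bias the election.
ceph tell mon.* injectargs '--mon_osd_allow_primary_affinity=true'
ceph osd primary-affinity osd.1 0     # osd.1 avoided as primary where possible
ceph osd primary-affinity osd.0 1     # osd.0 preferred as primary
```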
[23:25] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) Quit (Ping timeout: 480 seconds)
[23:29] <s3an2> Hi, I have an unfound object in my cluster - I have tried mark_unfound_lost to accept the loss after tracking down the RBD this object belongs to. However, this errors with "we haven't probed all sources" - is there a way to force this, or something else I should be doing?
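[Editor's note: the "haven't probed all sources" error usually means the PG still lists an OSD it wants to query under might_have_unfound; if that OSD is genuinely gone, declaring it lost lets the mark proceed. A command sketch, where the PG id 2.4 and osd id 1 are placeholders:]

```shell
ceph health detail                       # names the PG(s) with unfound objects
ceph pg 2.4 list_unfound                 # which objects are unfound
ceph pg 2.4 query                        # check might_have_unfound for OSDs not yet probed
ceph osd lost 1 --yes-i-really-mean-it   # only if that OSD is truly gone for good
ceph pg 2.4 mark_unfound_lost revert     # now the mark should be accepted
```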
[23:29] * FL1SK (~quassel@159.118.92.60) Quit (Read error: Connection reset by peer)
[23:30] * FL1SK (~quassel@159.118.92.60) has joined #ceph
[23:33] <lincolnb> will a 3.16.1 cephfs kernel module work against a 0.72 cluster?
[23:40] * rendar (~I@host30-177-dynamic.8-79-r.retail.telecomitalia.it) Quit ()
[23:43] <lincolnb> evidently yes
[23:46] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[23:48] * torrancew (~tray@mail.sudobangbang.org) Quit (Quit: 0.3.8)
[23:49] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[23:51] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[23:52] * Pedras (~Adium@216.207.42.140) has left #ceph
[23:52] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[23:52] * steki (~steki@net6-138-245-109.mbb.telenor.rs) Quit (Ping timeout: 480 seconds)
[23:52] * Pedras (~Adium@216.207.42.140) has joined #ceph
[23:53] <Venturi> Is there any cool open-source middleware written for Ceph RadosGW objects?
[23:56] <steveeJ> Venturi: what do you need/want that for?

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.