#ceph IRC Log


IRC Log for 2014-06-11

Timestamps are in GMT/BST.

[0:00] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[0:01] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[0:06] <lightspeed> is there a way to determine the space consumed by individual rbd images? "rbd info" and "rbd ls -l" only report the full provisioned size
[0:06] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:06] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:09] * Sysadmin88 (~IceChat77@94.4.22.173) has joined #ceph
[0:09] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[0:10] * Nacer_ (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[0:10] <lightspeed> just got fstrim working properly in my rbd-backed VMs, and thought it would be handy if I could now easily keep track of the disk usage within those VMs' rbd images from the ceph side
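(A rough way to see how much space an rbd image actually occupies, assuming a release that has "rbd diff"; the pool/image name "rbd/myimage" is a placeholder:)

    rbd diff rbd/myimage | awk '{ used += $2 } END { printf "%.1f MB\n", used/1024/1024 }'
    # sums the lengths of all allocated extents; after fstrim inside the VM this
    # should shrink to roughly what the guest filesystem really uses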
[0:10] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[0:11] * jskinner (~jskinner@69.170.148.179) Quit (Remote host closed the connection)
[0:11] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[0:12] <vilobhmm> How does the CEPH keyring information gets transferred from the Openstack API node to the Hypervisor node ? Does this keyring gets passed through message queue?
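(For context: in the usual OpenStack/RBD setup the keyring is not shipped over the message queue; the key is pre-provisioned on each hypervisor as a libvirt secret and referenced via rbd_secret_uuid. A hedged sketch, where "client.cinder" and the UUID handling are assumptions:)

    uuidgen                                  # generate once, reuse as rbd_secret_uuid in nova/cinder
    cat > secret.xml <<'EOF'
    <secret ephemeral='no' private='no'>
      <uuid>PASTE-THE-UUID-HERE</uuid>
      <usage type='ceph'><name>client.cinder secret</name></usage>
    </secret>
    EOF
    virsh secret-define --file secret.xml
    virsh secret-set-value --secret PASTE-THE-UUID-HERE --base64 "$(ceph auth get-key client.cinder)"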
[0:12] * dereky (~derek@proxy00.umiacs.umd.edu) Quit (Read error: Operation timed out)
[0:12] * analbeard (~shw@host86-155-196-30.range86-155.btcentralplus.com) has joined #ceph
[0:13] * analbeard (~shw@host86-155-196-30.range86-155.btcentralplus.com) Quit ()
[0:13] * rwheeler (~rwheeler@173.48.207.57) has joined #ceph
[0:14] * mikedawson_ (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:14] * mikedawson_ is now known as mikedawson
[0:16] * jskinner (~jskinner@69.170.148.179) Quit (Read error: Operation timed out)
[0:16] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[0:18] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[0:20] * ikrstic (~ikrstic@178-221-100-206.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[0:23] <mo-> can I just assume these 2/3 monitors that crashed gone forever? I could try extending it back out to 3 from the 1 surviving monitor, but I already know that going from 1 to 2 monitors doesnt work with conventional means
[0:28] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[0:29] <alphe> hello I have strange behavior
[0:29] <alphe> I made a rbd image the snapshot and cloned it to myimage2
[0:29] <alphe> I mapped it I mounted the mapped block device all ok
[0:30] <alphe> when I tried to umount that block device that umount process hung ...
[0:31] <alphe> all i have in syslog is that osd08 has communication problems but if I do a ceph -s i see no problems
[0:31] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[0:31] * rweeks (~goodeats@192.169.20.75.static.etheric.net) Quit (Quit: Leaving)
[0:31] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[0:32] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:32] * Kupo1 (~tyler.wil@wsip-68-14-231-140.ph.ph.cox.net) has left #ceph
[0:38] * sarob (~sarob@2601:9:1d00:c7f:19d2:e852:766:e2d6) has joined #ceph
[0:39] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[0:39] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[0:40] * rendar (~I@host138-179-dynamic.12-79-r.retail.telecomitalia.it) Quit ()
[0:43] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[0:43] * danieagle (~Daniel@186.214.48.173) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[0:43] * dmsimard is now known as dmsimard_away
[0:46] * sarob (~sarob@2601:9:1d00:c7f:19d2:e852:766:e2d6) Quit (Ping timeout: 480 seconds)
[0:48] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:52] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:54] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:54] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[0:55] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:56] * sputnik13 (~sputnik13@207.8.121.241) Quit ()
[0:56] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[0:58] * sleinen (~Adium@2001:620:0:26:f563:a181:42e9:f0fe) Quit (Quit: Leaving.)
[1:02] <mo-> can I copy the store.db from 1 monitor (while not running) too another monitor and run both?
[1:07] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:08] * nwat (~textual@eduroam-250-169.ucsc.edu) has joined #ceph
[1:15] <sherry> Hi, anyone here can help me how would I be able to remove all of the objects from the pool, rather than removing the pool itself?
[1:16] <sherry> my Ceph filesystem is mounted into a pool, but removing my files in the directory would not cause the objects to be removed from the pool!
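(One way to empty a pool from the rados side, a sketch only; "mypool" is a placeholder, and this assumes nothing else, e.g. CephFS metadata, still needs the objects:)

    rados -p mypool ls | while read -r obj; do
        rados -p mypool rm "$obj"    # deletes objects one by one; slow for large pools
    done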
[1:17] * oms101 (~oms101@p20030057EA1CB000EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:20] <mo-> anything? anybody?
[1:20] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) has joined #ceph
[1:22] * habalux (teemu@host-109-204-170-212.tp-fne.tampereenpuhelin.net) Quit (Ping timeout: 480 seconds)
[1:22] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[1:23] * jsfrerot (~jsfrerot@192.222.132.57) Quit (Remote host closed the connection)
[1:24] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:26] * oms101 (~oms101@p20030057EA3A0100EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:31] * hasues1 (~hazuez@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[1:31] * abonilla (~abonilla@c-69-253-241-144.hsd1.de.comcast.net) Quit (Ping timeout: 480 seconds)
[1:33] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[1:33] * sputnik13 (~sputnik13@207.8.121.241) Quit ()
[1:33] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[1:34] * sputnik13 (~sputnik13@207.8.121.241) Quit ()
[1:37] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[1:38] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[1:39] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[1:43] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[1:46] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:48] <mo-> maybe another angle to solve this: does a monitor store anything (on disk) that is specific to that monitor
[1:49] <mo-> specific/unique
[1:51] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) Quit (Quit: Leaving)
[2:00] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Quit: Leaving.)
[2:00] * eford (~fford@p509901f2.dip0.t-ipconnect.de) has joined #ceph
[2:03] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) has joined #ceph
[2:07] * dford (~fford@p509901f2.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:12] * evl (~chatzilla@139.216.138.39) has joined #ceph
[2:18] * drankis (~drankis__@89.111.13.198) Quit (Read error: Connection reset by peer)
[2:19] * huangjun (~oftc-webi@111.173.98.164) has joined #ceph
[2:20] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:26] * huangjun (~oftc-webi@111.173.98.164) Quit (Quit: Page closed)
[2:27] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[2:30] <joao> mo-, no
[2:30] <mo-> so my idea is safe?
[2:30] <joao> as long as the store is in good shape, yes
[2:30] <mo-> also THANK YOU for responding
[2:31] <joao> np, was on my way to bed and happened to check out what was going on here :p
[2:32] <joao> mo-, why would you want to do that though?
[2:34] <mo-> hm how short does this need to be? -.-
[2:34] <mo-> I am working with a 0.61.2 cluster, trying to bring it back to health to be able to upgrade. today 2 monitors died and wont start up, crashing hard
[2:35] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[2:35] <mo-> the way I read the bug report it sounds like to fix it, i'd need to upgrade and then start the broken monitors from zero. and I cant do that
[2:35] <mo-> so I was thinking to just replicate the 1 surviving monitor to get quorum back
[2:35] <mo-> if that makes sense
[2:36] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:37] * dmsimard_away is now known as dmsimard
[2:40] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[2:40] <mo-> joao: ^- sorry for the ping
[2:40] <joao> what bug report was that?
[2:40] <mo-> lemme find that back, one sec
[2:41] <joao> got it
[2:41] <joao> I scrolled up
[2:41] <mo-> ha. okay
[2:42] <mo-> the bug report was not about that specific crash message (PGMonitor.cc), but rather about the 'out of domain' thing
[2:42] * dmsimard is now known as dmsimard_away
[2:43] <mo-> http://tracker.ceph.com/issues/7626 but Im not entirely sure, its a different version after all
[2:44] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[2:44] <mo-> but its the only thing thats even close to what Im seing
[2:45] <joao> I think you're being bit by both
[2:46] <joao> I'm having a hard time recalling the specifics of either of those bugs though
[2:46] <mo-> understandable
[2:46] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Read error: Operation timed out)
[2:47] * huangjun (~kvirc@111.173.98.164) has joined #ceph
[2:47] <mo-> for now tho, I have moved the broken ceph-c directory aside and created a new one with the contents of the other node's ceph-a folder
[2:48] <mo-> havent tried starting it yet because you started helping tho
[2:48] <joao> if I were you, I'd consider running from cuttlefish as soon as possible
[2:48] <mo-> am planning to
[2:48] <joao> specially from such an early release
[2:49] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[2:49] <mo-> actually, I just logged into the rbd side of things to move the one remaining rbd off the cluster
[2:49] <mo-> so I could safely upgrade it
[2:49] <mo-> when I noticed it had fallen apart (again)
[2:49] <joao> there are not enough words to describe the world of pain cuttlefish is
[2:50] <mo-> oh I believe you. I may not have seen it all, but I have seen my share
[2:50] <mo-> like.. sync failing, leveldb growing uncontrollably (which is whats actually causing all the trouble)...
[2:51] <mo-> so anyhow. I guess I should just start mon.c now, see if that works, then do the same (copy over store.db) to mon.b?
[2:55] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[2:55] <joao> that may be your best option, yes
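(A sketch of the procedure being discussed, assuming default sysvinit service names and mon data paths, with mon.a healthy and mon.c broken:)

    service ceph stop mon.a                                    # quiesce the healthy store first
    mv /var/lib/ceph/mon/ceph-c /var/lib/ceph/mon/ceph-c.old   # keep the broken store around
    cp -a /var/lib/ceph/mon/ceph-a /var/lib/ceph/mon/ceph-c    # reuse the healthy store.db
    service ceph start mon.a
    service ceph start mon.c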
[2:55] <mo-> or do you happen to have another magic trick I might attempt, also possibly some tranquilizers
[2:56] <sherry> Hi, anyone here can help me how would I be able to remove all of the objects from the pool, rather than removing the pool itself? my Ceph filesystem is mounted into a pool, but removing my files in the directory would not cause the objects to be removed from the pool!
[2:56] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:56] <joao> but I would advise you to upgrade the monitors to latest cuttlefish, or even a later release at your earliest convenience
[2:56] <mo-> I was trying to move the data off the cluster to be able to safely upgrade, but I may not even be able to do that before it falls apart again
[2:57] <mo-> I might just be overly cautious... should I just upgrade to .61.9 and then to firefly tomorrow or something? without moving the data away
[2:57] <joao> also, iirc there was a change in mon protocol around .5 or .6 cuttlefish, so you may need to upgrade all your monitors at once
[2:57] <mo-> yes I know that much already, thanks tho :)
[2:59] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[2:59] <joao> well, upgrading to firefly would be best, but you should go through the release notes prior to that
[2:59] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[2:59] <mo-> so how risky is it to just go balls to the wall and upgrade the cluster without moving the data off of it / having backups
[3:00] <joao> I have this nagging sensation that you must go through dumpling first, but can't exactly pinpoint the source of it
[3:00] <joao> and I'm way too sleepy to be a reliable source atm
[3:00] <mo-> well I have tried the upgrade procedure on a virtual test cluster
[3:00] * nwat (~textual@eduroam-250-169.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:00] <mo-> I upgraded that from .61.2 to .61.9, and then to .80.1 directly, worked fine there
[3:01] <mo-> but yea I dont think going through d would be much of a hassle
[3:01] * bandrus (~oddo@adsl-71-137-197-172.dsl.scrm01.pacbell.net) Quit (Ping timeout: 480 seconds)
[3:02] <joao> as long as you upgrade all monitors and osds (there was also a protocol change in-between with regard to how the ceph tool talks to the cluster), and as long as you have at least one healthy monitor, things should be okay
[3:02] <joao> I should run for the day though
[3:03] <joao> best of luck
[3:03] <joao> o/
[3:03] <kraken> \o
[3:03] <mo-> thanks man
[3:03] <mo-> really
[3:03] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[3:05] * bandrus (~oddo@adsl-71-137-197-172.dsl.scrm01.pacbell.net) has joined #ceph
[3:09] * joef (~Adium@2601:9:2a00:690:e40a:8efa:752c:2235) has joined #ceph
[3:09] * joef (~Adium@2601:9:2a00:690:e40a:8efa:752c:2235) has left #ceph
[3:09] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:10] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:12] * KaZeR (~kazer@64.201.252.132) Quit (Ping timeout: 480 seconds)
[3:13] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[3:16] * sputnik13 (~sputnik13@207.8.121.241) Quit ()
[3:16] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[3:18] * capri (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[3:21] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) has joined #ceph
[3:21] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[3:21] * lupu (~lupu@86.107.101.214) has left #ceph
[3:21] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[3:27] * yanzheng (~zhyan@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[3:28] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[3:29] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:30] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:36] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:38] * sarob (~sarob@2601:9:1d00:c7f:b55f:75e9:7b8a:245e) has joined #ceph
[3:45] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:46] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[3:46] * sarob (~sarob@2601:9:1d00:c7f:b55f:75e9:7b8a:245e) Quit (Ping timeout: 480 seconds)
[3:46] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[3:55] * abonilla (~abonilla@c-69-253-241-144.hsd1.de.comcast.net) has joined #ceph
[3:59] * jrcresawn (~jrcresawn@ip24-251-38-21.ph.ph.cox.net) has joined #ceph
[4:00] * jrcresawn (~jrcresawn@ip24-251-38-21.ph.ph.cox.net) Quit (Remote host closed the connection)
[4:04] * dmsimard_away is now known as dmsimard
[4:09] * zhaochao (~zhaochao@124.205.245.26) has joined #ceph
[4:12] * dmsimard is now known as dmsimard_away
[4:14] * scuttlemonkey (~scuttlemo@63.138.96.2) has joined #ceph
[4:14] * ChanServ sets mode +o scuttlemonkey
[4:14] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:17] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[4:17] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[4:17] * yguang11 (~yguang11@2406:2000:ef96:e:c450:92ff:9080:d090) has joined #ceph
[4:21] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:21] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[4:23] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:29] * lupu (~lupu@86.107.101.214) has joined #ceph
[4:31] <kfei> I deployed a new ceph cluster with 3 OSDs, and I think it should be 0 object with 192 active+clean pgs. But it's '65 active, 54 active+degraded, 73 active+remapped' now.
[4:31] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[4:31] <kfei> And there is no error/warnings in `ceph -w`
[4:32] <kfei> Is this normal? Or how can I perform a recover/rebalancing/repair action to take it back to 192 active+clean state?
[4:35] <kfei> health: "HEALTH_WARN 54 pgs degraded; 192 pgs stuck unclean"
[4:36] * vbellur (~vijay@122.167.108.13) Quit (Read error: Connection reset by peer)
[4:47] * vbellur (~vijay@122.166.172.253) has joined #ceph
[4:56] <jammcq> kfei: how many hosts ?
[4:56] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[4:56] * bandrus1 (~oddo@adsl-71-137-197-211.dsl.scrm01.pacbell.net) has joined #ceph
[4:56] * sarob (~sarob@2601:9:1d00:c7f:b833:1ddd:5396:95f2) has joined #ceph
[4:56] <kfei> 3 OSDs are on a single host, and there is another host that acts as ceph consumer and admin-node
[4:56] * bandrus1 (~oddo@adsl-71-137-197-211.dsl.scrm01.pacbell.net) Quit ()
[4:57] <kfei> sorry, it's 4 OSDs
[4:57] <jammcq> I had a VERY similar problem with a single host and multiple OSDs
[4:57] <jammcq> I was told that the default CRUSH map is expecting multiple hosts so it's trying to spread the object over hosts that you don't have
[4:57] <jammcq> it was a simple fix
[4:57] <jammcq> I had to modify the CRUSH map
[4:58] <kfei> jammcq, it's weird, but I switched from a multihost deployment and that was fine, so maybe you are right
[4:58] <kfei> it was supposed to be multihost...
[4:59] <jammcq> i'm very new to Ceph and i expected it to work just fine on a single host, but it didn't; I had the same "stuck" issue
[4:59] <jammcq> http://ceph.com/docs/master/rados/operations/crush-map/
[4:59] <jammcq> that's the docs explaining CRUSH maps, but I only had to change 1 line of the map
[4:59] * bandrus (~oddo@adsl-71-137-197-172.dsl.scrm01.pacbell.net) Quit (Ping timeout: 480 seconds)
[4:59] <kfei> jammcq, so did you modify the CRUSH map? did it work?
[5:00] <jammcq> you need to do:
[5:00] <jammcq> sudo ceph osd getcrushmap -o /tmp/crushmap
[5:01] <jammcq> sudo crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
[5:01] <jammcq> then edit the /tmp/crushmap.txt file, change a line that looks like:
[5:01] <jammcq> step chooseleaf firstn 0 type host
[5:01] <jammcq> to:
[5:01] <jammcq> step chooseleaf firstn 0 type osd
[5:01] <jammcq> see the 'host' changed to 'osd'
[5:02] <kfei> jammcq, Nice! Let me try..
[5:02] <jammcq> then you need to compile and install the new crushmap
[5:02] <jammcq> sudo crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
[5:03] <jammcq> sudo ceph osd setcrushmap -i /tmp/crushmap.new
[5:03] <jammcq> I think that's it
[5:03] <jammcq> it worked for me
[5:03] <kfei> jammcq, I'm trying now :p
[5:04] * sarob (~sarob@2601:9:1d00:c7f:b833:1ddd:5396:95f2) Quit (Ping timeout: 480 seconds)
[5:07] <kfei> jammcq, now it first became 4/4 obj degraded (100%), and then auto-recovering
[5:07] <kfei> jammcq, and finally became 192 active+clean!
[5:07] <jammcq> :)
[5:08] <kfei> jammcq, thanks an awful lot! :p
[5:08] <jammcq> sure. glad I could help
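(The same fix, collected into one runnable sequence; a sketch that assumes the default CRUSH rule on a single-host cluster:)

    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
    # edit /tmp/crushmap.txt and change
    #   step chooseleaf firstn 0 type host
    # to
    #   step chooseleaf firstn 0 type osd
    crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
    ceph osd setcrushmap -i /tmp/crushmap.new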
[5:13] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:24] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[5:27] * sarob (~sarob@2601:9:1d00:c7f:795f:f2c4:7f47:4321) has joined #ceph
[5:28] * haomaiwang (~haomaiwan@124.161.72.234) has joined #ceph
[5:29] * sarob_ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[5:30] * haomaiwa_ (~haomaiwan@www27339ue.sakura.ne.jp) has joined #ceph
[5:30] * Vacum_ (~vovo@i59F797E4.versanet.de) has joined #ceph
[5:30] * sarob__ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[5:32] * haomaiwa_ (~haomaiwan@www27339ue.sakura.ne.jp) Quit (Remote host closed the connection)
[5:33] * sarob_ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[5:33] * haomaiwa_ (~haomaiwan@www27339ue.sakura.ne.jp) has joined #ceph
[5:33] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:35] * sarob (~sarob@2601:9:1d00:c7f:795f:f2c4:7f47:4321) Quit (Ping timeout: 480 seconds)
[5:36] * haomaiwang (~haomaiwan@124.161.72.234) Quit (Ping timeout: 480 seconds)
[5:37] * Vacum (~vovo@i59F79F0A.versanet.de) Quit (Ping timeout: 480 seconds)
[5:38] * sarob__ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:40] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[5:49] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[5:57] * habalux (teemu@host-109-204-170-212.tp-fne.tampereenpuhelin.net) has joined #ceph
[5:58] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[5:58] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) Quit (Ping timeout: 480 seconds)
[6:01] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[6:03] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[6:03] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:07] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:11] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[6:16] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[6:17] * yguang11 (~yguang11@2406:2000:ef96:e:c450:92ff:9080:d090) Quit (Remote host closed the connection)
[6:18] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[6:19] <sherry> anyone here?!
[6:24] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) has joined #ceph
[6:34] * haomaiwa_ (~haomaiwan@www27339ue.sakura.ne.jp) Quit (Remote host closed the connection)
[6:34] * haomaiwang (~haomaiwan@www27339ue.sakura.ne.jp) has joined #ceph
[6:38] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[6:39] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[6:48] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[6:55] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:55] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[6:57] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[6:58] * sleinen1 (~Adium@2001:620:0:26:6cd6:2c75:a9e0:952e) has joined #ceph
[6:58] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Read error: Connection reset by peer)
[6:58] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[6:59] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[6:59] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[7:06] * sleinen1 (~Adium@2001:620:0:26:6cd6:2c75:a9e0:952e) Quit (Ping timeout: 480 seconds)
[7:07] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[7:07] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[7:10] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[7:10] * sleinen1 (~Adium@2001:620:0:26:7d04:65a7:4d29:768f) has joined #ceph
[7:16] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:bcb7:980a:7383:581d) has joined #ceph
[7:17] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:21] * vbellur (~vijay@122.166.172.253) Quit (Ping timeout: 480 seconds)
[7:21] * Cube (~Cube@66.87.130.3) Quit (Read error: Connection reset by peer)
[7:21] * Cube (~Cube@66-87-130-3.pools.spcsdns.net) has joined #ceph
[7:22] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[7:24] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[7:27] * thb (~me@0001bd58.user.oftc.net) has joined #ceph
[7:30] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[7:30] * sarob (~sarob@2601:9:1d00:c7f:6867:dedb:15d1:4d76) has joined #ceph
[7:35] * yguang11_ (~yguang11@2406:2000:ef96:e:c450:92ff:9080:d090) has joined #ceph
[7:35] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[7:38] * sarob (~sarob@2601:9:1d00:c7f:6867:dedb:15d1:4d76) Quit (Ping timeout: 480 seconds)
[7:39] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[7:40] <huangjun> when client B deletes files (about 10GB in size) while client A is reading different big files, the reading client A hangs for about 5~10s
[7:41] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[7:41] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) has joined #ceph
[7:42] * vbellur (~vijay@209.132.188.8) has joined #ceph
[7:46] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:47] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:bcb7:980a:7383:581d) Quit (Ping timeout: 480 seconds)
[7:53] * odyssey4me (~odyssey4m@41-132-44-122.dsl.mweb.co.za) has joined #ceph
[8:00] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[8:02] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:bcb7:980a:7383:581d) has joined #ceph
[8:04] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[8:04] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) Quit (Remote host closed the connection)
[8:05] <yanzheng> huangjun, cephfs?
[8:09] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) has joined #ceph
[8:09] <sherry> huangjun: are u familiar with cephFS?
[8:11] <sherry> do u know why removing a file in a directory which is mounted to a special pool, does not remove the object in the pool and disk itself!?
[8:11] * sleinen1 (~Adium@2001:620:0:26:7d04:65a7:4d29:768f) Quit (Quit: Leaving.)
[8:14] <yanzheng> sherry, single client or multiple clients?
[8:19] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:bcb7:980a:7383:581d) Quit (Remote host closed the connection)
[8:20] <sherry> yanzehng: single
[8:21] * Sysadmin88 (~IceChat77@94.4.22.173) Quit (Quit: Never put off till tomorrow, what you can do the day after tomorrow)
[8:22] <sherry> do u have any idea, yanzheng?
[8:23] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[8:25] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:28] * madkiss (~madkiss@2001:6f8:12c3:f00f:707b:4485:1f0c:957b) has joined #ceph
[8:30] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[8:30] * sarob (~sarob@2601:9:1d00:c7f:2075:8ad6:5877:b779) has joined #ceph
[8:31] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit ()
[8:36] * capri (~capri@212.218.127.222) has joined #ceph
[8:38] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[8:38] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[8:38] * sarob (~sarob@2601:9:1d00:c7f:2075:8ad6:5877:b779) Quit (Ping timeout: 480 seconds)
[8:38] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:39] <huangjun> yanzheng: yes, we use cephfs,
[8:40] <sherry> huangjun: it was me that asked u, do u know why removing a file in a directory which is mounted to a special pool, does not remove the object in the pool and disk itself!?
[8:40] <huangjun> sherry: you should upgrade your kclient,
[8:41] <yanzheng> deleting the file create too many osd operations ?
[8:41] <sherry> yanzheng: yes
[8:42] <sherry> huangjun; yes, I did bt it did not make any difference
[8:42] <huangjun> if we delete a 10GB file, the mds will send 10*1024/4 = 2560 delete requests?
[8:43] * rendar (~I@host56-182-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[8:43] <sherry> I didn't calculate that, bt my mds has a really high spec
[8:43] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[8:44] <yanzheng> yes
[8:45] <yanzheng> no throttle mechanism in mds so far
[8:45] <yanzheng> sherry, do you know inode number of that file
[8:45] <huangjun> well, so the read request will queue after the delete requests, and then it will affect the read performance,
[8:46] <yanzheng> yes
[8:46] <huangjun> if we are playing video, it will hang about 3s
[8:46] <huangjun> what can i do to decrease the hang time, add more RAM?
[8:46] <sherry> yanzheng: u mean that I need to delete through rados?
[8:46] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[8:46] <yanzheng> no
[8:47] <yanzheng> ceph mds tell \* dumpcache
[8:47] <huangjun> sherry: show your "ceph df" output
[8:47] <yanzheng> the cache dump can give you hint why the inode is not deleted
[8:48] <yanzheng> it's likely the inode is referenced by someone
[8:49] <sherry> right, thanks, I will check that out.
[8:49] <huangjun> yanzheng: we apply your patch about the delete problem,and works fine
[8:49] <yanzheng> which patch? I don't remember
[8:50] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[8:54] <huangjun> https://www.mail-archive.com/ceph-devel@vger.kernel.org/msg15437.html
[8:55] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) has joined #ceph
[8:55] <huangjun> sherry: i can not reproduce your problem, i delete a 10GB file, and it is removed from the backend and disk
[8:56] <sherry> huangjun: is it a single/multiple client?
[8:56] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[8:56] <huangjun> single
[8:57] <huangjun> what your ceph version?
[8:57] <huangjun> i use 0.80.1
[8:57] <sherry> it's emperor
[8:57] <sherry> 72.2
[8:57] * ajazdzewski (~quassel@lpz-66.sprd.net) has joined #ceph
[8:58] <huangjun> you can check the patch yanzheng pushed, and recompile the kclient,
[9:01] <huangjun> yanzheng: the osdmap in the kernel client has no information about whether a pool is full or not?
[9:02] * sleinen (~Adium@2001:620:0:26:c1fe:ab06:e21e:2c) has joined #ceph
[9:04] <yanzheng> I think the kclient only has if the cluster is full or not
[9:04] <huangjun> yes,it's
[9:05] <huangjun> but if i set the pool quota, the kclient doesn't recognize whether the pool is full or not, it just sends writes to the OSDs
[9:06] <huangjun> if i want to check the pool flag in the kclient, where should i begin, add a variable to the osdmap?
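(For reference, the pool quota being referred to is set and inspected like this; "mypool" and the limits are placeholders. Whether the kernel client honours the per-pool full flag is exactly the open question above:)

    ceph osd pool set-quota mypool max_bytes $((50 * 1024 * 1024 * 1024))   # 50 GB cap
    ceph osd pool set-quota mypool max_objects 1000000
    ceph osd pool get-quota mypool                                          # show the current limits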
[9:10] * zerick (~eocrospom@190.118.32.106) Quit (Ping timeout: 480 seconds)
[9:11] * wschulze (~wschulze@94.119.7.66) has joined #ceph
[9:15] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[9:18] * yguang11_ (~yguang11@2406:2000:ef96:e:c450:92ff:9080:d090) Quit (Remote host closed the connection)
[9:18] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[9:23] * analbeard (~shw@support.memset.com) has joined #ceph
[9:28] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[9:29] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[9:31] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[9:31] * yguang11 (~yguang11@2406:2000:ef96:e:44d5:67b1:7466:1954) has joined #ceph
[9:32] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[9:35] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:37] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:38] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) has joined #ceph
[9:40] * yguang11 (~yguang11@2406:2000:ef96:e:44d5:67b1:7466:1954) Quit (Ping timeout: 480 seconds)
[9:41] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[9:41] * thomnico (~thomnico@2a01:e35:8b41:120:213f:c6e2:f791:8d5e) has joined #ceph
[9:43] * yguang11 (~yguang11@2406:2000:ef96:e:19fc:639c:6c87:2a73) has joined #ceph
[9:49] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:50] * haomaiwang (~haomaiwan@www27339ue.sakura.ne.jp) Quit (Remote host closed the connection)
[9:51] * haomaiwang (~haomaiwan@www27339ue.sakura.ne.jp) has joined #ceph
[9:54] * kwaegema (~kwaegema@daenerys.ugent.be) has joined #ceph
[9:55] * Cube (~Cube@66-87-130-3.pools.spcsdns.net) Quit (Quit: Leaving.)
[9:59] * wschulze (~wschulze@94.119.7.66) Quit (Quit: Leaving.)
[9:59] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[10:00] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[10:01] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[10:01] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[10:02] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[10:02] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:06] * lucas1 (~Thunderbi@218.76.25.66) Quit (Ping timeout: 480 seconds)
[10:15] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[10:17] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[10:18] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[10:18] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[10:20] * yguang11 (~yguang11@2406:2000:ef96:e:19fc:639c:6c87:2a73) Quit (Ping timeout: 480 seconds)
[10:22] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[10:23] * drankis (~drankis__@89.111.13.198) has joined #ceph
[10:26] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[10:26] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) has joined #ceph
[10:27] * Ronald (~oftc-webi@vpn.mc.osso.nl) Quit (Quit: Page closed)
[10:28] * sarob (~sarob@2601:9:1d00:c7f:2847:6caf:8616:b79) has joined #ceph
[10:29] * vbellur (~vijay@nat-pool-blr-t.redhat.com) has joined #ceph
[10:30] * sarob_ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[10:32] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[10:34] * sarob_ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[10:34] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[10:36] * sarob (~sarob@2601:9:1d00:c7f:2847:6caf:8616:b79) Quit (Ping timeout: 480 seconds)
[10:41] * davyjang (~oftc-webi@171.213.52.155) has joined #ceph
[10:41] <davyjang> after I created 1 mon and prepared 2 osds, I checked and found that the fsid of the three are the same, but when I input *ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1*, the error output was as follows: [node2][WARNIN] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid 3e68a2b5-cbf3-4149-9462-b89e2a40236e It was strange that the fsid in the output is different from that of the three nodes, and if I modified the three nodes
[10:42] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[10:42] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[10:42] <davyjang> another error happend as "[node2][WARNIN] 2014-06-11 01:39:17.738451 b63cfb40 0 librados: client.bootstrap-osd authentication error (1) Operation not permitted"
[10:43] <davyjang> what should I do?
[10:44] <kfei> davyjang, make sure you clean things under /var/lib/ceph/osd/* (on OSDs) before you run ceph-deploy
[10:45] <davyjang> why?
[10:46] <kfei> in case you have ran `ceph-deploy install <OSDs>` before
[10:46] <kfei> sorry I mean `ceph-deploy osd prepare <OSDs>`
[10:47] <davyjang> but it is always empty
[10:47] <davyjang> now it is either
[10:48] <kfei> and /etc/ceph/ceph.conf?
[10:49] * allsystemsarego (~allsystem@86.121.2.97) has joined #ceph
[10:49] <ghartz> someone use proxmox with ceph (rbd) ?
[10:49] <singler_> also on ceph-deploy host there may be leftovers from previous cluster
[10:49] <ghartz> How can I check if there is an error somewhere ?
[10:49] <ghartz> Can't find any log
[10:49] <davyjang> no, /var/lib/ceph/osd/
[10:50] <davyjang> the conf dir is normal
[10:50] <mo-> ghartz: what error would you be looking for?
[10:51] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Quit: Leaving.)
[10:52] <davyjang> I only have one cluster,so the fsid should be unique,right?
[10:55] * drankis (~drankis__@89.111.13.198) Quit (Ping timeout: 480 seconds)
[10:55] <kfei> davyjang, do you have any previous install on the same ceph-deploy node and OSD nodes?
[10:56] <davyjang> yes
[10:56] * yanzheng (~zhyan@jfdmzpr01-ext.jf.intel.com) Quit (Quit: Leaving)
[10:56] <davyjang> I manually install without ceph-deploy, and then with it
[10:57] <davyjang> I did not uninstall on every node, I supposed it would overwrite the old install
[10:57] <kfei> davyjang, then I will suggest first run `ceph-deploy purge` and manually clean `/etc/ceph/*` and `/var/lib/ceph/*`
[10:58] <davyjang> ok,let me try
[10:58] <kfei> to ensure there is no previous configs left
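(A sketch of that cleanup with ceph-deploy, assuming the OSD hosts are node2 and node3:)

    ceph-deploy purge node2 node3        # remove the ceph packages from the targets
    ceph-deploy purgedata node2 node3    # wipe /var/lib/ceph and /etc/ceph on the targets
    ceph-deploy forgetkeys               # drop the old keys cached on the admin node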
[11:02] * CAPSLOCK2000 (~oftc@541856CC.cm-5-1b.dynamic.ziggo.nl) has joined #ceph
[11:13] * blue (~blue@irc.mmh.dk) has joined #ceph
[11:14] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[11:15] <ghartz> mo-, I don't know actually
[11:16] <ghartz> I get something like a time out
[11:16] <ghartz> ceph version on proxmox .72.2, ceph version cluster .8
[11:16] <ghartz> maybe a mismatch version or something
[11:16] <ghartz> but I can't figure out why I can't access to the pool
[11:16] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[11:16] <ghartz> they can ping each other, no firewall
[11:17] <ghartz> other client can access to the cluster
[11:17] <mo-> if you manually use the rados command on the proxmox nodes, it should tell you whats up
[11:18] <mo-> the version thing is somewhat confusing to me too, because the docs say the "ceph" command (CLI) needs to be of an equally recent version, but afaik qemu only utilizes rados and the "ceph" command is only present because proxmox comes with its own ceph server included
[11:18] <ghartz> mo-, from rados I got a "mismatch version"
[11:19] <mo-> interesting, that explains that then I suppose
[11:20] <ghartz> but from the proxmox GUI, I do not get any error
[11:20] <mo-> generally the GUI is only a wrapper for whats going on under the hood. it may or may not tell you whats going on
[11:21] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[11:21] <blue> any idea how to get rid of a pool without a name?
[11:21] <mo-> the webinterface will only ever show you the rbd contents if everything is fine, or not, not showing error messages in the latter case
[11:21] <blue> pool 17 '' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 4031 owner 0
[11:23] * pinguini (~pinguini@host-94-251-111-51.bbcustomer.zsttk.net) has joined #ceph
[11:23] <pinguini> hi all
[11:23] <pinguini> how i can upgrade from 0.80.1 to 0.81
[11:23] <mo-> ghartz: the debian wheezy repositories have ceph-common 0.80.1 packages though, you may wish to upgrade to those
[11:23] <pinguini> http://ceph.com/debian-firefly/pool/main/c/ceph/ there is no 0.81 deb package
[11:28] <pinguini> oh 0.81 - development release
[11:29] * sarob (~sarob@2601:9:1d00:c7f:851d:9749:3947:7504) has joined #ceph
[11:30] <Infitialis> I'm trying to setup Calamari but when I execute supervisord -n -c dev/supervisord.conf I get exited: salt-master (exit status 4; not expected)
[11:31] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) has joined #ceph
[11:31] <ingard> everything gets started when you initialize
[11:31] <ingard> at least that worked for me
[11:31] <ingard> http://calamari.readthedocs.org/en/latest/development/building_packages.html
[11:31] <ingard> sudo /opt/calamari/venv/bin/calamari-ctl initialize
[11:31] <ingard> that bit
[11:31] <ingard> specifically
[11:33] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[11:33] <Infitialis> ingard: not that it matters that much, but I'm following this one https://github.com/ceph/calamari which doesnt use vagrant
[11:34] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) has left #ceph
[11:34] <ingard> right you're not using packages then?
[11:35] <Infitialis> ingard: ah wait I did initialize, sorry.
[11:36] <Infitialis> the strange thing, I don't really get any logging.
[11:37] * sarob (~sarob@2601:9:1d00:c7f:851d:9749:3947:7504) Quit (Ping timeout: 480 seconds)
[11:39] <Infitialis> ingard: atleast not that I can find, the initialising didnt solve the problem
[11:39] <davyjang> thanks kfei
[11:42] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[11:47] * analbeard (~shw@support.memset.com) has joined #ceph
[11:47] <ingard> Infitialis:
[11:47] <ingard> root 28609 0.0 0.2 61352 11572 ? Ss Jun06 2:25 /usr/bin/python /usr/bin/supervisord
[11:48] <ingard> root 28738 10.0 1.2 172404 52228 ? Sl Jun06 655:11 \_ /opt/calamari/venv/bin/python /opt/calamari/venv/bin/carbon-cache.py --debug --config /etc/graphite/carbon.conf start
[11:48] <ingard> root 29194 0.7 2.8 1069368 115356 ? Sl Jun06 48:33 \_ /opt/calamari/venv/bin/python /opt/calamari/venv/bin/cthulhu-manager
[11:48] <ingard> this is how it looks in the end for me at least
[11:50] <Infitialis> ingard: I cant really look at my processes(without starting a new terminal) because I need to cancel the supervisord.
[11:50] <Infitialis> like this
[11:50] <Infitialis> http://pastebin.com/3x44H3m0
[11:54] <kfei> davyjang, np!
[11:55] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[11:55] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[11:55] * aldavud (~aldavud@194.146.213.1) has joined #ceph
[11:56] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[11:57] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[12:00] * aldavud_ (~aldavud@194.146.213.1) has joined #ceph
[12:03] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[12:03] <ingard> anyone got a main.js that isnt minified?
[12:03] <ingard> for calamari that is
[12:04] * davyjang (~oftc-webi@171.213.52.155) Quit (Quit: Page closed)
[12:04] <ingard> calamari-client/dashboard
[12:08] * vbellur (~vijay@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[12:09] <Infitialis> ingard: I've got this in the log Unable to bind socket, error: [Errno 98] Address already in use
[12:09] <Infitialis> with the salt master
[12:10] <Infitialis> ah now it does work :S :P so far...
[12:12] <Infitialis> ingard: and end up with The requested URL /login/ was not found on this server.
[12:15] <Infitialis> even though there aren't any minions yet there should atleast be a login screen, shouldn't there?
[12:20] * vbellur (~vijay@209.132.188.8) has joined #ceph
[12:27] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) Quit (Quit: Leaving)
[12:29] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[12:33] * aldavud (~aldavud@194.146.213.1) Quit (Read error: Operation timed out)
[12:35] * aldavud_ (~aldavud@194.146.213.1) Quit (Ping timeout: 480 seconds)
[12:36] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[12:37] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:38] * avati (~avati@nat-pool-rdu-t.redhat.com) has joined #ceph
[12:38] * odyssey4me (~odyssey4m@41-132-44-122.dsl.mweb.co.za) Quit (Read error: Operation timed out)
[12:40] * a2_ (~avati@209.132.181.86) has joined #ceph
[12:40] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Read error: Operation timed out)
[12:41] * a2 (~avati@209.132.181.86) Quit (Read error: Connection reset by peer)
[12:47] * avati (~avati@nat-pool-rdu-t.redhat.com) Quit (Ping timeout: 480 seconds)
[12:58] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[13:03] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[13:03] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[13:05] * huangjun (~kvirc@111.173.98.164) Quit (Ping timeout: 480 seconds)
[13:09] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[13:10] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[13:22] * The_Bishop__ (~bishop@e180175052.adsl.alicedsl.de) has joined #ceph
[13:29] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[13:29] * The_Bishop_ (~bishop@f055051063.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[13:30] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[13:30] <mo-> has anybody tried increasing the size of a pool before? Tried it with 2 test clusters (.80.1) and always end up with some PGs degraded
[13:32] <absynth> well, isn't that normal? after increasing the size, there are new PGs which aren't yet replicated
[13:32] <mo-> well yes and no
[13:32] <mo-> because they end up being stuck degraded or remapped, even days later
[13:32] <absynth> you don#t have recovery_threads set to 0, right?
[13:33] <mo-> unless thats the default value, no
[13:33] <absynth> nah, it's 4 or 8 or something by default
[13:33] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[13:34] <mo-> was wondering whether there was a way to tell the PGs to go fix themselves. should be no problem since all OSDs are up and everything. but they just dont
[13:36] <absynth> from my understanding they really should
[13:37] <absynth> you do have enough OSDs and free space to satisfy the replication criteria, right?
[13:37] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:37] <mo-> pretty sure, yea
[13:37] * dmsimard_away is now known as dmsimard
[13:38] <mo-> the second time I did this just now (just telling this as a proxy), one of the monitors crashed during this recovery after going from 2 to 3
[13:38] <mo-> the log just shows the usual "paxos recovering ....." messages and then nothing. service ceph status says the mon is "dead"
[13:40] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[13:41] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[13:42] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:43] * zhaochao (~zhaochao@124.205.245.26) has left #ceph
[13:58] * CAPSLOCK2000 (~oftc@541856CC.cm-5-1b.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[13:58] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[13:59] * CAPSLOCK2000 (~oftc@541856CC.cm-5-1b.dynamic.ziggo.nl) has joined #ceph
[14:09] * spekzor (spekzor@d.clients.kiwiirc.com) has joined #ceph
[14:11] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) has joined #ceph
[14:14] <spekzor> hi, last week we've upgraded to firefly 80.1 on a production cluster (3 nodes, 18 osds). Upgrade went fine but the speed of the entire cluster dropped drastically. We've benched all OSDs but none seem to be slow. Also no weird messages in dmesg. If i map a rbd device on one of the nodes and do a dd with conv=sync oflag=direct i get only 10mb/sec, used to be 170.
[14:14] <spekzor> We use 10gbit networking, dedicated osd and cluster network. Problems started after rebooting an osd server. Recovery of 6% degraded data took ages and was exponentially slower, until recovery stalled completely at 1.1%. Until i stopped some clients, then recovery kicked back in again and finished. Still the cluster is very slow. Especially read.
[14:16] <spekzor> we use proxmox ve which uses ceph version 0.67.7. But as i mentioned, performance is also low on local mapped rbd dev.
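(A sketch of the kind of local benchmark being described, assuming a throwaway image "test/bench-image"; don't point dd at an image that holds real data:)

    rbd map test/bench-image
    dd if=/dev/zero of=/dev/rbd/test/bench-image bs=4M count=256 conv=sync oflag=direct
    rbd unmap /dev/rbd/test/bench-image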
[14:17] <spekzor> sage already asked me to pull some debug from an osd but forgot the command and the file to look in.
[14:17] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (Read error: Connection reset by peer)
[14:18] * berant (~blemmenes@gw01.ussignal.com) has joined #ceph
[14:18] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[14:18] * mjeanson_ is now known as mjeanson
[14:19] <cronix> hi
[14:19] <cronix> sage: you here?
[14:19] <absynth> spekzor: that sounds really dubious
[14:19] <absynth> cronix: hardly, it's 5 am where he is
[14:19] <mo-> spekzor: do you have a bugtracker going about that already? because Id love to follow it
[14:19] <cronix> oh okay :)
[14:19] <cronix> weve just finished upgrading our POC cluster to firefly
[14:20] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[14:20] <spekzor> no not yet, i need to pull some debug out of the cluster but forgot how. debug-ms or something?
[14:20] <absynth> i am trying to remember too
[14:20] <cronix> and after that was done ive set crush tunables to optimal, now our OSD's are dying again, last lines of their logfiles: http://pastebin.com/WDN7MCfg
[14:20] <mo-> yes uhm
[14:20] <mo-> ceph-osd --help shows the parameter I think
[14:20] <mo-> ceph-mon does, I know that much
[14:21] <mo-> think its --debug_ms 20
[14:21] <spekzor> then i need to restart an osd right?
[14:21] <absynth> no, you can inject that during runtime
[14:21] <spekzor> but then it will log to a file?
[14:21] <absynth> ceph tell osd.* injectargs '--debug-ms 20' or something
[14:21] <absynth> insert \ where appropriate to avoid bash globbing
[14:22] <absynth> i think it logs to /var/log/ceph/ceph-osd.x.log or something by default? not sure though
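(Putting those pieces together, a sketch assuming default log paths:)

    ceph tell osd.\* injectargs '--debug-ms 20 --debug-osd 20'   # the backslash stops bash from globbing *
    tail -f /var/log/ceph/ceph-osd.0.log                         # default location, one file per OSD id
    ceph tell osd.\* injectargs '--debug-ms 0 --debug-osd 0'     # turn the noise back down afterwards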
[14:23] * The_Bishop_ (~bishop@e178115251.adsl.alicedsl.de) has joined #ceph
[14:24] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[14:24] <cronix> thats correct
[14:24] <cronix> afair
[14:25] * The_Bishop__ (~bishop@e180175052.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[14:25] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[14:26] * danieagle (~Daniel@186.214.48.173) has joined #ceph
[14:27] <cronix> and i get alot of the following lines in my logs:
[14:28] <cronix> 2014-06-11 14:23:17.922644 7f5524f57700 0 -- 10.78.6.4:6835/4416182 >> 10.78.6.11:6939/4718540 pipe(0x38181900 sd=570 :51705 s=2 pgs=7881 cs=1 l=0 c=0x11dec840).fault with nothing to send, going to standby
[14:28] * scuttlemonkey (~scuttlemo@63.138.96.2) Quit (Remote host closed the connection)
[14:29] <spekzor> hmm, /var/log/ceph/ceph-osd.x.log stays empty after injecting '--debug-ms 20 --debug-osd 1' on osd.0
[14:29] * sarob (~sarob@2601:9:1d00:c7f:539:77f0:b606:3ada) has joined #ceph
[14:29] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[14:31] <cronix> 2014-06-11 14:31:17.777959 7fbe48028700 -1 osd.602 14062 heartbeat_check: no reply from osd.1325 ever on either front or back, first ping sent 2014-06-11 14:21:41.265424 (cutoff 2014-06-11 14:30:57.777950)
[14:31] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) Quit (Quit: Leaving)
[14:35] <spekzor> absynth, log files stay empty
[14:35] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[14:36] * vbellur (~vijay@209.132.188.8) Quit (Quit: Leaving.)
[14:37] * sarob (~sarob@2601:9:1d00:c7f:539:77f0:b606:3ada) Quit (Ping timeout: 480 seconds)
[14:38] * odyssey4me (~odyssey4m@165.233.205.190) has joined #ceph
[14:41] * odyssey4me_ (~odyssey4m@165.233.71.2) has joined #ceph
[14:42] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[14:42] * diegows (~diegows@host-216-57-132-113.customer.veroxity.net) has joined #ceph
[14:43] * odyssey4me is now known as Guest13282
[14:43] * odyssey4me_ is now known as odyssey4me
[14:43] <absynth> spekzor: syslog, maybe?
[14:44] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[14:45] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) has joined #ceph
[14:45] <spekzor> no cigar
[14:45] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[14:46] * bandrus (~oddo@adsl-71-137-199-24.dsl.scrm01.pacbell.net) has joined #ceph
[14:46] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Remote host closed the connection)
[14:46] * Guest13282 (~odyssey4m@165.233.205.190) Quit (Ping timeout: 480 seconds)
[14:48] <spekzor> did open a ticket though: http://tracker.ceph.com/issues/8582
[14:48] * sz0 (~sz0@94.54.193.66) has joined #ceph
[14:50] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:55] <kfei> Is it possible to enlarge QEMU-RBD driver's read_ahead size? Seems no docs mentioned this
[14:56] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[14:56] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit ()
[14:56] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[14:59] <alfredodeza> seapasulli: ping
[14:59] * sleinen (~Adium@2001:620:0:26:c1fe:ab06:e21e:2c) Quit (Quit: Leaving.)
[14:59] <alfredodeza> seapasulli: issue 7157
[14:59] <kraken> alfredodeza might be talking about http://tracker.ceph.com/issues/7157 [ceph-disk list fails in encrypted disk setup]
[14:59] <alfredodeza> thanks kraken
[14:59] * kraken is flabbergasted by the staturated declaration of divinity
[15:01] * sleinen (~Adium@130.59.94.228) has joined #ceph
[15:02] * sleinen1 (~Adium@2001:620:0:26:d91c:e204:656f:efab) has joined #ceph
[15:05] * vbellur (~vijay@122.166.172.253) has joined #ceph
[15:05] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Quit: Leaving)
[15:09] <janos_> saturated. haha
[15:09] * sleinen (~Adium@130.59.94.228) Quit (Ping timeout: 480 seconds)
[15:12] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Ping timeout: 480 seconds)
[15:16] * huangjun (~kvirc@117.151.45.47) has joined #ceph
[15:18] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[15:19] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[15:29] * sarob (~sarob@2601:9:1d00:c7f:de8:8ef8:df53:8704) has joined #ceph
[15:31] * abonilla (~abonilla@c-69-253-241-144.hsd1.de.comcast.net) Quit (Quit: leaving)
[15:31] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Ping timeout: 480 seconds)
[15:33] <ingard> hi. after messing around with replication factor on one pool i've got all of that pools pg's degraded
[15:33] * diegows (~diegows@host-216-57-132-113.customer.veroxity.net) Quit (Read error: Connection reset by peer)
[15:33] <ingard> will it auto fix or do i have to do something?
[15:33] <ingard> i'm on firefly btw
[15:34] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[15:35] <mo-> if you increase the size, the objects need replicate once more, which will take time
[15:35] <ingard> yeah thats what I did
[15:35] <ingard> but its been hours
[15:35] <ingard> and i'm not seeing that number change
[15:35] <ingard> its still 4096
[15:35] <ingard> WARN 4096 pgs degraded
[15:35] <ingard> WARN recovery 40052/125283 objects degraded (31.969%)
[15:36] <mo-> Apparently (guess on my end) doing what you did makes some PGs show as degraded
[15:36] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[15:37] <mo-> whereas it should actually show the recovery as blocked by client IO. this is what Im assuming, also according to this ticket that was posted just minutes ago: issue 8582
[15:37] <kraken> mo- might be talking about http://tracker.ceph.com/issues/8582 [Cluster very slow after upgrade to 80.1]
[15:37] * sarob (~sarob@2601:9:1d00:c7f:de8:8ef8:df53:8704) Quit (Ping timeout: 480 seconds)
[15:39] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[15:39] <cronix> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.616 --keyring=/var/lib/ceph/osd/ceph-616/keyring osd crush create-or-move -- 616 3.64 host=csliveeubs-u01b01 root=default'
[16:09] <cronix> i only get OSDs up again after several start attempts, they time out a lot, has anyone else noticed this behaviour?
[15:43] <ingard> is there a limit on how fast the pgs will replicate?
[15:43] <ingard> i can see traffic on my nodes of 5-10 mbit/s
[15:43] <ingard> and its otherwise idle
[15:43] <singler_> ingard: do you have enough failover domains?
[15:43] <ingard> i have no idea :)
[15:44] <ingard> whats that?
[15:44] <singler_> if you have 2 hosts and set replication to 3, 1/3 of your pgs would be degraded (considering default crush rules)
[15:44] <ingard> i've got 3 hosts
[15:44] <ingard> and set the repl to 4
[15:45] <singler_> same thing applies
[15:45] <ingard> and for that pool all my pgs are degraded
[15:45] <singler_> by default no copy of pg will be put on same host
[15:45] <ingard> right
[15:45] * danieagle (~Daniel@186.214.48.173) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[15:46] <ingard> i'll change it to 2 then
[15:46] <ingard> thx for that :)
[15:46] <singler_> you have 3 hosts, so 3 replicas can be placed on different hosts; the 4th replica does not have a host to go on
[15:46] <ingard> right yeah i got it
[15:46] <singler_> you can set the failure domain to osd by changing the crush map
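A rough sketch of what singler_ describes, assuming a firefly-era CLI: create a replicated rule whose failure domain is the OSD rather than the host, then point the pool at it (the rule name here is made up, and .rgw.buckets is just the pool from this discussion):

    ceph osd crush rule create-simple replicated-osd default osd
    ceph osd crush rule dump                       # note the rule_id of the new rule
    ceph osd pool set .rgw.buckets crush_ruleset <rule_id>

With an osd-level failure domain, two copies of a PG may land on the same host, so this trades host-failure protection for placeable replicas.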
[15:47] <ingard> what happens when i lower the repl factor to 2 though?
[15:47] <ingard> it was initially 3
[15:47] * tracphil (~tracphil@130.14.71.217) Quit (Quit: leaving)
[15:47] <ingard> i was expecting lots of stuff to happen but the dashboard just now says "everything looks good"
[15:47] <janos_> i would imagine it's doing a lazy delete of unneeded copies. but i don't actually know
[15:48] <singler_> well, you already had copies of the PGs on all hosts, so I guess the 3rd copy just got deleted
[15:48] <singler_> also you probably should set min_size to 1 in this case if it is more than 1
[15:50] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[15:50] * sleinen1 (~Adium@2001:620:0:26:d91c:e204:656f:efab) Quit (Quit: Leaving.)
[15:51] <ingard> min_size ?
[15:53] <singler_> http://ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas
[15:53] * diegows (~diegows@190.190.5.238) has joined #ceph
[15:54] <ingard> so is this not the same as the replication factor for the pool?
[15:55] * evl (~chatzilla@139.216.138.39) Quit (Remote host closed the connection)
[15:55] <huangjun> hi, has anyone tested performance with 24 OSDs?
[15:58] <mo-> ingard: min_size is the number of replicas that must be OK for the cluster to accept IO to/from it
[15:58] <singler_> ingard: "size" is the replication factor. "min_size" is the minimum number of replicas needed to process IO (e.g. with size=3, min_size=2: if 1 replica is lost IO is still processed, but if 2 replicas are lost IO gets blocked)
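A minimal illustration of the two settings being described (.rgw.buckets is just the pool under discussion; the numbers are examples):

    ceph osd pool get .rgw.buckets size        # replication factor: copies kept
    ceph osd pool get .rgw.buckets min_size    # copies that must be up before IO is accepted
    ceph osd pool set .rgw.buckets size 3
    ceph osd pool set .rgw.buckets min_size 2

With size=3 and min_size=2, losing one replica leaves the pool writable while it recovers; losing two blocks IO until enough copies are back.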
[15:59] <ingard> right
[15:59] <ingard> but it will try to restore said replication factor?
[16:02] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[16:04] <singler_> yes
[16:04] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Quit: Leaving.)
[16:06] * aldavud (~aldavud@213.55.176.136) has joined #ceph
[16:08] <ingard> singler_: when i changed the min size to 1 it changed the replicas with it
[16:08] <ingard> for that pool
[16:08] <ingard> to 1
[16:08] <ingard> root@ceph-002:~# ceph osd pool get .rgw.buckets size
[16:08] <ingard> size: 2
[16:08] <ingard> root@ceph-002:~# ceph osd pool set .rgw.buckets size 1
[16:08] <ingard> set pool 20 size to 1
[16:08] <ingard> root@ceph-002:~# ceph osd pool get .rgw.buckets size
[16:08] * lalatenduM_ (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[16:08] <ingard> size: 1
[16:09] <cronix> http://pastebin.com/LRPS4im7
[16:09] <cronix> help?
[16:09] <cronix> OSDs crashed: http://pastebin.com/WDN7MCfg
[16:09] <ingard> singler_: never mind
[16:09] <ingard> i just read the next paragraph :)
[16:09] <cronix> and they are timing out when i try to start them again: === osd.821 ===
[16:09] <cronix> failed: 'timeout 120 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.821 --keyring=/var/lib/ceph/osd/ceph-821/keyring osd crush create-or-move -- 821 3.64 host=csliveeubs-u01b03 root=default'
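If the timeouts really do come from that startup hook (the init script running "osd crush create-or-move" for each OSD) rather than from the daemons themselves, one possible workaround is to disable the automatic CRUSH update in ceph.conf so the hook is skipped and the crush location is managed by hand; a minimal sketch, assuming that is the bottleneck:

    [osd]
        # skip the automatic "ceph osd crush create-or-move" step when an OSD starts
        osd crush update on start = false

This only silences the hook; it does not explain why the OSDs crashed in the first place.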
[16:10] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[16:11] * sleinen (~Adium@130.59.94.228) has joined #ceph
[16:13] * sleinen1 (~Adium@2001:620:0:26:ccc:8f21:ae83:e9a5) has joined #ceph
[16:13] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[16:15] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[16:15] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:17] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:18] * aldavud (~aldavud@213.55.176.136) Quit (Ping timeout: 480 seconds)
[16:19] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[16:19] * sleinen (~Adium@130.59.94.228) Quit (Ping timeout: 480 seconds)
[16:20] * markbby (~Adium@168.94.245.1) has joined #ceph
[16:21] * jammcq (~jam@c-24-11-53-228.hsd1.mi.comcast.net) Quit (Quit: WeeChat 0.3.7)
[16:21] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[16:23] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:23] * markbby (~Adium@168.94.245.4) has joined #ceph
[16:29] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[16:30] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[16:33] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[16:33] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) has joined #ceph
[16:33] <cronix> i've opened an issue: 8584
[16:34] <cronix> i've opened an issue 8584
[16:34] <kraken> cronix might be talking about http://tracker.ceph.com/issues/8584 [OSD Crashing on firefly - Timeouts on starting again]
[16:35] * jammcq (~jam@c-24-11-53-228.hsd1.mi.comcast.net) has joined #ceph
[16:39] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[16:41] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[16:42] * drankis (~drankis__@89.111.13.198) has joined #ceph
[16:44] * xarses (~andreww@12.164.168.117) has joined #ceph
[16:47] * rpowell (~rpowell@128.135.219.215) has joined #ceph
[16:49] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) has joined #ceph
[16:51] * scuttlemonkey (~scuttlemo@nat-pool-bos-t.redhat.com) has joined #ceph
[16:51] * ChanServ sets mode +o scuttlemonkey
[16:53] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[16:57] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:58] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[16:58] * rpowell (~rpowell@128.135.219.215) has left #ceph
[17:00] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[17:00] * bandrus1 (~oddo@adsl-71-137-194-181.dsl.scrm01.pacbell.net) has joined #ceph
[17:01] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[17:01] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:01] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[17:02] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[17:03] * newbie|2 (~kvirc@117.151.45.47) has joined #ceph
[17:03] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[17:04] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:04] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[17:04] * bandrus (~oddo@adsl-71-137-199-24.dsl.scrm01.pacbell.net) Quit (Ping timeout: 480 seconds)
[17:04] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[17:04] * jammcq (~jam@c-24-11-53-228.hsd1.mi.comcast.net) Quit (Quit: WeeChat 0.3.7)
[17:06] * Hell_Fire (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[17:06] * newbie|2 (~kvirc@117.151.45.47) Quit ()
[17:07] * newbie|2 (~kvirc@117.151.45.47) has joined #ceph
[17:07] <lincolnb> can anyone point me at the appropriate documentation to enable multimds? i would like to start benchmarking/breaking it in our testbed environment :)
[17:08] * berant (~blemmenes@gw01.ussignal.com) Quit (Ping timeout: 480 seconds)
[17:08] * bandrus1 (~oddo@adsl-71-137-194-181.dsl.scrm01.pacbell.net) Quit (Ping timeout: 480 seconds)
[17:08] * vbellur1 (~vijay@122.166.172.253) has joined #ceph
[17:08] * bandrus (~oddo@adsl-71-137-194-253.dsl.scrm01.pacbell.net) has joined #ceph
[17:09] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:10] * huangjun (~kvirc@117.151.45.47) Quit (Ping timeout: 480 seconds)
[17:10] * xarses_ (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[17:11] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[17:12] * vbellur (~vijay@122.166.172.253) Quit (Ping timeout: 480 seconds)
[17:13] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[17:13] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[17:14] * Infitialis (~infitiali@194.30.182.18) Quit ()
[17:14] <scuttlemonkey> wido: ping?
[17:18] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:20] <devicenull> hmm, is there a way to make radosgw ignore duplicate slashes in object names?
[17:20] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:21] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[17:21] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[17:23] * yuriw (~Adium@121.243.198.77.rev.sfr.net) has joined #ceph
[17:26] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[17:29] * sarob (~sarob@2601:9:1d00:c7f:3cd6:ef42:2b75:d159) has joined #ceph
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:32] * haomaiwang (~haomaiwan@www27339ue.sakura.ne.jp) Quit (Remote host closed the connection)
[17:33] * haomaiwang (~haomaiwan@www27339ue.sakura.ne.jp) has joined #ceph
[17:33] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) has joined #ceph
[17:33] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) Quit (Ping timeout: 480 seconds)
[17:35] * newbie|2 (~kvirc@117.151.45.47) Quit (Ping timeout: 480 seconds)
[17:36] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[17:37] * sarob (~sarob@2601:9:1d00:c7f:3cd6:ef42:2b75:d159) Quit (Ping timeout: 480 seconds)
[17:38] * joef (~Adium@2620:79:0:131:9c2c:b310:60ab:bfd2) has joined #ceph
[17:40] * thomnico (~thomnico@2a01:e35:8b41:120:213f:c6e2:f791:8d5e) Quit (Quit: Ex-Chat)
[17:41] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[17:41] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[17:42] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) has joined #ceph
[17:43] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Quit: Leaving)
[17:49] * rwheeler (~rwheeler@173.48.207.57) Quit (Quit: Leaving)
[17:52] * spekzor (spekzor@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[17:53] * ajazdzewski (~quassel@lpz-66.sprd.net) Quit (Remote host closed the connection)
[17:54] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:54] * markbby (~Adium@168.94.245.2) has joined #ceph
[17:56] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Quit: Leaving)
[17:57] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[18:00] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[18:04] * thomnico (~thomnico@2a01:e35:8b41:120:213f:c6e2:f791:8d5e) has joined #ceph
[18:07] * lalatenduM_ (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[18:08] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[18:08] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[18:11] * sleinen1 (~Adium@2001:620:0:26:ccc:8f21:ae83:e9a5) Quit (Quit: Leaving.)
[18:14] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[18:14] * haomaiwang (~haomaiwan@www27339ue.sakura.ne.jp) Quit (Remote host closed the connection)
[18:14] * haomaiwang (~haomaiwan@www27339ue.sakura.ne.jp) has joined #ceph
[18:18] * drankis_ (~drankis__@37.148.173.239) has joined #ceph
[18:23] * nwat (~textual@eduroam-250-158.ucsc.edu) has joined #ceph
[18:24] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[18:25] * jrcresawn (~jrcresawn@150.135.211.226) has joined #ceph
[18:29] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[18:29] * jrcresawn (~jrcresawn@150.135.211.226) Quit (Remote host closed the connection)
[18:29] * jrcresawn (~jrcresawn@150.135.211.226) has joined #ceph
[18:36] * joshd1 (~jdurgin@2602:306:c5db:310:b05d:b940:7103:d2d7) has joined #ceph
[18:37] * sarob_ (~sarob@2601:9:1d00:c7f:f1ff:4219:3bdf:400c) has joined #ceph
[18:37] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:37] * Cube (~Cube@66.87.131.128) has joined #ceph
[18:37] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[18:39] * sleinen1 (~Adium@2001:620:0:26:40cb:ff89:bb5e:4428) has joined #ceph
[18:39] * sarob (~sarob@2601:9:1d00:c7f:9913:3d2e:56d5:c4cb) has joined #ceph
[18:40] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:40] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[18:44] * jeremy___s (~jeremy__s@AStDenis-552-1-172-70.w80-8.abo.wanadoo.fr) has joined #ceph
[18:44] * drankis_ (~drankis__@37.148.173.239) Quit (Ping timeout: 480 seconds)
[18:45] * sarob_ (~sarob@2601:9:1d00:c7f:f1ff:4219:3bdf:400c) Quit (Ping timeout: 480 seconds)
[18:45] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:46] * jeremy__1s (~jeremy__s@AStDenis-552-1-167-139.w80-8.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[18:46] * yuriw (~Adium@121.243.198.77.rev.sfr.net) Quit (Quit: Leaving.)
[18:47] * diegows (~diegows@200.68.116.185) has joined #ceph
[18:47] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:50] * wschulze (~wschulze@80.149.32.4) has joined #ceph
[18:52] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[18:53] * drankis_ (~drankis__@89.111.13.198) has joined #ceph
[18:55] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[18:55] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[18:57] * nwat (~textual@eduroam-250-158.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[18:58] <alphe> umount hangs when i try to unmount a cloned rbd image, does anyone know what's happening?
[18:58] * markbby (~Adium@168.94.245.2) has joined #ceph
[19:02] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:03] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[19:18] * markbby (~Adium@168.94.245.3) has joined #ceph
[19:26] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[19:31] * tracphil (~tracphil@130.14.71.217) Quit (Quit: leaving)
[19:32] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[19:34] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[19:36] * zerick (~eocrospom@190.114.249.148) has joined #ceph
[19:37] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[19:42] * Cube1 (~Cube@66-87-65-52.pools.spcsdns.net) has joined #ceph
[19:42] * Cube (~Cube@66.87.131.128) Quit (Read error: Connection reset by peer)
[19:42] * alram (~alram@38.122.20.226) has joined #ceph
[19:43] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[19:46] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[19:46] <alphe> umount hangs when i try to unmount a cloned rbd image, does anyone know what's happening?
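A few generic kernel-rbd diagnostics that might narrow down a hung umount like this (the device path is an assumption; debugfs must be mounted for the last one):

    rbd showmapped                        # confirm which /dev/rbdX the clone is mapped to
    dmesg | tail -n 50                    # look for libceph/rbd errors around the hang
    cat /sys/kernel/debug/ceph/*/osdc     # outstanding requests from the kernel client

If osdc shows requests stuck against one OSD, the hang is on the cluster side rather than in umount itself.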
[19:48] * sputnik13 (~sputnik13@207.8.121.241) Quit ()
[19:49] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[19:56] * KaZeR (~kazer@64.201.252.132) Quit (Remote host closed the connection)
[19:56] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[19:58] * tnt is now known as tnt_
[19:59] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[19:59] * wschulze (~wschulze@80.149.32.4) Quit (Read error: Operation timed out)
[20:04] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[20:05] * markbby (~Adium@168.94.245.3) has joined #ceph
[20:05] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[20:06] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[20:07] * Pedras (~Adium@216.207.42.132) has joined #ceph
[20:07] * scuttlemonkey (~scuttlemo@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:13] * joef (~Adium@2620:79:0:131:9c2c:b310:60ab:bfd2) Quit (Read error: Connection reset by peer)
[20:14] * joef (~Adium@138-72-131-163.pixar.com) has joined #ceph
[20:14] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[20:14] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[20:16] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[20:17] * sarob (~sarob@2601:9:1d00:c7f:9913:3d2e:56d5:c4cb) Quit (Remote host closed the connection)
[20:18] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[20:20] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[20:21] * KaZeR (~kazer@64.201.252.132) Quit (Ping timeout: 480 seconds)
[20:22] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[20:25] * rendar (~I@host56-182-dynamic.37-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:25] * rweeks (~goodeats@c-24-6-118-113.hsd1.ca.comcast.net) has joined #ceph
[20:26] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[20:31] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[20:31] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[20:32] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[20:35] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[20:37] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[20:41] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[20:47] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) has joined #ceph
[20:48] * b0e (~aledermue@x2f31415.dyn.telefonica.de) has joined #ceph
[20:48] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[20:50] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Remote host closed the connection)
[20:54] * b0e (~aledermue@x2f31415.dyn.telefonica.de) Quit (Quit: Leaving.)
[20:59] * thb (~me@port-93006.pppoe.wtnet.de) has joined #ceph
[20:59] * thb is now known as Guest13310
[21:00] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[21:00] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:00] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[21:00] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[21:02] * bandrus (~oddo@adsl-71-137-194-253.dsl.scrm01.pacbell.net) Quit (Quit: Leaving.)
[21:03] * bandrus (~Adium@adsl-71-137-194-253.dsl.scrm01.pacbell.net) has joined #ceph
[21:04] * sarob_ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[21:04] * markbby (~Adium@168.94.245.3) has joined #ceph
[21:06] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[21:06] * sarob_ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit ()
[21:06] * sarob_ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[21:08] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[21:09] * bandrus (~Adium@adsl-71-137-194-253.dsl.scrm01.pacbell.net) Quit (Read error: Connection reset by peer)
[21:10] * bandrus (~Adium@adsl-71-137-194-253.dsl.scrm01.pacbell.net) has joined #ceph
[21:11] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[21:13] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:14] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[21:16] * rweeks (~goodeats@c-24-6-118-113.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[21:16] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:16] * wschulze (~wschulze@80.149.32.4) has joined #ceph
[21:20] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[21:23] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[21:27] * sarob_ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[21:27] * sarob (~sarob@2601:9:1d00:c7f:59cf:659e:a5fe:c818) has joined #ceph
[21:27] * wschulze (~wschulze@80.149.32.4) Quit (Quit: Leaving.)
[21:29] * aldavud (~aldavud@213.55.184.176) has joined #ceph
[21:31] * bandrus (~Adium@adsl-71-137-194-253.dsl.scrm01.pacbell.net) Quit (Quit: Leaving.)
[21:34] * aldavud_ (~aldavud@213.55.184.176) has joined #ceph
[21:35] * sarob (~sarob@2601:9:1d00:c7f:59cf:659e:a5fe:c818) Quit (Ping timeout: 480 seconds)
[21:35] * Sysadmin88 (~IceChat77@94.4.22.173) has joined #ceph
[21:47] * xdeller_ (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Quit: Leaving)
[21:49] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[21:50] * leseb (~leseb@185.21.174.206) has joined #ceph
[21:52] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) has joined #ceph
[21:54] * sarob (~sarob@2601:9:1d00:c7f:5523:d176:7a0a:7b19) has joined #ceph
[21:55] * scuttlemonkey (~scuttlemo@63.138.96.2) has joined #ceph
[21:55] * ChanServ sets mode +o scuttlemonkey
[21:58] * sommarnatt (~sommarnat@c83-251-204-51.bredband.comhem.se) has joined #ceph
[21:58] <sommarnatt> Hi guys! Is there any way to turn off a running deep scrub?
[22:02] * sarob (~sarob@2601:9:1d00:c7f:5523:d176:7a0a:7b19) Quit (Ping timeout: 480 seconds)
[22:07] * valeech (~valeech@173-163-204-166-Richmond.hfc.comcastbusiness.net) has joined #ceph
[22:08] * bandrus (~oddo@adsl-71-137-194-253.dsl.scrm01.pacbell.net) has joined #ceph
[22:16] * valeech (~valeech@173-163-204-166-Richmond.hfc.comcastbusiness.net) Quit (Quit: valeech)
[22:16] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:16] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[22:20] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[22:25] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[22:27] * bandrus1 (~Adium@adsl-71-137-194-253.dsl.scrm01.pacbell.net) has joined #ceph
[22:31] * sarob (~sarob@2601:9:1d00:c7f:15fa:cfcf:bd01:27cb) has joined #ceph
[22:31] * Guest13310 is now known as thb
[22:33] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[22:33] * bandrus1 (~Adium@adsl-71-137-194-253.dsl.scrm01.pacbell.net) Quit (Read error: Connection reset by peer)
[22:33] * bandrus1 (~Adium@adsl-71-137-194-253.dsl.scrm01.pacbell.net) has joined #ceph
[22:35] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[22:36] * tracphil (~tracphil@130.14.71.217) Quit (Quit: leaving)
[22:37] * mtanski (~mtanski@65.107.210.227) has joined #ceph
[22:38] * aldavud (~aldavud@213.55.184.176) Quit (Read error: Operation timed out)
[22:39] * aldavud_ (~aldavud@213.55.184.176) Quit (Ping timeout: 480 seconds)
[22:39] * sommarnatt (~sommarnat@c83-251-204-51.bredband.comhem.se) Quit (Quit: Leaving...)
[22:39] * sarob (~sarob@2601:9:1d00:c7f:15fa:cfcf:bd01:27cb) Quit (Ping timeout: 480 seconds)
[22:39] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) has joined #ceph
[22:41] * markbby (~Adium@168.94.245.1) has joined #ceph
[22:42] * diegows (~diegows@200.68.116.185) Quit (Ping timeout: 480 seconds)
[22:43] * markbby (~Adium@168.94.245.1) Quit ()
[22:45] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) Quit (Quit: Leaving)
[22:46] * mtanski (~mtanski@65.107.210.227) Quit (Quit: mtanski)
[22:48] * thomnico (~thomnico@2a01:e35:8b41:120:213f:c6e2:f791:8d5e) Quit (Quit: Ex-Chat)
[22:49] * markbby (~Adium@168.94.245.1) has joined #ceph
[22:50] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:53] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:56] * rmoe (~quassel@12.164.168.117) has joined #ceph
[23:01] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[23:01] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[23:03] * sarob (~sarob@2601:9:1d00:c7f:38b5:c506:8d33:e963) has joined #ceph
[23:04] * kfei (~root@61-227-11-182.dynamic.hinet.net) Quit (Read error: Connection reset by peer)
[23:05] <davidzlap> sommarnatt: ceph osd set nodeep-scrub
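To expand slightly on davidzlap's answer: these are the standard cluster-wide flags, and as far as I know a deep scrub already running on a PG finishes, but no new ones are scheduled while the flag is set:

    ceph osd set nodeep-scrub      # stop scheduling new deep scrubs
    ceph osd set noscrub           # optionally stop regular scrubs as well
    ceph osd unset nodeep-scrub    # re-enable once the cluster has settled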
[23:08] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:08] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[23:10] * allsystemsarego (~allsystem@86.121.2.97) Quit (Quit: Leaving)
[23:11] * sarob (~sarob@2601:9:1d00:c7f:38b5:c506:8d33:e963) Quit (Ping timeout: 480 seconds)
[23:17] * kfei (~root@114-27-49-58.dynamic.hinet.net) has joined #ceph
[23:17] * sarob (~sarob@2601:9:1d00:c7f:c4ff:2191:e13e:bf3d) has joined #ceph
[23:19] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:19] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[23:19] * bandrus (~oddo@adsl-71-137-194-253.dsl.scrm01.pacbell.net) Quit (Quit: Leaving.)
[23:24] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[23:29] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[23:29] * ircolle1 (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[23:33] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:33] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[23:34] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[23:34] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[23:34] * xarses_ (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[23:36] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[23:36] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[23:37] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[23:37] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[23:40] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[23:40] * ircolle1 (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[23:43] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[23:43] * xarses (~andreww@12.164.168.117) has joined #ceph
[23:47] * mtanski (~mtanski@65.107.210.227) has joined #ceph
[23:49] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[23:49] * mtanski (~mtanski@65.107.210.227) Quit ()
[23:50] <seapasulli> is there a reason why the troubleshooting OSDs and the start/stop OSDs sections differ? i.e. 'ceph osd start osd.0' vs 'start ceph-osd id=0'
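The two command forms seapasulli quotes come from different init systems rather than from ceph itself; roughly, assuming standard packages of that era (adjust for your distribution):

    # Ubuntu packages with upstart
    sudo start ceph-osd id=0
    sudo stop ceph-osd id=0
    # sysvinit-style packages (e.g. RHEL/CentOS, older Debian)
    sudo /etc/init.d/ceph start osd.0
    sudo /etc/init.d/ceph stop osd.0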
[23:50] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[23:53] * zerick (~eocrospom@190.114.249.148) Quit (Read error: Connection reset by peer)
[23:56] <lupu> Does anyone know if it's possible to have multiple ssd cache pool tiers on a single data pool?
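For context, attaching a cache tier on firefly looks roughly like this (pool names are placeholders); since set-overlay points a base pool at exactly one tier, the usual layout is one cache pool per data pool rather than several:

    ceph osd tier add cold-pool hot-pool
    ceph osd tier cache-mode hot-pool writeback
    ceph osd tier set-overlay cold-pool hot-pool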

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.