#ceph IRC Log

IRC Log for 2014-06-04

Timestamps are in GMT/BST.

[0:00] * primechu_ (~primechuc@69.170.148.179) has joined #ceph
[0:00] * primechuck (~primechuc@69.170.148.179) Quit (Read error: Connection reset by peer)
[0:01] * asadpanda (~asadpanda@2001:470:c09d:24:5918:4e2e:e0ac:1572) Quit (Ping timeout: 480 seconds)
[0:02] * asadpanda (~asadpanda@2001:470:c09d:24:91d:a402:6e76:3ab6) has joined #ceph
[0:04] * sarob (~sarob@129.210.115.7) Quit (Remote host closed the connection)
[0:04] * sarob (~sarob@129.210.115.7) has joined #ceph
[0:04] * primechu_ (~primechuc@69.170.148.179) Quit (Remote host closed the connection)
[0:05] * paul_mezo (~pkilar@38.122.241.27) Quit (Quit: Leaving.)
[0:05] * sarob_ (~sarob@129.210.115.7) has joined #ceph
[0:05] * fdmanana (~fdmanana@bl13-158-240.dsl.telepac.pt) Quit (Quit: Leaving)
[0:07] * sarob (~sarob@129.210.115.7) Quit (Read error: Operation timed out)
[0:08] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[0:08] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:12] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[0:13] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[0:13] * sarob_ (~sarob@129.210.115.7) Quit (Ping timeout: 480 seconds)
[0:18] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:21] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[0:22] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:24] * gregsfortytwo (~Adium@129.210.115.6) has joined #ceph
[0:25] * ikrstic (~ikrstic@77-46-245-216.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[0:27] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[0:29] * sarob_ (~sarob@129.210.115.7) has joined #ceph
[0:30] <lightspeed> reading the docs on cache pools, it sounds like you can take an existing pool that contains data, and add a new faster pool as a writeback cache pool in front of it
[0:30] <lightspeed> what if I have an existing pool containing data, and I want that to become the writeback cache-pool for a new empty slower backing pool - does that work too?
[0:30] <lightspeed> ie if I associate the two, will the existing data in the faster pool be preserved?
[0:33] <lightspeed> in my particular example I currently have 3 1TB SSD OSDs, and am wondering if I can effectively expand the capacity by adding an erasure-coded pool behind them using 3 more OSDs on larger rotational disks
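For reference, the firefly cache-tier wiring lightspeed is describing looks roughly like this; a minimal sketch, assuming a fast pool named ssd-pool and a slower backing pool named cold-pool (whether data already sitting in the fast pool survives being attached as a cache tier is exactly the open question above):

    # attach ssd-pool as a writeback cache tier in front of cold-pool (sketch)
    ceph osd tier add cold-pool ssd-pool
    ceph osd tier cache-mode ssd-pool writeback
    # route client traffic for cold-pool through the cache tier
    ceph osd tier set-overlay cold-pool ssd-pool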
[0:33] <sherry> Hi guys, there are a couple of issues I've faced: 1) Ceph automatically changes /etc/apt/sources.list.d/ceph.list! No matter what I set (emperor), it changes it to firefly. 2) On one of my hosts, /etc/ceph is not created, so I have to create /etc/ceph manually and push ceph.conf to it! 3) PGs are stuck inactive, and it seems to take forever to create them! 4) "ceph osd dump | grep size" shows size=3, while I set min_size and max_size to
[0:33] <sherry> 2! I also set "osd pool default size = 2" in ceph.conf, but that did not help either! Any ideas regarding any of these issues are highly appreciated.
[0:37] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:38] <iggy> sherry: most people have reported also having to restart the OSDs when changing pool default size
[0:39] <iggy> and I think you might have to change the default pool's size
[0:39] <sherry> iggy: thanks for your reply, I'll try that soon. But what is wrong with the default pool size?
[0:40] <sherry> I mean, shouldn't that be 2?
[0:40] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[0:40] <iggy> it changed recently
[0:40] * yanzheng (~zhyan@134.134.137.71) has joined #ceph
[0:40] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[0:41] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:45] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[0:45] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[0:45] <sherry> iggy: restarting all of my OSDs at once or one by one did not make any difference!
[0:46] * bandrus (~Adium@adsl-75-5-249-45.dsl.scrm01.sbcglobal.net) Quit (Quit: Leaving.)
[0:49] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:49] * jharley (~jharley@23-91-144-126.cpe.pppoe.ca) Quit (Quit: jharley)
[0:53] * gregsfortytwo (~Adium@129.210.115.6) Quit (Quit: Leaving.)
[0:53] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Quit: Leaving.)
[0:55] * gregsfortytwo (~Adium@129.210.115.6) has joined #ceph
[0:56] * bandrus (~Adium@66-87-119-214.pools.spcsdns.net) has joined #ceph
[0:56] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:58] * gregsfortytwo (~Adium@129.210.115.6) Quit (Read error: Connection reset by peer)
[0:59] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[0:59] * `jpg (~josephgla@ppp255-151.static.internode.on.net) has joined #ceph
[1:00] * asadpand- (~asadpanda@2001:470:c09d:24:68de:137c:48b8:e8e) has joined #ceph
[1:01] * asadpand- (~asadpanda@2001:470:c09d:24:68de:137c:48b8:e8e) Quit ()
[1:04] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:05] * bandrus (~Adium@66-87-119-214.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[1:08] * nwat (~textual@eduroam-227-115.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:09] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:12] * bandrus (~Adium@66-87-119-74.pools.spcsdns.net) has joined #ceph
[1:13] * aldavud (~aldavud@213.55.176.168) has joined #ceph
[1:14] * bandrus (~Adium@66-87-119-74.pools.spcsdns.net) Quit ()
[1:14] * bandrus (~Adium@66.87.119.74) has joined #ceph
[1:17] <Pauline_> I don't think setting the pool size with "osd pool default size = 2" helps for already existing pools.
[1:18] <Pauline_> try "ceph osd pool set <poolname> size 2"
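A minimal sketch of that suggestion, assuming a pool named data; the size reported by "ceph osd dump | grep size" should change without touching the cluster default:

    # change the replica count on an existing pool (sketch)
    ceph osd pool set data size 2
    # verify
    ceph osd dump | grep size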
[1:23] * aldavud (~aldavud@213.55.176.168) Quit (Read error: Connection reset by peer)
[1:23] * cookednoodles_ (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[1:24] * ira (~ira@0001cb91.user.oftc.net) Quit ()
[1:25] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[1:25] * oms101 (~oms101@p20030057EA009400EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:28] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:28] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit ()
[1:29] * n-st (~n-st@0001c80a.user.oftc.net) Quit (Remote host closed the connection)
[1:31] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[1:32] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[1:33] * oms101 (~oms101@p20030057EA001A00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:36] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[1:41] * dcurtiss (~dcurtiss@130.164.62.72) has joined #ceph
[1:45] * BlackFX (~BlackFX@208.72.139.54) has joined #ceph
[1:45] <BlackFX> hi - has anyone got calamari running ?
[1:46] <joef> I tried for a bit but the install docs are lacking
[1:46] <BlackFX> I kinda do - but can't get past this : https://www.dropbox.com/s/mokbxsfy0ehzqvb/Screenshot%202014-06-03%2016.46.04.png
[1:46] <joef> Does it expect to be installed on a monitor?
[1:47] <BlackFX> I dunno
[1:48] * sarob_ (~sarob@129.210.115.7) Quit (Read error: Connection reset by peer)
[1:48] * sarob (~sarob@129.210.115.7) has joined #ceph
[1:49] <dcurtiss> I've been struggling today trying to get ceph up and running (first time) with swift API access (on a set of ubuntu 12.04 VMs).
[1:49] <dcurtiss> I followed the directions for quick-start-preflight and quick-ceph-deploy, and got it running successfully.
[1:49] <dcurtiss> Then I followed the install-ceph-gateway directions, and the configuring directions for it, and I successfully verified that the root of my web server shows the ListAllMyBucketsResult.
[1:49] <dcurtiss> Finally, I followed the admin guide to create a user+subuser, and set a swift key, but I am unable to authenticate, either with the swift commandline or directly using curl.
[1:49] * rweeks (~goodeats@192.169.20.75.static.etheric.net) has joined #ceph
[1:49] <dcurtiss> I always get 403 errors.
[1:50] <rweeks> hey you guys
[1:51] <dmick> BlackFX: have you set up the salt minions and diamond agents on the cluster hosts?
[1:51] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[1:52] <dmick> https://github.com/ceph/calamari#connecting-ceph-servers-to-calamari
[1:53] * Cube (~Cube@66.87.67.18) Quit (Read error: Connection reset by peer)
[1:53] <dcurtiss> output of "radosgw-admin user info --uid=hive_cache": http://pastebin.com/vwwbyd4c
[1:54] * Cube (~Cube@66-87-131-31.pools.spcsdns.net) has joined #ceph
[1:54] <dcurtiss> And here's my curl invocation: http://pastebin.com/EfQ8nw8a
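For comparison, a swift-style auth request against radosgw usually looks like the sketch below, assuming the default /auth entry point and the hive_cache:swift subuser from the pastebin above (hostname and key are placeholders); a 204 response carries X-Auth-Token and X-Storage-Url headers, while a 403 means the subuser/key pair was rejected:

    curl -i http://gateway.example.com/auth/v1.0 \
      -H "X-Auth-User: hive_cache:swift" \
      -H "X-Auth-Key: <swift-secret-key>"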
[1:54] <scuttlemonkey> rweeks: wuddup d00d
[1:54] <rweeks> scuttlemonkey: I'm working at HGST as of last week
[1:54] <scuttlemonkey> ahh, nice
[1:54] <rweeks> you know those ethernet connected drives?
[1:54] <rweeks> that's the group I joined
[1:54] <scuttlemonkey> yeah, the HGST one is the one I'm excited to see
[1:54] <scuttlemonkey> cool
[1:55] <rweeks> so I may be here asking lots of questions around that.
[1:55] <scuttlemonkey> hehe
[1:55] <rweeks> actually going to be trying to submit at least one blueprint for this next developer summit
[1:55] <scuttlemonkey> neat
[1:55] <BlackFX> dmick - minions are installed, but haven't done anything about diamond - let's see what I can figure out
[1:55] <scuttlemonkey> keep an eye out for the videos coming out of Ceph Day Boston
[1:55] <rweeks> our marketing guy Mario will be there.
[1:56] <rweeks> I didn't get here in time to sign up to go. :/
[1:56] <scuttlemonkey> we'll be videotaping all the sessions...and Mario Blandini is doing one w/ a demo of Ceph on that drive
[1:56] <rweeks> yep! :)
[1:56] <rweeks> Mario is like... 3 cubes that way *points*
[1:56] <dmick> BlackFX: is salt-minion running ok? From the calamari server try "salt '*' test.ping"
[1:56] <scuttlemonkey> cool
[1:57] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:58] <dmick> rweeks: but where is Luigi?
[1:58] <rweeks> I don't think we've hired him yet
[1:58] <BlackFX> hmm - okay so i tried state.sls diamond and it failed.
[1:59] <BlackFX> "The following packages failed to install/update: diamond."
[1:59] <BlackFX> where are those debs kept ?
[1:59] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) Quit (Read error: Operation timed out)
[2:00] * zack_dolby (~textual@e0109-114-22-4-235.uqwimax.jp) has joined #ceph
[2:01] * rturk is now known as rturk|afk
[2:01] * BlackFX (~BlackFX@208.72.139.54) Quit ()
[2:02] * BlackFX (~BlackFX@208.72.139.54) has joined #ceph
[2:02] <BlackFX> damnit - IRC disconnect
[2:05] * ircolle (~Adium@129.210.115.7) Quit (Quit: Leaving.)
[2:06] * rweeks (~goodeats@192.169.20.75.static.etheric.net) Quit (Quit: Leaving)
[2:06] * eford (~wee@p3EE04DBE.dip0.t-ipconnect.de) has joined #ceph
[2:08] <eford> hi there, I'm pretty interested in the follow-up of this stuff: http://ceph.com/community/xenserver-support-for-rbd/ ... is it possible yet to create xenserver clusters with ceph storage?
[2:09] <dcurtiss> Gotta go. I'll ask my question again tomorrow during business hours. :)
[2:13] <BlackFX> okay - Diamond installed and state.sls diamond returns success
[2:15] <darkfader> eford: can you /msg me your mail, i have some info on that but can't dig out the mail exchange right now
[2:16] * sarob_ (~sarob@mobile-166-137-177-199.mycingular.net) has joined #ceph
[2:17] * sarob (~sarob@129.210.115.7) Quit (Ping timeout: 480 seconds)
[2:18] * rmoe (~quassel@12.164.168.117) Quit (Read error: Operation timed out)
[2:18] * BlackFX (~BlackFX@208.72.139.54) Quit ()
[2:22] * yanzheng (~zhyan@134.134.137.71) Quit (Remote host closed the connection)
[2:29] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[2:33] * sarob_ (~sarob@mobile-166-137-177-199.mycingular.net) Quit (Remote host closed the connection)
[2:34] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit ()
[2:40] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[2:42] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[2:42] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[2:43] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[2:52] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:54] * bandrus (~Adium@66.87.119.74) Quit (Quit: Leaving.)
[2:54] * rturk|afk is now known as rturk
[2:56] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:59] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:02] * yanzheng (~zhyan@134.134.139.70) has joined #ceph
[3:02] <classicsnail> hrm, can't seem to start the mds on a cluster that was upgraded from 0.78 to firefly
[3:03] <classicsnail> everything else is fine, but for the life of me, cannot figure out what's happening
[3:03] <classicsnail> 2014-06-04 01:03:05.036882 7ffadc018700 10 -- 10.60.8.2:6800/82928 >> 10.60.8.2:6789/0 pipe(0x2768500 sd=7 :39273 s=2 pgs=11062 cs=1 l=1 c=0x26d9080).writer: state = open policy.server=0
[3:03] <classicsnail> is the log line I get over and over
[3:03] <classicsnail> ceph health says the cluster is fine (other than too few pgs on pool data)
[3:04] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[3:05] * rturk is now known as rturk|afk
[3:06] <yanzheng> classicsnail, do you have mds log
[3:06] <classicsnail> I do
[3:06] <classicsnail> hang on, pastebinning it
[3:08] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Quit: Leaving.)
[3:09] <classicsnail> http://pastebin.com/tnNYkWn0
[3:09] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:11] <yanzheng> set debug_mds 10 and try again
[3:12] <classicsnail> I started it using /usr/bin/ceph-mds -i cbr-a-ssg1 --pid-file /var/run/ceph/mds.cbr-a-ssg1.pid -c /etc/ceph/ceph.conf --cluster ceph --debug_ms=10 -f
[3:12] <classicsnail> oh, debug_mds
[3:12] <classicsnail> let me do it
[3:12] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[3:12] <classicsnail> debug_ms, heh, typo, whoops
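The corrected invocation differs only in the flag name, i.e.:

    /usr/bin/ceph-mds -i cbr-a-ssg1 --pid-file /var/run/ceph/mds.cbr-a-ssg1.pid \
        -c /etc/ceph/ceph.conf --cluster ceph --debug_mds=10 -f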
[3:13] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[3:13] * `jpg (~josephgla@ppp255-151.static.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[3:14] <classicsnail> http://pastebin.com/RugyHQv8
[3:16] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:16] * asadpanda (~asadpanda@2001:470:c09d:24:91d:a402:6e76:3ab6) Quit (Remote host closed the connection)
[3:17] <yanzheng> output of "ceph -s" ?
[3:17] <classicsnail> cluster 33828e41-29b1-4bfe-be83-b8037460819e
[3:17] <classicsnail> health HEALTH_WARN pool data has too few pgs; mds cbr-a-ssg1 is laggy
[3:17] <classicsnail> monmap e2: 1 mons at {cbr-a-ssg1=10.60.8.2:6789/0}, election epoch 1, quorum 0 cbr-a-ssg1
[3:17] <classicsnail> mdsmap e555: 1/1/1 up {0=cbr-a-ssg1=up:active(laggy or crashed)}
[3:17] <classicsnail> osdmap e16923: 144 osds: 144 up, 144 in
[3:17] <classicsnail> pgmap v5939414: 13056 pgs, 102 pools, 37807 GB data, 42909 kobjects
[3:17] <classicsnail> 76419 GB used, 316 TB / 391 TB avail
[3:18] <classicsnail> 13055 active+clean
[3:18] <classicsnail> 1 active+clean+scrubbing
[3:18] <classicsnail> health detail reports
[3:18] <classicsnail> HEALTH_WARN pool data has too few pgs; mds cbr-a-ssg1 is laggy
[3:18] <classicsnail> pool data objects per pg (136920) is more than 40.6894 times cluster average (3365)
[3:18] <classicsnail> mds.cbr-a-ssg1 at 10.60.8.2:6803/15825 is laggy/unresponsive
[3:18] <classicsnail> the interesting part is it thinks the mds is on port 6803, which is presently an osd
[3:19] * KaZeR (~kazer@64.201.252.132) Quit (Ping timeout: 480 seconds)
[3:24] <yanzheng> try using a different mds name, such as cbr-a-ssg2
[3:25] <classicsnail> I did try ceph-deploying another mds, to no avail, I'll try starting it with a different name
[3:26] <yanzheng> looks like a monitor issue
[3:27] <yanzheng> the mds didn't get any monitor's response
[3:27] <classicsnail> at least with the default configuration, there's nothing in the monitor log, I'll try it with a higher debugging level
[3:27] * bandrus (~Adium@111.sub-75-247-136.myvzw.com) has joined #ceph
[3:28] * KaZeR (~kazer@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[3:29] <yanzheng> try restarting the monitor if possible
[3:29] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[3:32] <classicsnail> from the monitor log
[3:32] <classicsnail> 2014-06-04 01:32:15.517160 7fe886e7d700 10 mon.cbr-a-ssg1@0(leader).mds e555 preprocess_query mdsbeacon(550314/cbr-a-ssg1 up:boot seq 13 v555) v2 from mds.? 10.60.8.2:6800/97801
[3:32] <classicsnail> 2014-06-04 01:32:15.517179 7fe886e7d700 7 mon.cbr-a-ssg1@0(leader).mds e555 mdsmap DOWN flag set, ignoring mds mds.? 10.60.8.2:6800/97801 beacon
[3:32] <classicsnail> 2014-06-04 01:32:16.935837 7fe88787e700 10 mon.cbr-a-ssg1@0(leader).mds e555 e555: 1/1/1 up {0=cbr-a-ssg1=up:active(laggy or crashed)}
[3:33] <classicsnail> 6800 is the right port at least as far as the mds itself is concerned
[3:33] <classicsnail> the health detail report still thinks it's on port 6803
[3:34] <classicsnail> which is an osd
[3:34] <classicsnail> oh mutter mutter, sorry about that, it appears the mds cluster had been set down
[3:35] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:35] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[3:35] <yanzheng> 'mdsmap DOWN flag set' looks suspicious
[3:36] <classicsnail> that's what made me go check the mds dump, which had it marked down
[3:36] <classicsnail> mark it up, the mds joins, and starts doing replay
[3:37] <yanzheng> ;)
[3:38] <classicsnail> thank you for your assistance all the same :)
[3:38] <classicsnail> essentially I had forgotten to plug it in at the power point ;)
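For the record, checking and clearing that flag looks roughly like this; a sketch, assuming the firefly-era mds commands:

    # dump the mdsmap and look for the DOWN flag
    ceph mds dump
    # mark the mds cluster up again so beacons are accepted
    ceph mds cluster_up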
[3:41] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:46] * bandrus (~Adium@111.sub-75-247-136.myvzw.com) Quit (Quit: Leaving.)
[3:46] * sm1ly_ (~sm1ly@ppp109-252-169-241.pppoe.spdop.ru) Quit (Ping timeout: 480 seconds)
[3:47] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[3:53] * zack_dol_ (~textual@e0109-114-22-4-235.uqwimax.jp) has joined #ceph
[3:53] * zack_dolby (~textual@e0109-114-22-4-235.uqwimax.jp) Quit (Read error: Connection reset by peer)
[3:54] * rturk|afk is now known as rturk
[3:54] * rturk is now known as rturk|afk
[4:02] <kfei> Why does the pgmap version number increase all the time? Even when there is neither a new object added nor an existing object edited
[4:02] * bandrus (~Adium@111.sub-75-247-136.myvzw.com) has joined #ceph
[4:03] <kfei> `ceph -w` shows me the cluster currently has no operations, but the pgmap version still increases
[4:06] <kfei> (The cluster has had no operations for the last 24 hours)
[4:14] * rdas (~rdas@122.168.253.94) has joined #ceph
[4:19] * `jpg (~josephgla@ppp255-151.static.internode.on.net) has joined #ceph
[4:20] * lucas1 (~Thunderbi@222.240.148.154) Quit (Ping timeout: 480 seconds)
[4:27] * bandrus (~Adium@111.sub-75-247-136.myvzw.com) Quit (Quit: Leaving.)
[4:28] * zhaochao (~zhaochao@124.205.245.26) has joined #ceph
[4:29] * bandrus (~Adium@111.sub-75-247-136.myvzw.com) has joined #ceph
[4:30] * bandrus (~Adium@111.sub-75-247-136.myvzw.com) Quit ()
[4:51] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:51] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:54] <iggy> kfei: that's expected... even when the cluster "does nothing", it's still doing something
[4:55] * pavera_ (~tomc@64-199-185-54.ip.mcleodusa.net) has joined #ceph
[4:56] <pavera_> I am seeing some pretty uneven data distribution on a cluster with homogeneous hardware, and I'm wondering if it's a result of the dataset I'm putting in ceph
[4:57] <pavera_> does anyone know how librbd/crush determines which OSD a particular object/data chunk should be stored on? Is it based on a hash of the data itself?
[4:59] * bandrus (~Adium@75.5.249.45) has joined #ceph
[5:01] <iggy> read the _old_ ceph papers
[5:02] <yanzheng> pavera_, hash of object name, not the data
[5:03] <pavera_> I'm loading a bunch of copies of a ~2GB disk image into rbds to load up a cluster to test failure cases..
[5:03] <pavera_> I was hoping the clumpiness of the data was due to the fact that it's all the same data
[5:04] <pavera_> but I've got 3 osd nodes, 11 disks/osds per node, and about 5 of the disks are 85%+ full, while 5 are less than 55% full, and the rest are all over the map in between
[5:05] <pavera_> all disks are the same size
[5:05] <pavera_> all weights are identical
[5:05] <pavera_> the full and empty disks are evenly distributed among the nodes
[5:09] <pavera_> any ideas on what I did/am doing wrong to cause this?
[5:20] * Vacum_ (~vovo@88.130.216.40) has joined #ceph
[5:23] * pavera_ (~tomc@64-199-185-54.ip.mcleodusa.net) Quit (Quit: pavera_)
[5:27] * Vacum (~vovo@88.130.223.140) Quit (Ping timeout: 480 seconds)
[5:37] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) has joined #ceph
[5:38] * rejy (~rmc@nat-pool-blr-t.redhat.com) has joined #ceph
[5:39] <gleam> you might need more placement groups
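The rule of thumb in the docs of this era is roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two; a sketch for the 33-OSD, 3-replica cluster described above, assuming the pool is named rbd:

    # (33 OSDs * 100) / 3 replicas ~= 1100, round up to the next power of two
    ceph osd pool set rbd pg_num 2048
    ceph osd pool set rbd pgp_num 2048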
[5:39] * haomaiwang (~haomaiwan@119.6.74.175) has joined #ceph
[5:42] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[5:46] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[5:48] * haomaiwang (~haomaiwan@119.6.74.175) Quit (Remote host closed the connection)
[5:48] * haomaiwang (~haomaiwan@li634-52.members.linode.com) has joined #ceph
[6:00] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[6:02] <kfei> For restarting the ceph cluster, should I log in to every node and issue the `sudo /etc/init.d/ceph -a stop/start` command?
[6:02] <kfei> I'm trying to disable cephx authentication
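For context, disabling cephx is a ceph.conf change that has to reach every daemon and client, followed by a restart; a sketch of the usual [global] settings:

    [global]
        auth cluster required = none
        auth service required = none
        auth client required = none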
[6:04] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[6:05] * `jpg (~josephgla@ppp255-151.static.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[6:05] * haomaiwa_ (~haomaiwan@li634-52.members.linode.com) has joined #ceph
[6:06] * haomaiwang (~haomaiwan@li634-52.members.linode.com) Quit (Read error: Connection reset by peer)
[6:07] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[6:10] * rdas (~rdas@122.168.253.94) Quit (Quit: Leaving)
[6:11] * saturnine (~saturnine@ashvm.saturne.in) Quit (Read error: Connection reset by peer)
[6:19] <dmick> kfei: -a means "on all nodes" so you need to do that only on one node (in theory)
[6:20] <dmick> http://ceph.com/docs/master/rados/operations/operating/#running-ceph-with-sysvinit
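In other words, something like this, run from a single node whose ceph.conf lists all the daemons:

    # stop and start daemons cluster-wide via sysvinit (sketch)
    sudo /etc/init.d/ceph -a stop
    sudo /etc/init.d/ceph -a start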
[6:20] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit (Ping timeout: 480 seconds)
[6:20] * haomaiwa_ (~haomaiwan@li634-52.members.linode.com) Quit (Read error: Connection reset by peer)
[6:21] * haomaiwang (~haomaiwan@li634-52.members.linode.com) has joined #ceph
[6:22] * bandrus (~Adium@75.5.249.45) Quit (Quit: Leaving.)
[6:24] * saturnine (~saturnine@ashvm.saturne.in) has joined #ceph
[6:24] * jrcresawn (~jrcresawn@ip24-251-38-21.ph.ph.cox.net) has joined #ceph
[6:24] * jrcresawn (~jrcresawn@ip24-251-38-21.ph.ph.cox.net) Quit (Remote host closed the connection)
[6:25] * haomaiwa_ (~haomaiwan@li634-52.members.linode.com) has joined #ceph
[6:25] * haomaiwang (~haomaiwan@li634-52.members.linode.com) Quit (Read error: Connection reset by peer)
[6:33] * haomaiwang (~haomaiwan@124.248.205.17) has joined #ceph
[6:37] * haomaiwa_ (~haomaiwan@li634-52.members.linode.com) Quit (Read error: Connection reset by peer)
[6:37] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[6:39] * Cube (~Cube@66-87-131-31.pools.spcsdns.net) Quit (Quit: Leaving.)
[6:53] * `jpg (~josephgla@ppp255-151.static.internode.on.net) has joined #ceph
[6:54] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[6:54] * drankis_ (~drankis__@37.148.173.239) has joined #ceph
[6:55] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[6:56] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[6:56] * michalefty (~micha@p20030071CF6D9400BC430B80D7BA78D5.dip0.t-ipconnect.de) has joined #ceph
[7:01] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[7:02] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[7:05] * vbellur (~vijay@122.167.250.20) Quit (Read error: Operation timed out)
[7:17] * drankis_ (~drankis__@37.148.173.239) Quit (Ping timeout: 480 seconds)
[7:18] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[7:19] * capri (~capri@212.218.127.222) has joined #ceph
[7:20] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[7:21] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:33] * blook (~blook@juniper1.netways.de) has joined #ceph
[7:33] * The_Bishop_ (~bishop@f055213012.adsl.alicedsl.de) has joined #ceph
[7:38] * vbellur (~vijay@209.132.188.8) has joined #ceph
[7:40] * The_Bishop (~bishop@e180247236.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[7:40] <kfei> dmick: yes, I ran it on my admin node, but the command gives me zero output, and when I log in to other nodes (MONs/OSDs), the ceph-* processes are still there
[7:46] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[7:46] * sarob (~sarob@2601:9:1d00:c7f:790b:2aed:dc47:2001) has joined #ceph
[7:49] * haomaiwa_ (~haomaiwan@119.6.74.175) has joined #ceph
[7:49] * ircolle (~Adium@12.207.21.2) has joined #ceph
[7:50] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[7:53] <kfei> Now even when I run `sudo /etc/init.d/ceph stop` on each node, the command still returns in less than 1 second and leaves everything unchanged.
[7:53] * haomaiwang (~haomaiwan@124.248.205.17) Quit (Ping timeout: 480 seconds)
[7:54] * sarob (~sarob@2601:9:1d00:c7f:790b:2aed:dc47:2001) Quit (Ping timeout: 480 seconds)
[7:57] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[8:03] * `jpg (~josephgla@ppp255-151.static.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[8:06] * michalefty (~micha@p20030071CF6D9400BC430B80D7BA78D5.dip0.t-ipconnect.de) has left #ceph
[8:08] * sarob (~sarob@2601:9:1d00:c7f:4c2a:7dbe:d363:62ea) has joined #ceph
[8:09] * ikrstic (~ikrstic@77-46-245-216.dynamic.isp.telekom.rs) has joined #ceph
[8:09] * sleinen (~Adium@2001:620:0:26:40d5:312e:205:db02) has joined #ceph
[8:13] * ade (~abradshaw@193.202.255.218) has joined #ceph
[8:14] * steki (~steki@91.195.39.5) has joined #ceph
[8:16] * sarob (~sarob@2601:9:1d00:c7f:4c2a:7dbe:d363:62ea) Quit (Ping timeout: 480 seconds)
[8:16] * sleinen1 (~Adium@2001:620:0:26:cca1:eb8a:ff4a:4d8e) has joined #ceph
[8:17] * sarob (~sarob@2601:9:1d00:c7f:dd1e:27ce:1749:c0f) has joined #ceph
[8:17] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:18] * sarob_ (~sarob@2601:9:1d00:c7f:7d16:a832:cb08:990d) has joined #ceph
[8:20] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[8:23] * sleinen (~Adium@2001:620:0:26:40d5:312e:205:db02) Quit (Ping timeout: 480 seconds)
[8:25] * sarob (~sarob@2601:9:1d00:c7f:dd1e:27ce:1749:c0f) Quit (Ping timeout: 480 seconds)
[8:25] <Gugge-47527> kfei: if you use ubuntu you should forget about the sysvinit script, it uses upstart
[8:26] * thb (~me@2a02:2028:1f1:aa0:6267:20ff:fec9:4e40) has joined #ceph
[8:26] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[8:26] * mourgaya (~kvirc@80.124.164.139) has joined #ceph
[8:27] * sarob_ (~sarob@2601:9:1d00:c7f:7d16:a832:cb08:990d) Quit (Ping timeout: 480 seconds)
[8:28] <kfei> Gugge-47527: Yes I'm using Ubuntu, I'll try `sudo start ceph-all` after my fio test done. Thank you.
[8:28] <Gugge-47527> and that should be on all hosts
[8:29] <kfei> no way to do that via a single command on a single node?
[8:32] * ircolle (~Adium@12.207.21.2) Quit (Quit: Leaving.)
[8:33] * aldavud (~aldavud@213.55.176.129) has joined #ceph
[8:36] * vbellur (~vijay@nat-pool-blr-t.redhat.com) has joined #ceph
[8:37] * sleinen (~Adium@2001:620:0:46:6515:8d98:dcdb:e9d6) has joined #ceph
[8:38] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[8:38] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:42] * sleinen1 (~Adium@2001:620:0:26:cca1:eb8a:ff4a:4d8e) Quit (Ping timeout: 480 seconds)
[8:46] * Sysadmin88 (~IceChat77@94.4.22.173) Quit (Quit: Beware of programmers who carry screwdrivers.)
[8:46] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[8:48] * lucas1 (~Thunderbi@222.240.148.130) Quit (Ping timeout: 480 seconds)
[8:56] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Quit: Quitte)
[8:58] * thomnico (~thomnico@2a01:e35:8b41:120:b841:4631:c163:3a27) has joined #ceph
[8:58] * thomnico (~thomnico@2a01:e35:8b41:120:b841:4631:c163:3a27) Quit (Remote host closed the connection)
[9:00] <singler_> kfei: if you are using ceph-deploy, you should have a host which has permissions to access all other hosts; then you could write a one-liner to execute the command on all hosts
[9:00] <Gugge-47527> kfei: "yes": for i in host{1..10}; do ssh $i sudo start ceph-all; done
[9:00] <Gugge-47527> :)
[9:03] * capri_on (~capri@212.218.127.222) has joined #ceph
[9:04] * capri (~capri@212.218.127.222) Quit (Read error: Operation timed out)
[9:06] * rendar (~I@host86-177-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[9:08] * a1-away is now known as AbyssOne
[9:09] <drankis> Hello all, I want to know how I can check active ceph client sessions. I have OpenStack with ceph volumes and want to know how I can check mounts and sessions.
[9:11] <kfei> OK, but I think a `ceph stop <cluster>-all` that automatically does the for loop would be friendlier. :p
[9:12] <kfei> Anyway thanks a lot!
[9:13] * ssejour (~sebastien@out-chantepie.fr.clara.net) has joined #ceph
[9:14] * houkouonchi-home (~linux@2001:470:c:c69::2) has joined #ceph
[9:15] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[9:17] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Read error: Operation timed out)
[9:18] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[9:21] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:23] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[9:27] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:32] * analbeard (~shw@support.memset.com) has joined #ceph
[9:35] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[9:36] * aldavud (~aldavud@213.55.176.129) Quit (Ping timeout: 480 seconds)
[9:48] * madkiss (~madkiss@2001:6f8:12c3:f00f:1907:6e4a:d31a:7702) has joined #ceph
[9:49] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[9:50] <kfei> About the librbd cache, the documentation (http://ceph.com/docs/master/rbd/rbd-config-ref/) says: "To enable it, add rbd cache = true to the [client] section of your ceph.conf file. By default librbd does not perform any caching."
[9:50] <kfei> Does it mean that I have to set `rbd cache = true` on every node in the cluster, or can I just set it on an rbd-consumer node where I want to enable caching?
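The setting in question is a plain ceph.conf entry; a minimal sketch of what the quoted docs describe:

    [client]
        rbd cache = true
        # optional: safer for guests that don't send flushes properly (sketch)
        rbd cache writethrough until flush = true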
[9:50] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:04] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[10:04] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) has joined #ceph
[10:05] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) Quit ()
[10:05] * i_m (~ivan.miro@gbibp9ph1--blueice4n2.emea.ibm.com) has joined #ceph
[10:06] * `jpg (~josephgla@ppp255-151.static.internode.on.net) has joined #ceph
[10:06] * m0e (~Moe@41.45.69.38) has joined #ceph
[10:07] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:17] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[10:19] <iggy> kfei: afaik, just the "initiators" (consumers)
[10:23] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:25] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:26] <kfei> iggy: yeah, I think this makes sense!
[10:26] * mourgaya (~kvirc@80.124.164.139) Quit (Remote host closed the connection)
[10:26] * mourgaya (~kvirc@80.124.164.139) has joined #ceph
[10:29] <kfei> iggy: btw, how can I determine whether QEMU librbd caching is enabled or not?
[10:31] * thomnico (~thomnico@2a01:e35:8b41:120:b841:4631:c163:3a27) has joined #ceph
[10:32] <kfei> OK, answering my own question: just check `/var/log/libvirt/qemu/<node>.log`
[10:33] <kfei> and search for the `cache=` term
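Something like the following one-liner pulls that out, assuming a libvirt domain named vm1 (hypothetical):

    # show the cache mode(s) QEMU was started with (sketch)
    grep -o 'cache=[a-z]*' /var/log/libvirt/qemu/vm1.log | sort -u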
[10:33] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[10:35] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) has joined #ceph
[10:35] * i_m (~ivan.miro@gbibp9ph1--blueice4n2.emea.ibm.com) Quit (Read error: Connection reset by peer)
[10:35] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) has joined #ceph
[10:35] <kalleh> Hey guys, are there any docs to set up Calamari in "production" mode atm?
[10:36] * mourgaya (~kvirc@80.124.164.139) Quit (Remote host closed the connection)
[10:37] * mourgaya (~kvirc@80.124.164.139) has joined #ceph
[10:38] <mourgaya> kalleh: I'm also interested in that, especially on a Red Hat platform!
[10:38] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[10:45] * blue (~blue@irc.mmh.dk) has joined #ceph
[10:46] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Quit: ZNC - http://znc.in)
[10:46] * Zethrok (~martin@95.154.26.34) has joined #ceph
[10:47] <Zethrok> We have a dumpling (0.67.9) cluster and we just added radosgw on top of it. Now it is stuck creating 8 PGs. Any idea/input?
[10:47] <Zethrok> Everything works fine btw, it is just stuck creating those 8 PGs
[10:53] * zack_dol_ (~textual@e0109-114-22-4-235.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:54] * m0e (~Moe@41.45.69.38) Quit (Quit: This computer has gone to sleep)
[11:00] * yanzheng (~zhyan@134.134.139.70) Quit (Quit: Leaving)
[11:04] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[11:04] <Zethrok> Nm - figured it out - we had removed the default crush_ruleset and replaced it with our own, and the default-created PGs tried to use it. Which they couldn't, ofc. Once we changed it to the appropriate ruleset it worked.
[11:06] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[11:07] * shang (~ShangWu@175.41.48.77) has joined #ceph
[11:07] * shang (~ShangWu@175.41.48.77) Quit ()
[11:08] * nolan_ (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[11:12] * chutz (~chutz@rygel.linuxfreak.ca) Quit (Ping timeout: 480 seconds)
[11:12] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (Ping timeout: 480 seconds)
[11:12] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[11:12] * nolan_ is now known as nolan
[11:12] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) Quit (Ping timeout: 480 seconds)
[11:13] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) has joined #ceph
[11:17] * sarob (~sarob@2601:9:1d00:c7f:c37:f93:1b3b:2cad) has joined #ceph
[11:25] * sarob (~sarob@2601:9:1d00:c7f:c37:f93:1b3b:2cad) Quit (Ping timeout: 480 seconds)
[11:31] * capri_on (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[11:37] * vbellur (~vijay@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[11:42] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[11:47] * Qten (~Qu310@121.0.1.110) Quit (Remote host closed the connection)
[11:47] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[11:48] * allsystemsarego (~allsystem@188.27.188.69) has joined #ceph
[11:52] * mourgaya (~kvirc@80.124.164.139) Quit (Remote host closed the connection)
[11:52] * mourgaya (~kvirc@80.124.164.139) has joined #ceph
[11:53] * vbellur (~vijay@209.132.188.8) has joined #ceph
[11:58] * thomnico (~thomnico@2a01:e35:8b41:120:b841:4631:c163:3a27) Quit (Quit: Ex-Chat)
[11:58] * thomnico (~thomnico@2a01:e35:8b41:120:b841:4631:c163:3a27) has joined #ceph
[12:06] * haomaiwang (~haomaiwan@li634-52.members.linode.com) has joined #ceph
[12:09] * haomaiw__ (~haomaiwan@124.248.205.17) has joined #ceph
[12:09] * mongo (~gdahlman@voyage.voipnw.net) Quit (Ping timeout: 480 seconds)
[12:13] * haomaiwa_ (~haomaiwan@119.6.74.175) Quit (Ping timeout: 480 seconds)
[12:13] * haomaiwang (~haomaiwan@li634-52.members.linode.com) Quit (Read error: Operation timed out)
[12:16] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[12:17] * sarob (~sarob@2601:9:1d00:c7f:5ced:eb2:6551:fef7) has joined #ceph
[12:18] * mourgaya (~kvirc@80.124.164.139) Quit (Quit: KVIrc 4.1.3 Equilibrium http://www.kvirc.net/)
[12:25] * haomaiwang (~haomaiwan@119.6.74.175) has joined #ceph
[12:25] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[12:25] * sarob (~sarob@2601:9:1d00:c7f:5ced:eb2:6551:fef7) Quit (Ping timeout: 480 seconds)
[12:28] * haomaiw__ (~haomaiwan@124.248.205.17) Quit (Ping timeout: 480 seconds)
[12:32] * lucas1 (~Thunderbi@222.240.148.154) Quit (Quit: lucas1)
[12:42] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (Remote host closed the connection)
[13:00] * madkiss1 (~madkiss@zid-vpnn078.uibk.ac.at) has joined #ceph
[13:06] * madkiss (~madkiss@2001:6f8:12c3:f00f:1907:6e4a:d31a:7702) Quit (Ping timeout: 480 seconds)
[13:06] * The_Bishop_ (~bishop@f055213012.adsl.alicedsl.de) Quit (Read error: Connection reset by peer)
[13:07] * The_Bishop_ (~bishop@f055213012.adsl.alicedsl.de) has joined #ceph
[13:13] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[13:13] * ChanServ sets mode +v andreask
[13:17] * sarob (~sarob@2601:9:1d00:c7f:c9d9:b25d:fe44:4b61) has joined #ceph
[13:23] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:23] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[13:24] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[13:24] * leseb (~leseb@185.21.174.206) has joined #ceph
[13:25] * sarob (~sarob@2601:9:1d00:c7f:c9d9:b25d:fe44:4b61) Quit (Ping timeout: 480 seconds)
[13:27] * The_Bishop_ (~bishop@f055213012.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[13:27] * The_Bishop_ (~bishop@f055213012.adsl.alicedsl.de) has joined #ceph
[13:33] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[13:38] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[13:50] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[13:52] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[13:53] * blook (~blook@juniper1.netways.de) Quit (Quit: This computer has gone to sleep)
[13:59] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[14:08] * ade (~abradshaw@193.202.255.218) Quit (Read error: Connection reset by peer)
[14:17] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[14:25] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:28] * madkiss (~madkiss@2001:6f8:12c3:f00f:1907:6e4a:d31a:7702) has joined #ceph
[14:30] * zhaochao (~zhaochao@124.205.245.26) has left #ceph
[14:33] * madkiss1 (~madkiss@zid-vpnn078.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[14:37] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) has joined #ceph
[14:38] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:40] * haomaiwang (~haomaiwan@119.6.74.175) Quit (Remote host closed the connection)
[14:40] * haomaiwang (~haomaiwan@li634-52.members.linode.com) has joined #ceph
[14:46] * haomaiwa_ (~haomaiwan@119.6.74.175) has joined #ceph
[14:48] * michalefty (~micha@p20030071CF6CAC00BC430B80D7BA78D5.dip0.t-ipconnect.de) has joined #ceph
[14:50] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[14:52] * `jpg (~josephgla@ppp255-151.static.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[14:53] * vbellur (~vijay@209.132.188.8) Quit (Quit: Leaving.)
[14:53] * haomaiwang (~haomaiwan@li634-52.members.linode.com) Quit (Ping timeout: 480 seconds)
[14:53] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[14:54] * primechuck (~primechuc@69.170.148.179) has joined #ceph
[14:54] * `jpg (~josephgla@ppp255-151.static.internode.on.net) has joined #ceph
[14:54] * jtangwk (~Adium@gateway.tchpc.tcd.ie) Quit (Quit: Leaving.)
[14:58] * mongo (~gdahlman@voyage.voipnw.net) has joined #ceph
[15:00] * jtangwk (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[15:01] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[15:06] * ajazdzewski (~quassel@lpz-66.sprd.net) has joined #ceph
[15:08] * fdmanana (~fdmanana@bl13-158-240.dsl.telepac.pt) has joined #ceph
[15:15] <djh-work> I read about ceph's ability to use a key-value store (leveldb) instead of the filesystem's extended attributes; I tried forcing ceph to store all attributes in the key-value store by setting filestore_max_inline_xattr{s,size} to zero -- is this the correct way? Because it seems like ceph still requires my fs to provide xattrs.
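A sketch of the settings described above, written as ceph.conf entries (whether zero really forces every xattr into the key-value store, rather than just changing the spillover point, is exactly the open question here):

    [osd]
        filestore max inline xattrs = 0
        filestore max inline xattr size = 0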
[15:17] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[15:25] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[15:25] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:26] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[15:27] * yanzheng (~zhyan@222.64.141.243) has joined #ceph
[15:28] <tnt> Mmmm, it seems that specifying Content-Type metadata for a multipart upload doesn't work. It gets ignored AFAICT. Anybody observed that?
[15:29] * michalefty (~micha@p20030071CF6CAC00BC430B80D7BA78D5.dip0.t-ipconnect.de) has left #ceph
[15:32] * `jpg (~josephgla@ppp255-151.static.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[15:33] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[15:36] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:36] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:38] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:38] * nwat (~textual@50.141.87.8) has joined #ceph
[15:42] * nwat (~textual@50.141.87.8) Quit (Read error: Connection reset by peer)
[15:44] * Shmouel1 (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[15:44] * nwat (~textual@50.141.87.8) has joined #ceph
[15:47] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Read error: Operation timed out)
[15:50] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[15:53] * markbby (~Adium@168.94.245.3) has joined #ceph
[15:55] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[15:56] * nwat_ (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:00] * vbellur (~vijay@122.167.250.20) has joined #ceph
[16:03] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[16:04] * nwat (~textual@50.141.87.8) Quit (Ping timeout: 480 seconds)
[16:04] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[16:05] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Remote host closed the connection)
[16:07] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:09] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[16:09] * ajazdzewski (~quassel@lpz-66.sprd.net) Quit (Remote host closed the connection)
[16:15] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) Quit (Quit: Leaving.)
[16:17] * sarob (~sarob@2601:9:1d00:c7f:e18c:87:84f7:9542) has joined #ceph
[16:20] * rpowell (~rpowell@128.135.219.215) has joined #ceph
[16:21] * rejy (~rmc@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[16:25] * sarob (~sarob@2601:9:1d00:c7f:e18c:87:84f7:9542) Quit (Ping timeout: 480 seconds)
[16:25] * ircolle (~Adium@12.207.21.2) has joined #ceph
[16:26] * rpowell (~rpowell@128.135.219.215) has left #ceph
[16:32] * ircolle (~Adium@12.207.21.2) Quit (Quit: Leaving.)
[16:32] * zerick (~eocrospom@190.118.43.113) Quit (Ping timeout: 480 seconds)
[16:35] * danieagle (~Daniel@179.176.52.247.dynamic.adsl.gvt.net.br) has joined #ceph
[16:38] * sleinen (~Adium@2001:620:0:46:6515:8d98:dcdb:e9d6) Quit (Ping timeout: 480 seconds)
[16:38] * steki (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[16:39] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[16:45] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[16:45] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[16:46] * sleinen (~Adium@2001:620:0:46:b412:7398:4cf7:8fc3) has joined #ceph
[16:50] * sleinen1 (~Adium@2001:620:0:2d:70c9:4256:f704:f50a) has joined #ceph
[16:52] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[16:56] * ibuclaw (~ibuclaw@rabbit.dbplc.com) has joined #ceph
[16:56] <ibuclaw> Hi, is the master documentation up to date with how ceph 0.80 works?
[16:56] * sleinen (~Adium@2001:620:0:46:b412:7398:4cf7:8fc3) Quit (Ping timeout: 480 seconds)
[16:57] <ibuclaw> specifically, I'm looking at http://ceph.com/docs/master/radosgw/adminops/
[16:57] <ibuclaw> On get user info, it says: "If no user is specified returns the list of all users along with suspension information."
[16:57] <ibuclaw> however GET /admin/user?format=json -> AccessDenied
[16:58] <ibuclaw> and GET /admin/user?uid=testuser&format=json -> Returns user information
[17:00] * Guest12383 is now known as Azrael
[17:03] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[17:03] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[17:03] * lalatenduM (~lalatendu@122.167.8.181) has joined #ceph
[17:03] * bandrus (~Adium@75.5.249.45) has joined #ceph
[17:04] * bandrus (~Adium@75.5.249.45) Quit ()
[17:05] * sleinen1 (~Adium@2001:620:0:2d:70c9:4256:f704:f50a) Quit (Ping timeout: 480 seconds)
[17:05] * drankis is now known as drankis_off
[17:06] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:07] * bandrus (~Adium@adsl-75-5-249-45.dsl.scrm01.sbcglobal.net) has joined #ceph
[17:12] * sleinen (~Adium@2001:620:0:46:b1f2:4ed9:1c24:98cc) has joined #ceph
[17:12] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[17:15] * Infitialis (~infitiali@194.30.182.18) Quit (Read error: Operation timed out)
[17:17] * sarob (~sarob@2601:9:1d00:c7f:acbd:2d12:2b10:436b) has joined #ceph
[17:25] * sarob (~sarob@2601:9:1d00:c7f:acbd:2d12:2b10:436b) Quit (Ping timeout: 480 seconds)
[17:27] <dcurtiss> Over the last two days, I set up ceph on a set of ubuntu 12.04 VMs (my first time working with ceph), but I can't authenticate with the swift API.
[17:27] <dcurtiss> My curl invocation: http://pastebin.com/EfQ8nw8a
[17:27] <dcurtiss> Output of "radosgw-admin user info --uid=hive_cache": http://pastebin.com/vwwbyd4c
[17:27] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[17:28] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[17:28] <dcurtiss> (Visiting the root of the web server shows the ListAllMyBucketsResult XML, as expected.)
[17:28] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:30] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:32] * primechuck (~primechuc@69.170.148.179) Quit (Remote host closed the connection)
[17:33] * primechuck (~primechuc@host-71-34-75.infobunker.com) has joined #ceph
[17:33] * yanzheng (~zhyan@222.64.141.243) Quit (Read error: Operation timed out)
[17:34] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[17:34] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[17:37] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Read error: Operation timed out)
[17:45] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[17:49] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:50] * n-st (~n-st@0001c80a.user.oftc.net) has joined #ceph
[17:51] * KaZeR (~kazer@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[17:55] * KB (~oftc-webi@cpe-74-137-252-159.swo.res.rr.com) has joined #ceph
[17:55] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[17:55] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:56] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[18:00] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[18:02] <KB> Hi all - question on adding a new server on a prior release with CentOS using ceph-deploy... Since ceph 0.80.1 is deployed to EPEL repo, it is taking priority over the ceph repository when we specify the --release emperor with ceph-deploy, and yum is installing the firefly release.
[18:02] <KB> We have a workaround that's kludgy with manual repo setup, yum-priorities, and setting EPEL as a lower pri than the ceph repo, but not pretty...
[18:03] * lalatenduM (~lalatendu@122.167.8.181) Quit (Quit: Leaving)
[18:04] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[18:05] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[18:06] <alfredodeza> KB: yeah there was an email to the ceph-users list about it
[18:06] <alfredodeza> this is definitely making things tricky for us
[18:07] * rotbeard (~redbeard@2a02:908:df11:9480:76f0:6dff:fe3b:994d) has joined #ceph
[18:10] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:13] * haomaiwa_ (~haomaiwan@119.6.74.175) Quit (Ping timeout: 480 seconds)
[18:13] * alfredodeza just got hit but that same issue on his testing cluster
[18:14] * drankis_ (~drankis__@37.148.173.239) has joined #ceph
[18:14] <KB> alfredodeza: Thanks - found your thread on the list... definitely going to be tricky for us to maintain going forward... updating from 0.72.1 to 0.72.2 presented a similar issue. No clean workaround at this time, seems like :(
[18:14] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:14] * Ilya (~AndChat60@5.141.231.95) has joined #ceph
[18:15] <alfredodeza> the only one thing I see is the yum priorities package
[18:15] <alfredodeza> and then set that
[18:15] <Ilya> Hi all!
[18:15] <alfredodeza> KB: why do you say it is kludgy ?
[18:16] <KB> yep - we manually configured epel.repo and ceph.repo with priorities and are using the --no-adjust-repos after pushing our custom repos to each node
[18:17] <KB> it works, just not as pretty/simple as ceph-deploy install <node>... more manual steps for me to forget.
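The kludge described above boils down to something like this; a sketch with illustrative paths, assuming the yum-plugin-priorities package is installed:

    # /etc/yum.repos.d/ceph.repo (sketch)
    [ceph]
    name=Ceph emperor packages
    baseurl=http://ceph.com/rpm-emperor/el6/x86_64/
    gpgcheck=1
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
    priority=1

    # and in /etc/yum.repos.d/epel.repo, under [epel]:
    priority=2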
[18:17] <alfredodeza> ah... but ceph-deploy can help there if you are re-using the same values for the flags!
[18:18] <alfredodeza> you can set them in $HOME/.cephdeploy.conf or in the current working directory in cephdeploy.conf
[18:18] <Ilya> I mount ceph on a host and share it with samba. Writing to this share is OK. But when I delete a file, ceph doesn't free the space in the cluster. How can I free the space?
[18:18] <alfredodeza> KB: see http://ceph.com/ceph-deploy/docs/conf.html
[18:19] <Ilya> Sorry, T9 :) I mount it as rbd
[18:20] <KB> very nice! I was not aware of the cephdeploy.conf... I'll play around with that... thanks much!
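Per the linked docs, a cephdeploy.conf carrying a custom repo looks roughly like the sketch below; the section name and values are illustrative, not KB's actual config, and the exact keys should be checked against the docs:

    # $HOME/.cephdeploy.conf or ./cephdeploy.conf (sketch)
    [myrepo]
    name = custom ceph emperor repo
    baseurl = http://ceph.com/rpm-emperor/el6/x86_64/
    gpgkey = https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
    default = true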
[18:21] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[18:22] <alfredodeza> \o/
[18:26] * bandrus (~Adium@adsl-75-5-249-45.dsl.scrm01.sbcglobal.net) Quit (Quit: Leaving.)
[18:26] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[18:28] * sleinen (~Adium@2001:620:0:46:b1f2:4ed9:1c24:98cc) Quit (Ping timeout: 480 seconds)
[18:28] * rturk|afk is now known as rturk
[18:30] * nwat_ (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[18:30] * The_Bishop_ (~bishop@f055213012.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[18:33] * KaZeR (~kazer@64.201.252.132) Quit (Ping timeout: 480 seconds)
[18:34] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:40] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[18:43] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[18:43] * analbeard (~shw@host86-180-222-132.range86-180.btcentralplus.com) has joined #ceph
[18:43] * bandrus (~Adium@66-87-119-137.pools.spcsdns.net) has joined #ceph
[18:45] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[18:47] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:47] * ssejour (~sebastien@out-chantepie.fr.clara.net) Quit (Quit: Leaving.)
[18:50] * ircolle (~Adium@129.210.115.7) has joined #ceph
[18:51] <saturnine> Does anyone know of a way to quickly view the total disk usage for a specified user?
[18:52] <saturnine> I know you can dump the bucket stats and sum it, but if there's an easier way to get the total it'd be nice to know. :D
[18:56] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[18:56] * analbeard (~shw@host86-180-222-132.range86-180.btcentralplus.com) Quit (Quit: Leaving.)
[18:59] <dcurtiss> I'm new to ceph myself, but try this: radosgw-admin user stats --sync-stats --uid=<user>
[19:01] <KB> alfredodeza: the .cephdeploy.conf works *almost* perfectly for this - thank you so much. One oddity - when the custom repositories are pushed to the new node via ceph-deploy, it creates a "proxy=" line in the destination (which doesn't exist in the source). This causes the install to fail... however, specifying the proxy=<proxyIP:port> in the source works. Not sure what you'd do if there was no proxy available...
[19:03] <alfredodeza> hrmnnn
[19:03] <alfredodeza> I think I know what this is
[19:03] * bandrus1 (~Adium@66-87-119-165.pools.spcsdns.net) has joined #ceph
[19:04] * Cube (~Cube@12.248.40.138) has joined #ceph
[19:04] <alfredodeza> KB: can you share output on how the install fails?
[19:04] * Ilya (~AndChat60@5.141.231.95) Quit (Quit: Bye)
[19:04] <KB> sure
[19:05] <alfredodeza> thank you!
[19:06] * bandrus2 (~Adium@66-87-118-26.pools.spcsdns.net) has joined #ceph
[19:08] <KB> here you go: http://pastebin.com/r5w05YEN
[19:08] <KB> has my config, install, and resulting ceph.repo on remote node
[19:08] <alfredodeza> fantastic that you are using the 'extra-repos' feature!
[19:08] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[19:08] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[19:09] * bandrus (~Adium@66-87-119-137.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[19:09] <KB> seems to work perfectly except for the proxy= stub. If I give it a valid proxy, it works without any issues
[19:09] <alfredodeza> yeah that is not ideal
[19:10] <alfredodeza> I will create a ticket for this
[19:10] <alfredodeza> thanks for reporing it
[19:10] <alfredodeza> *reporting
[19:10] <KB> excellent... no problem at all. again, appreciate your help!
[19:11] * flaxy (~afx@78.130.171.69) Quit (Quit: WeeChat 0.4.2)
[19:11] * bandrus1 (~Adium@66-87-119-165.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[19:12] * flaxy (~afx@78.130.171.69) has joined #ceph
[19:12] <alfredodeza> issue 8534
[19:12] <kraken> alfredodeza might be talking about http://tracker.ceph.com/issues/8534 [ceph-deploy will use a 'proxy' value in the repo file that is invalid]
[19:13] <alfredodeza> KB: ^ ^
[19:13] * yuriw (~Adium@ABordeaux-654-1-80-223.w109-214.abo.wanadoo.fr) has joined #ceph
[19:13] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[19:14] <KB> got it...
[19:14] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:15] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:15] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:18] * bandrus (~Adium@66-87-118-166.pools.spcsdns.net) has joined #ceph
[19:18] * Infitialis (~infitiali@5ED48E69.cm-7-5c.dynamic.ziggo.nl) has joined #ceph
[19:21] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[19:22] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[19:24] * bandrus1 (~Adium@66-87-118-117.pools.spcsdns.net) has joined #ceph
[19:24] * bandrus2 (~Adium@66-87-118-26.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[19:24] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[19:24] * leseb (~leseb@185.21.174.206) has joined #ceph
[19:25] * bandrus2 (~Adium@66.87.118.234) has joined #ceph
[19:26] * bandrus (~Adium@66-87-118-166.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[19:27] * Infitialis (~infitiali@5ED48E69.cm-7-5c.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[19:29] * rturk is now known as rturk|afk
[19:29] * rturk|afk is now known as rturk
[19:30] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:32] * bandrus1 (~Adium@66-87-118-117.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[19:32] * sleinen1 (~Adium@2001:620:0:68::100) has joined #ceph
[19:34] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) has joined #ceph
[19:34] * flaxy (~afx@78.130.171.69) Quit (Quit: WeeChat 0.4.2)
[19:35] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Read error: Operation timed out)
[19:36] * flaxy (~afx@78.130.171.69) has joined #ceph
[19:38] * sleinen (~Adium@2001:620:0:26:a0b1:a2f0:5ea5:811b) has joined #ceph
[19:39] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[19:40] * fford (~wee@p4FC9DCA9.dip0.t-ipconnect.de) has joined #ceph
[19:41] * sleinen1 (~Adium@2001:620:0:68::100) Quit (Read error: Connection reset by peer)
[19:42] <ponyofdeath> hi, can anyone help me figure out why i cannot increase my rbd pool's pg count? i get this Error E2BIG: specified pg_num 2048 is too large (creating 1984 new PGs on ~32 OSDs exceeds per-OSD max of 32)
[19:43] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[19:43] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[19:43] * sarob (~sarob@129.210.115.7) has joined #ceph
[19:45] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[19:45] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[19:46] <wrencsok> when you run a command like "ceph osd pool set pool1 pg_num 8192", it fails with that error?
[19:47] <ponyofdeath> wrencsok: yes
[19:47] <ponyofdeath> i have upped the pg value on all my other pools; this one is the rbd pool and i am trying to up it as well
[19:47] * eford (~wee@p3EE04DBE.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[19:47] <ponyofdeath> there is no data on there
[19:48] <wrencsok> you mean the default rbd pool that's created when you build a cluster?
[19:48] * musca (musca@tyrael.eu) has left #ceph
[19:49] <wrencsok> 0 data, 1 metadata, 2 rbd, 3 testpool, 4 ..... i usually delete that one and create my own pool for rbd volumes, if that's the one you are talking about. let me try it on my lab cluster since i have it there.
[19:50] <wrencsok> ok, repro'd that in my lab, let me see if i can figure out why. we usually delete those initial pools.
[19:52] <steveeJ> isn't metadata used for ceph internals? can you just delete that?
[19:52] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:d1f1:821b:225f:d973) has joined #ceph
[19:52] <wrencsok> i do. heh
[19:52] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Quit: I'm off, you do whatever you want...)
[19:52] <wrencsok> works fine for us; we don't run a metadata server, we just use RBD and the object store.
[19:53] <wrencsok> it's been stable for over a year without those default dirs.
[19:53] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[19:56] <wrencsok> ponyofdeath: try it in steps so you don't hit the throttling code, e.g. up it to 1024 first, then to 2048. you'll need to let the cluster settle between steps, and make sure to update the pool's pgp_num too as you step up the pg_num. after a quick look at the code, I was able to get around the error that way.
[19:56] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:57] <wrencsok> if the pool is empty it should be near instant
[19:57] <wrencsok> worked for me in my lab.
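(Spelled out, the stepped approach looks roughly like this; the pool name comes from the question above and the intermediate value is just an example:

    ceph osd pool set rbd pg_num 1024
    ceph osd pool set rbd pgp_num 1024
    # let the cluster settle (watch "ceph -s" until it is healthy), then:
    ceph osd pool set rbd pg_num 2048
    ceph osd pool set rbd pgp_num 2048
)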
[19:57] * sleinen (~Adium@2001:620:0:26:a0b1:a2f0:5ea5:811b) Quit (Quit: Leaving.)
[20:00] * rturk is now known as rturk|afk
[20:02] * rturk|afk is now known as rturk
[20:03] * bandrus (~Adium@66.87.118.234) has joined #ceph
[20:03] * bandrus2 (~Adium@66.87.118.234) Quit (Read error: Connection reset by peer)
[20:04] * sarob (~sarob@129.210.115.7) Quit (Remote host closed the connection)
[20:04] * sarob (~sarob@129.210.115.7) has joined #ceph
[20:06] * talonisx (~talonisx@pool-108-18-97-131.washdc.fios.verizon.net) has joined #ceph
[20:09] * bandrus1 (~Adium@66-87-118-234.pools.spcsdns.net) has joined #ceph
[20:10] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) has joined #ceph
[20:12] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) has joined #ceph
[20:12] * sarob (~sarob@129.210.115.7) Quit (Ping timeout: 480 seconds)
[20:15] * lofejndif (~lsqavnbok@luxemburg.gtor.org) has joined #ceph
[20:15] * bandrus (~Adium@66.87.118.234) Quit (Ping timeout: 480 seconds)
[20:17] * rpowell1 (~rpowell@128.135.100.107) has joined #ceph
[20:19] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[20:20] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[20:21] * sleinen1 (~Adium@2001:620:0:26:e91f:fa39:d160:5d28) has joined #ceph
[20:21] * ircolle (~Adium@129.210.115.7) Quit (Quit: Leaving.)
[20:22] * rturk is now known as rturk|afk
[20:26] * rendar (~I@host86-177-dynamic.8-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:27] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:31] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) Quit (Quit: Leaving.)
[20:32] * sarob (~sarob@129.210.115.7) has joined #ceph
[20:32] * ircolle (~Adium@129.210.115.7) has joined #ceph
[20:35] * m0e (~Moe@41.45.208.72) has joined #ceph
[20:35] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[20:37] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[20:37] * The_Bishop_ (~bishop@f055213012.adsl.alicedsl.de) has joined #ceph
[20:40] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[20:42] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[20:44] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[20:44] * Cube (~Cube@12.248.40.138) Quit (Read error: Connection reset by peer)
[20:46] * chutz (~chutz@rygel.linuxfreak.ca) Quit (Quit: Leaving)
[20:46] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[20:48] * yuriw1 (~Adium@ABordeaux-654-1-80-223.w109-214.abo.wanadoo.fr) has joined #ceph
[20:48] * yuriw (~Adium@ABordeaux-654-1-80-223.w109-214.abo.wanadoo.fr) Quit (Read error: Connection reset by peer)
[20:51] * bandrus1 (~Adium@66-87-118-234.pools.spcsdns.net) Quit (Read error: No route to host)
[20:52] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:54] * bandrus (~Adium@66.87.118.234) has joined #ceph
[20:55] * lupu (~lupu@86.107.101.246) has joined #ceph
[20:55] * bandrus (~Adium@66.87.118.234) Quit (Read error: Connection reset by peer)
[20:55] * bandrus (~Adium@66.87.118.234) has joined #ceph
[20:56] * Cube1 (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[20:58] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit (Read error: Operation timed out)
[21:02] * aldavud (~aldavud@213.55.184.251) has joined #ceph
[21:06] * rturk|afk is now known as rturk
[21:11] <KaZeR> Hey there. I'm trying to put my journal on an SSD for a new OSD. I formatted the HDD, mounted it, did a ceph-osd mkfs, linked the journal to the partition on my SSD, did ceph-osd mkjournal
[21:11] <KaZeR> so far so good. but when i try to start my osd, it reads the journal then aborts : http://bpaste.net/show/hlGQy3v6njL1IvNr3Wvt/
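(For reference, the sequence KaZeR describes is roughly the following; the OSD id and device paths are placeholders, not taken from the log:

    # assuming osd.12 with its data filesystem already mounted
    ceph-osd -i 12 --mkfs --mkkey
    # point the journal at the SSD partition, then initialize it
    ln -s /dev/disk/by-partuuid/<journal-part-uuid> /var/lib/ceph/osd/ceph-12/journal
    ceph-osd -i 12 --mkjournal
)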
[21:19] * alop (~abelopez@128-107-239-235.cisco.com) has joined #ceph
[21:20] * gregsfortytwo (~Adium@129.210.115.6) has joined #ceph
[21:20] <alop> Hey guys, quick question
[21:20] <alop> I could have sworn I remember hearing about the great advantages of using qcow2 formatted images in glance with a ceph backend
[21:21] <alop> now I'm hearing that qcow2 is not supported
[21:21] <jack> using rbd ?
[21:21] <alop> ya
[21:21] <jack> useless, then
[21:22] <jack> which advantages are you thinking about ?
[21:22] <alop> IIRC, the benefit was that using qcow2 images would allow for new instances to be launched/cloned faster
[21:23] <KaZeR> alop, you can still have copy-on-write clones of your raw image for your instances
[21:23] <KaZeR> it works
[21:23] * nwat (~textual@eduroam-229-119.ucsc.edu) has joined #ceph
[21:23] <KaZeR> (as long as you don't use ephemeral volumes)
[21:23] <alop> alright
[21:25] <KaZeR> alop, which version of openstack are you using ?
[21:26] <alop> Havana right now, moving to Icehouse
[21:26] <alop> I think the major advantage I was looking for was uploading an 800MB image instead of a 20GB image
[21:28] * rpowell1 (~rpowell@128.135.100.107) has left #ceph
[21:29] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Quit: Leaving.)
[21:30] <KaZeR> yeah for that you're screwed AFAIK
[21:30] <KaZeR> also for havana you'll need to use a different git branch https://github.com/jdurgin/nova/tree/havana-ephemeral-rbd
[21:32] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[21:33] <alop> Thanks, I guess I just misunderstood last time I heard it
[21:38] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[21:38] <KaZeR> np
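(The usual way to address the upload-size concern while keeping raw images, which the RBD backend needs for copy-on-write cloning, is to convert before uploading; a sketch, with image names as placeholders:

    qemu-img convert -f qcow2 -O raw myimage.qcow2 myimage.raw
    glance image-create --name myimage --disk-format raw \
        --container-format bare --file myimage.raw
)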
[21:44] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[21:44] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[21:44] * m0e (~Moe@41.45.208.72) Quit (Quit: This computer has gone to sleep)
[21:45] <KaZeR> can you change osd recovery op priority on the fly?
[21:45] <KaZeR> the doc at https://ceph.com/docs/master/rados/configuration/osd-config-ref/ only discusses setting it in the config file
[21:49] * sarob (~sarob@129.210.115.7) Quit (Remote host closed the connection)
[21:50] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[21:51] * markbby (~Adium@168.94.245.3) has joined #ceph
[21:52] <dcurtiss> KaZeR: try http://ceph.com/docs/master/rados/configuration/ceph-conf/#runtime-changes
[21:52] <KaZeR> thanks dcurtiss
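(That page describes injecting settings into running daemons; applied to this option it would look like the following, where the priority value is just an example:

    # all OSDs
    ceph tell osd.* injectargs '--osd-recovery-op-priority 1'
    # or a single OSD
    ceph tell osd.0 injectargs '--osd-recovery-op-priority 1'
)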
[21:53] * ircolle (~Adium@129.210.115.7) Quit (Quit: Leaving.)
[21:54] * rturk is now known as rturk|afk
[22:01] * ghartz_ (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) has joined #ceph
[22:02] * danieagle (~Daniel@179.176.52.247.dynamic.adsl.gvt.net.br) Quit (Quit: Thanks for everything! :-) see you! :-))
[22:02] * Sysadmin88 (~IceChat77@94.4.22.173) has joined #ceph
[22:03] <ponyofdeath> wrencsok: sweet thanks!
[22:03] <ponyofdeath> glad it was something like that and nothing major
[22:04] * bandrus (~Adium@66.87.118.234) Quit (Quit: Leaving.)
[22:04] * aldavud (~aldavud@213.55.184.251) Quit (Ping timeout: 480 seconds)
[22:04] * thomnico (~thomnico@2a01:e35:8b41:120:b841:4631:c163:3a27) Quit (Quit: Ex-Chat)
[22:04] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[22:05] * rturk|afk is now known as rturk
[22:05] * leseb (~leseb@185.21.174.206) has joined #ceph
[22:08] * ssejourne (~ssejourne@37.187.216.206) has joined #ceph
[22:09] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[22:09] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[22:15] * bandrus (~Adium@66.87.118.234) has joined #ceph
[22:16] * n-st (~n-st@0001c80a.user.oftc.net) has left #ceph
[22:23] * cce (~cce@50.56.54.167) Quit (Remote host closed the connection)
[22:23] * m0e (~Moe@41.45.208.72) has joined #ceph
[22:23] * rotbeard (~redbeard@2a02:908:df11:9480:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[22:25] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:26] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[22:26] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[22:27] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:29] * ssejourne (~ssejourne@37.187.216.206) Quit (Quit: leaving)
[22:30] * drankis_ (~drankis__@37.148.173.239) Quit (Ping timeout: 480 seconds)
[22:31] * bandrus (~Adium@66.87.118.234) Quit (Read error: Operation timed out)
[22:32] * ssejourne (~ssejourne@37.187.216.206) has joined #ceph
[22:33] * bandrus (~Adium@66-87-135-35.pools.spcsdns.net) has joined #ceph
[22:36] * Nacer_ (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[22:38] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[22:38] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[22:39] * drankis_ (~drankis__@89.111.13.198) has joined #ceph
[22:39] * jcsp_ (~john@185.34.80.249) has joined #ceph
[22:43] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[22:47] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[22:49] * sarob (~sarob@129.210.115.7) has joined #ceph
[22:49] * rturk is now known as rturk|afk
[22:53] * rturk|afk is now known as rturk
[22:54] * sprachgenerator (~sprachgen@130.202.135.240) Quit (Quit: sprachgenerator)
[22:55] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[22:59] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:59] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:01] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) has joined #ceph
[23:01] <bens> yo!
[23:01] <bens> ceph 0.80 is in the dumpling repo!
[23:01] <bens> (3/7): ceph-0.80.1-2.el6.x86_64.rpm | 18 MB 00:00
[23:01] <bens> bad news!
[23:01] <bens> baseurl=http://ceph.com/rpm-dumpling/el6/$basearch
[23:02] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[23:02] * ChanServ sets mode +v andreask
[23:02] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[23:04] <bens> when i did a yum update on my system it tried to give me firefly.
[23:04] <bens> any idea why?
[23:04] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:d1f1:821b:225f:d973) Quit (Remote host closed the connection)
[23:05] * bandrus1 (~Adium@66.87.118.32) has joined #ceph
[23:05] * sleinen1 (~Adium@2001:620:0:26:e91f:fa39:d160:5d28) Quit (Quit: Leaving.)
[23:06] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:08] * allsystemsarego (~allsystem@188.27.188.69) Quit (Quit: Leaving)
[23:12] * bandrus (~Adium@66-87-135-35.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[23:12] * sarob_ (~sarob@129.210.115.7) has joined #ceph
[23:13] * m0e (~Moe@41.45.208.72) Quit (Quit: This computer has gone to sleep)
[23:15] * Nacer_ (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[23:16] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[23:16] <iggy> bens: it's been reported (earlier in here and on the mailing list I think)
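(Until the repo is fixed, one way to keep yum on dumpling is to lock the installed versions; a sketch, assuming yum-plugin-versionlock is available — the package list is abbreviated and exact spec syntax may vary by plugin version:

    # see what the repo now offers
    yum --showduplicates list ceph
    # lock the currently installed dumpling packages
    yum install yum-plugin-versionlock
    yum versionlock ceph ceph-common
)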
[23:17] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[23:17] <bens> ok
[23:17] <bens> i just paniced
[23:17] <kraken> http://i.imgur.com/tpGQV.gif
[23:17] <bens> sorry
[23:17] * sarob (~sarob@129.210.115.7) Quit (Ping timeout: 480 seconds)
[23:17] <bens> panicked
[23:17] <kraken> http://i.imgur.com/rhNOy3I.gif
[23:17] * ssejour (~sebastien@ec135-1-78-239-10-19.fbx.proxad.net) has joined #ceph
[23:18] * ircolle (~Adium@mobile-198-228-212-145.mycingular.net) has joined #ceph
[23:19] * drankis_ (~drankis__@89.111.13.198) Quit (Ping timeout: 480 seconds)
[23:22] * Shmouel1 (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[23:28] <bens> thank you kraken
[23:28] <kraken> bens: you got it :)
[23:28] <bens> we can be friends.
[23:30] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[23:30] * Tamil (~Adium@cpe-142-136-97-92.socal.res.rr.com) has joined #ceph
[23:32] * yuriw1 (~Adium@ABordeaux-654-1-80-223.w109-214.abo.wanadoo.fr) Quit (Quit: Leaving.)
[23:35] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[23:38] * sverrest_ (~sverrest@cm-84.208.166.184.getinternet.no) has joined #ceph
[23:39] * sverrest (~sverrest@cm-84.208.166.184.getinternet.no) Quit (Ping timeout: 480 seconds)
[23:40] * jskinner (~jskinner@69.170.148.179) Quit (Quit: Leaving...)
[23:48] <Kupo1> What should osd op threads be set to on a pure SSD pool, # of cores on the node?
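(The question goes unanswered in the log; for reference, the option lives in the [osd] section of ceph.conf and can also be injected at runtime. The value below is a placeholder, not a recommendation:

    # ceph.conf:
    #   [osd]
    #   osd op threads = 4
    # or at runtime:
    ceph tell osd.* injectargs '--osd-op-threads 4'
)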
[23:52] * ghartz_ (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) Quit (Remote host closed the connection)
[23:56] * jcsp_ (~john@185.34.80.249) Quit (Ping timeout: 480 seconds)
[23:57] * nwat (~textual@eduroam-229-119.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.