#ceph IRC Log


IRC Log for 2014-03-25

Timestamps are in GMT/BST.

[0:01] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Quit: Leaving.)
[0:02] * mattt (~textual@CPE0026f326e530-CM0026f326e52d.cpe.net.cable.rogers.com) Quit ()
[0:03] * diegows_ (~diegows@186.61.17.101) Quit (Read error: Connection reset by peer)
[0:03] * h6w (~tudor@254.86.96.58.static.exetel.com.au) has joined #ceph
[0:04] <h6w> Does ceph require an mds daemon to mount it with ceph-fuse?
[0:05] <dmick> mds is required for any use of cephfs
[0:05] <dmick> "ceph" is more than just cephfs
[0:05] <dmick> cephfs is the posix filesystem component
[0:05] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) Quit (Ping timeout: 480 seconds)
[0:06] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[0:07] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:07] <h6w> dmick: Thanks. I *just* found that answer on the cephfs page, also. :-)
[0:08] <h6w> Indeed. I'm trying to determine why my openstack message queue might be failing. I've changed IPs and moved the monitor, and everything ceph-side seems to be correct.
[0:09] * AfC (~andrew@nat-gw1.syd4.anchor.net.au) has joined #ceph
[0:09] <h6w> I don't believe that an mds was running beforehand, so OS must be using ceph in another way. I was just trying to verify that it was all running as expected.
[0:11] <dmick> OpenStack typically uses the block device, yes
[0:11] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[0:13] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) has joined #ceph
[0:13] <h6w> As in, rbd? radosgw isn't even installed, so I'm guessing it's using rbd. But rbd ls lists nothing. :-(
[0:14] <dmick> rbd, yes
[0:15] <h6w> Hmmm, maybe it's the pool name.
[0:15] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) has joined #ceph
[0:16] <BillK> What is the best way to delete lots of files from cephfs? - (security camera videos/jpegs)
[0:16] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) Quit ()
[0:17] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:17] <BillK> if I try a mass delete, the cluster comes to a screeching halt as the deletes queue up - currently I am scripting it with a 2 second delay between each delete ... is there a better way?
[0:18] <h6w> Ahah! "ceph osd lspools" lists the pools: "0 data,1 metadata,2 rbd,3 images,4 volumes,5 compute"
[0:18] <h6w> "rbd ls images" lists the images by UUID. :-D
[0:19] <dmick> there you go
[0:20] <h6w> Hmmm. So all I've managed to prove is that the ceph cluster is working. Still have no idea why my message queue would chuck a 500 error. :-|
[0:20] <h6w> I've learned something, tho! :-D
[0:21] <h6w> BillK: What kind of network contention do you have? Is your ceph cluster on its own private net or sharing?
[0:24] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[0:28] <BillK> it's two hosts (not high performance), 2 osds each, 1g link. Even if I stop all other usage it still starts timing out if I queue too many deletes as the buffers fill up (I am talking of deleting many days of up to 20-30000 files each :) - painful
[0:30] <BillK> From what I have read it's a basic problem with cephfs and lots of smallish files, so I am looking more at what's the best strategy, as improving the performance of the cluster (new hardware) is not currently possible
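A minimal sketch of the throttled delete BillK describes above, assuming bash and GNU find on the client; the CephFS mount point and recording directory are placeholders:

    # walk one day's worth of recordings and unlink files one at a time,
    # pausing between deletes so the MDS is not flooded with requests
    find /mnt/cephfs/recordings/2014-03-01 -type f -print0 |
    while IFS= read -r -d '' f; do
        rm -f -- "$f"
        sleep 2   # the 2-second delay mentioned above
    done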
[0:30] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[0:35] * Serbitar (~david@autoconfig.asguard.org.uk) has joined #ceph
[0:38] * skeenan (~Adium@8.21.68.242) has joined #ceph
[0:42] * sputnik13 (~sputnik13@64.134.221.62) has joined #ceph
[0:42] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:42] * sputnik13 (~sputnik13@64.134.221.62) Quit ()
[0:44] <h6w> Yeah. I saw something on the mailing list the other day about it. Someone was talking about having 10000 files in a directory. They were concentrating on the swift issue, tho.
[0:47] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[0:48] * mattt (~textual@CPE0026f326e530-CM0026f326e52d.cpe.net.cable.rogers.com) has joined #ceph
[0:52] * mattt (~textual@CPE0026f326e530-CM0026f326e52d.cpe.net.cable.rogers.com) Quit ()
[0:59] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:59] * jtaguinerd (~Adium@112.205.12.151) has joined #ceph
[1:04] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[1:06] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:11] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[1:11] * mattt (~textual@CPE0026f326e530-CM0026f326e52d.cpe.net.cable.rogers.com) has joined #ceph
[1:13] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:14] * fatih_ (~fatih@78.186.36.182) has joined #ceph
[1:17] * hybrid512 (~walid@195.200.167.70) Quit (Ping timeout: 480 seconds)
[1:18] * fatih (~fatih@78.186.36.182) Quit (Ping timeout: 480 seconds)
[1:19] * fatih (~fatih@78.186.36.182) has joined #ceph
[1:22] * fatih_ (~fatih@78.186.36.182) Quit (Ping timeout: 480 seconds)
[1:23] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Quit: Leaving.)
[1:24] * The_Bishop (~bishop@g229164051.adsl.alicedsl.de) Quit (Quit: Who the hell is this Peer? If I ever catch him I'm going to reset his connection!)
[1:26] * jharley (~jharley@173-230-163-47.cable.teksavvy.com) has joined #ceph
[1:28] * bitblt (~don@128-107-239-233.cisco.com) Quit (Quit: Leaving)
[1:29] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:32] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[1:33] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[1:37] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[1:40] * mattt (~textual@CPE0026f326e530-CM0026f326e52d.cpe.net.cable.rogers.com) Quit (Quit: Computer has gone to sleep.)
[1:43] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[1:46] * zack_dolby (~textual@e0109-114-22-14-183.uqwimax.jp) has joined #ceph
[1:54] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:54] * joao|lap (~JL@a95-92-33-54.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[1:54] * skeenan (~Adium@8.21.68.242) Quit (Ping timeout: 480 seconds)
[2:02] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[2:06] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[2:09] * JeffK (~Jeff@38.99.52.10) Quit (Quit: JeffK)
[2:13] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[2:15] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:22] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[2:28] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[2:28] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Read error: Connection reset by peer)
[2:37] * Boltsky (~textual@office.deviantart.net) Quit (Ping timeout: 480 seconds)
[2:39] * hflai (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[2:44] * Lea (~LeaChim@host86-159-235-225.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:46] * jharley (~jharley@173-230-163-47.cable.teksavvy.com) Quit (Quit: jharley)
[2:54] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[2:54] * stewiem20001 (~stewiem20@195.10.250.233) has joined #ceph
[2:56] * jdmason (~jon@jfdmzpr06-ext.jf.intel.com) Quit (Remote host closed the connection)
[2:56] * JoeGruher (~JoeGruher@134.134.137.75) Quit (Read error: Connection reset by peer)
[2:57] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[2:57] * jdmason (~jon@134.134.137.71) has joined #ceph
[2:59] * zack_dolby (~textual@e0109-114-22-14-183.uqwimax.jp) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * BillK (~BillK-OFT@124-148-70-238.dyn.iinet.net.au) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * Nats (~Nats@telstr575.lnk.telstra.net) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * toutour (~toutour@causses.idest.org) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * stewiem2000 (~stewiem20@195.10.250.233) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * dlan_ (~dennis@116.228.88.131) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * eternaleye (~eternaley@50.245.141.73) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * baffle (baffle@jump.stenstad.net) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * olc- (~olecam@paola.glou.fr) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * loicd (~loicd@bouncer.dachary.org) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * swizgard_ (~swizgard@port-87-193-133-18.static.qsc.de) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * nyerup_ (irc@jespernyerup.dk) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * nwf (~nwf@67.62.51.95) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * partner_ (joonas@ajaton.net) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * brother (foobaz@vps1.hacking.dk) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * twx_ (~twx@rosamoln.org) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * todin (tuxadero@kudu.in-berlin.de) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * ferai (~quassel@corkblock.jefferai.org) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * LCF (ball8@193.231.broadband16.iol.cz) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * Ormod (~valtha@ohmu.fi) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * Anticimex (anticimex@95.80.32.80) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * tom2 (~jens@s11.jayr.de) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * liiwi (liiwi@idle.fi) Quit (charon.oftc.net magnet.oftc.net)
[2:59] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[3:00] * zack_dolby (~textual@e0109-114-22-14-183.uqwimax.jp) has joined #ceph
[3:00] * BillK (~BillK-OFT@124-148-70-238.dyn.iinet.net.au) has joined #ceph
[3:00] * Nats (~Nats@telstr575.lnk.telstra.net) has joined #ceph
[3:00] * toutour (~toutour@causses.idest.org) has joined #ceph
[3:00] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[3:00] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[3:00] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[3:00] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[3:00] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[3:00] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) has joined #ceph
[3:00] * baffle (baffle@jump.stenstad.net) has joined #ceph
[3:00] * olc- (~olecam@paola.glou.fr) has joined #ceph
[3:00] * loicd (~loicd@bouncer.dachary.org) has joined #ceph
[3:00] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) has joined #ceph
[3:00] * swizgard_ (~swizgard@port-87-193-133-18.static.qsc.de) has joined #ceph
[3:00] * nyerup_ (irc@jespernyerup.dk) has joined #ceph
[3:00] * nwf (~nwf@67.62.51.95) has joined #ceph
[3:00] * partner_ (joonas@ajaton.net) has joined #ceph
[3:00] * brother (foobaz@vps1.hacking.dk) has joined #ceph
[3:00] * twx_ (~twx@rosamoln.org) has joined #ceph
[3:00] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[3:00] * ferai (~quassel@corkblock.jefferai.org) has joined #ceph
[3:00] * tom2 (~jens@s11.jayr.de) has joined #ceph
[3:00] * Anticimex (anticimex@95.80.32.80) has joined #ceph
[3:00] * LCF (ball8@193.231.broadband16.iol.cz) has joined #ceph
[3:00] * Ormod (~valtha@ohmu.fi) has joined #ceph
[3:00] * liiwi (liiwi@idle.fi) has joined #ceph
[3:01] * jtaguinerd (~Adium@112.205.12.151) Quit (Quit: Leaving.)
[3:03] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[3:06] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Quit: Leaving.)
[3:07] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[3:14] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[3:14] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[3:15] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:19] * yanzheng (~zhyan@134.134.137.71) has joined #ceph
[3:21] <houkouonchi-work> loicd: btw that machine (mira052) was getting btrfs oopses like crazy and soft lockups. ssh wasn't responding and nuke won't work without ssh. I was able to powercycle and then nuke. I left it locked under you still.
[3:21] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:29] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:31] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[3:38] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[3:46] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:46] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:57] <winston-d> joshd: ping
[3:58] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:09] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[4:12] * h6w (~tudor@254.86.96.58.static.exetel.com.au) Quit (Ping timeout: 480 seconds)
[4:16] * xdeller (~xdeller@109.188.124.66) Quit (Ping timeout: 480 seconds)
[4:17] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[4:17] * h6w (~tudor@254.86.96.58.static.exetel.com.au) has joined #ceph
[4:28] * haomaiwa_ (~haomaiwan@118.187.35.6) has joined #ceph
[4:29] * JCL (~JCL@2601:9:5980:39b:3db9:964d:f47b:25a1) has joined #ceph
[4:31] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[4:34] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Quit: Leaving.)
[4:36] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[4:36] * prakashsurya (~dunecn@c-98-224-26-177.hsd1.ca.comcast.net) has joined #ceph
[4:37] * yanzheng (~zhyan@134.134.137.71) Quit (Remote host closed the connection)
[4:39] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:39] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[4:43] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:45] <prakashsurya> Hi. Is anybody here familiar with Ceph's automated testing framework "teuthology"?
[4:47] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[4:51] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[4:59] <dmick> prakashsurya: somewhat. how can I help?
[5:01] <prakashsurya> Well, I don't really have a specific question yet..
[5:01] <prakashsurya> But I'm looking at setting up some automated testing at my company
[5:02] <prakashsurya> I work for LLNL, and was looking at different testing infrastructure "stuff" available
[5:02] <prakashsurya> and I'm curious if it would work for me
[5:02] <prakashsurya> So I'm just trying to get a grip on how it's used, and if I could take advantage of it for the internal testing I'm trying to set up
[5:04] <prakashsurya> I would ideally use it with Lustre, which is another distributed filesystem, so I thought it might fit the niche I'm looking for. But I'm completely unfamiliar with it.
[5:05] <dmick> ok
[5:05] * Vacum_ (~vovo@88.130.206.247) has joined #ceph
[5:08] <prakashsurya> Yea.. Sorry for such a vague "question".
[5:10] <prakashsurya> I'm just trying to take advantage of any existing software that I can. I'd rather not create my own internal solution if possible.
[5:10] <dmick> well, I mean, there's not much to say; it is an automated test framework of particular sort
[5:10] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[5:12] * Vacum (~vovo@88.130.219.77) Quit (Ping timeout: 480 seconds)
[5:13] <prakashsurya> Yea.. I guess i just need to spend some time trying to set it up and playing with it. I just stumbled on the github repo a few hours ago.
[5:16] <prakashsurya> Is there any documentation on setting it up besides the readme? and/or on how inktank is using it?
[5:16] <dmick> not that I'm aware of
[5:18] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[5:18] <prakashsurya> OK. Thanks!
[5:18] <dmick> the basic idea is "connect with ssh to machines that are already installed with an OS, and install and regression/stresstest Ceph on them, collecting logs of all tests". There's also a locking subsystem to share a pool of machines which is optional
[5:18] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[5:19] <prakashsurya> Thats'
[5:19] <prakashsurya> oops. That's basically what I'm looking for.
[5:19] <prakashsurya> While tying into a "cloud" provider to provision VMs
[5:20] <dmick> there's been talk of different provisioning backends, but that work hasn't really been done
[5:21] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[5:21] <prakashsurya> I see
[5:22] <prakashsurya> any chance you know why they moved away from autotest?
[5:22] <prakashsurya> that's what I've been looking at recently
[5:22] <prakashsurya> but from what I've read, autotest didn't really work
[5:22] <dmick> not specifically. I know there's a lot about teuthology that's purpose-built for Ceph testing
[5:23] <prakashsurya> that's a common theme. everything seems to be tied heavily to the project they originated from.
[5:23] <dmick> sure, it's the easy path
[5:25] <prakashsurya> I'd imagine testing Lustre and Ceph would have similar requirements which piqued my interest in teuthology
[5:25] <prakashsurya> just going to have to play with it to see if i can get it to work
[5:26] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:26] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) Quit (Remote host closed the connection)
[5:28] <prakashsurya> anyways, thanks for the help
[5:28] <dmick> I expect you'll have more questions after a bit :)
[5:28] * Cube (~Cube@66-87-67-4.pools.spcsdns.net) has joined #ceph
[5:40] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[5:41] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[5:42] * wrale_ (~wrale@cpe-107-9-20-3.woh.res.rr.com) Quit (Quit: Leaving...)
[5:44] * prakashsurya (~dunecn@c-98-224-26-177.hsd1.ca.comcast.net) Quit (Quit: leaving)
[5:49] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:49] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:03] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[6:11] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:12] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[6:15] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[6:20] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:30] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[6:31] * sleinen1 (~Adium@2001:620:0:26:a90d:3125:a7b6:ec3f) has joined #ceph
[6:36] * sleinen1 (~Adium@2001:620:0:26:a90d:3125:a7b6:ec3f) Quit ()
[6:36] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[6:38] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:44] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:46] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[6:51] * jtaguinerd (~Adium@121.54.32.130) has joined #ceph
[6:59] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:14] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[7:15] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:18] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[7:18] * Warod (warod@lakka.kapsi.fi) Quit (Ping timeout: 480 seconds)
[7:20] * Warod (warod@lakka.kapsi.fi) has joined #ceph
[7:29] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[7:30] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit ()
[7:30] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[7:31] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[7:38] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:44] * AfC (~andrew@nat-gw1.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[7:45] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[7:46] * sleinen1 (~Adium@2001:620:0:26:a11f:fca9:a444:27a7) has joined #ceph
[7:53] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:57] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[7:58] * lianghaoshen (~slhhust@119.39.124.239) has joined #ceph
[7:59] * dis (~dis@109.110.66.165) Quit (Ping timeout: 480 seconds)
[8:01] * dis (~dis@109.110.66.7) has joined #ceph
[8:01] * fghaas (~florian@83-238-245-215.ip.netia.com.pl) has joined #ceph
[8:13] * AfC (~andrew@101.119.28.210) has joined #ceph
[8:14] * sleinen1 (~Adium@2001:620:0:26:a11f:fca9:a444:27a7) Quit (Quit: Leaving.)
[8:14] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:14] * fghaas (~florian@83-238-245-215.ip.netia.com.pl) Quit (Quit: Leaving.)
[8:22] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:37] * AfC (~andrew@101.119.28.210) Quit (Ping timeout: 480 seconds)
[8:41] * zack_dolby (~textual@e0109-114-22-14-183.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:45] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) Quit (Remote host closed the connection)
[8:48] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[8:51] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:52] * fghaas (~florian@213.17.226.11) has joined #ceph
[8:53] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[8:55] * thomnico (~thomnico@2a01:e35:8b41:120:51bd:4913:9399:150b) has joined #ceph
[8:56] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[8:57] * analbeard (~shw@141.0.32.124) has joined #ceph
[8:58] * ksingh (~Adium@2001:708:10:10:3cb7:891c:a70a:2d2) has joined #ceph
[9:08] * shang (~ShangWu@175.41.48.77) has joined #ceph
[9:11] * joelio (~Joel@88.198.107.214) Quit (Ping timeout: 480 seconds)
[9:12] * WintermeW_ (~WintermeW@212-83-158-61.rev.poneytelecom.eu) Quit (Remote host closed the connection)
[9:12] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) has joined #ceph
[9:14] * Cube (~Cube@66-87-67-4.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[9:14] * Cube (~Cube@66-87-67-4.pools.spcsdns.net) has joined #ceph
[9:16] * joelio (~Joel@88.198.107.214) has joined #ceph
[9:19] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: If you think nobody cares, try missing a few payments)
[9:28] * WintermeW (~WintermeW@212-83-158-61.rev.poneytelecom.eu) has joined #ceph
[9:33] * l3iggs (~oftc-webi@c-24-130-225-173.hsd1.ca.comcast.net) has joined #ceph
[9:34] * sleinen (~Adium@130.59.94.192) has joined #ceph
[9:34] <l3iggs> i'm having some trouble mounting my cephFS
[9:35] <l3iggs> i wonder if anyone can help me out here
[9:35] <l3iggs> i'm trying to mount like this:
[9:35] <l3iggs> sudo mount -t ceph 192.168.1.216:6789:/ /mnt/mycephfs -o name=admin,secret=`ceph-authtool -p /etc/ceph/ceph.client.admin.keyring`
[9:35] <l3iggs> but after a bit i always get mount error 5 = Input/output error
[9:36] <l3iggs> anyone have any suggestions on how to solve this?
[9:42] <l3iggs> anyone out there?
[9:42] <Gugge-47527> anything in /var/log/messages ?
[9:42] <Gugge-47527> or wherever your distro logs stuff :)
[9:42] * sleinen (~Adium@130.59.94.192) Quit (Ping timeout: 480 seconds)
[9:42] * sleinen (~Adium@2001:620:0:26:486b:e641:ea95:3a5a) has joined #ceph
[9:43] <l3iggs> yes
[9:43] <l3iggs> in my /var/log/messages
[9:44] <l3iggs> [ 2434.652825] libceph: mon0 192.168.1.216:6789 session established
[9:44] <l3iggs> libceph: mon0 192.168.1.216:6789 session established
[9:44] <l3iggs> libceph: client4212 fsid X....
[9:44] <Gugge-47527> do you have access to /etc/ceph/ceph.client.admin.keyring as the user running the command?
[9:45] <l3iggs> yes
[9:45] <l3iggs> ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
[9:45] * AfC (~andrew@gateway.syd.operationaldynamics.com) has joined #ceph
[9:45] <l3iggs> spits out my key
[9:45] <Gugge-47527> you did try with the key directly in the command right?
[9:46] <Gugge-47527> just to rule something strange out
[9:46] <l3iggs> so does sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
[9:47] <l3iggs> just tried it, it hangs for a while then mount error 5 = Input/output error
[9:47] <Gugge-47527> or -o user=admin,secretfile=/etc/ceph/ceph.client.admin.keyring
[9:47] <l3iggs> well no
[9:47] <l3iggs> i thought the secret file was ONLY the key
[9:48] <Gugge-47527> that may be, its been a while since i played with cephfs :)
[9:48] <l3iggs> as per http://ceph.com/docs/master/start/quick-cephfs/
[9:48] <l3iggs> i've tried that
[9:48] <l3iggs> no dice
[9:48] <Gugge-47527> but i always got errors in my logs when mount didnt work
[9:49] <Gugge-47527> so i cant really help you :(
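For reference, a sketch of the secretfile variant discussed above; as l3iggs notes, the secret file must contain only the base64 key, which "ceph-authtool -p" prints (monitor address and paths are taken from the earlier mount command):

    ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > /etc/ceph/admin.secret
    chmod 600 /etc/ceph/admin.secret
    sudo mount -t ceph 192.168.1.216:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret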
[9:49] <baffle> Hi, I'm having problems getting RadosGW to auth with keystone; It seems to be unable to auth using the shared admin token? I'm using 0.72 from official repo on Ubuntu. Configuration and log: http://pastebin.com/Pd6PpVHe Keystone is supernew from git but works with other services.
[9:49] <baffle> Anyone have 0.72 working with keystone? :)
[9:49] <l3iggs> darn
[9:49] <l3iggs> thanks anyway
[9:50] <baffle> Not using CephFS, sorry.
[9:50] <l3iggs> anyone else have any idea how to solve mount error 5 = Input/output error?
[9:51] <Gugge-47527> maybe look at the mds log when you try to mount
[9:51] <l3iggs> yeah
[9:51] <l3iggs> i was trying that
[9:51] <l3iggs> a lot is spinning by in that log
[9:51] <l3iggs> and i stared at it for a while during the mount
[9:52] <l3iggs> and nothing jumped out at me
[9:55] * oro (~oro@2001:620:20:16:c901:fc01:9cf9:26e) has joined #ceph
[9:55] <l3iggs> i have a ton of is_readable now=2014-03-25 01:54:43.448251 lease_expire=0.000000 messages
[9:55] <l3iggs> 0(leader).paxos(paxos active c 1507..2140) is_readable now=2014-03-25 01:55:28.451004 lease_expire=0.000000 has v0 lc 2140
[9:58] <l3iggs> i get these warnings also: osd.1 [WRN] map e426 wrongly marked me down
[9:58] * andreask (~andreask@213.150.31.17) has joined #ceph
[9:58] * ChanServ sets mode +v andreask
[9:59] * AfC (~andrew@gateway.syd.operationaldynamics.com) Quit (Quit: Leaving.)
[10:04] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[10:06] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[10:07] * tryggvil (~tryggvil@178.19.53.254) Quit ()
[10:07] <classicsnail> I've seen that on 0.72.2 a fair bit, in my case it seemed to be related to too many PGs
[10:08] <classicsnail> after I rebuilt the cluster with a recalculated pg count, I don't get it
[10:09] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[10:11] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[10:13] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[10:29] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[10:36] * lianghaoshen (~slhhust@119.39.124.239) Quit (Remote host closed the connection)
[10:37] * Lea (~LeaChim@host86-159-235-225.range86-159.btcentralplus.com) has joined #ceph
[10:38] * renzhi (~renzhi@192.241.193.44) Quit (Ping timeout: 480 seconds)
[10:43] * allsystemsarego (~allsystem@5-12-37-194.residential.rdsnet.ro) has joined #ceph
[10:43] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[10:47] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[10:48] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) has joined #ceph
[10:49] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) Quit ()
[10:56] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:59] <loicd> houkouonchi-work: thanks !
[11:00] * xmltok (~xmltok@216.103.134.250) Quit (Remote host closed the connection)
[11:00] * jtaguinerd1 (~Adium@121.54.44.183) has joined #ceph
[11:01] * ksingh (~Adium@2001:708:10:10:3cb7:891c:a70a:2d2) Quit (Ping timeout: 480 seconds)
[11:01] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[11:01] * ksingh (~Adium@85-76-35-67-nat.elisa-mobile.fi) has joined #ceph
[11:06] * jtaguinerd (~Adium@121.54.32.130) Quit (Ping timeout: 480 seconds)
[11:06] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[11:06] * oro (~oro@2001:620:20:16:c901:fc01:9cf9:26e) Quit (Ping timeout: 480 seconds)
[11:11] * ksingh1 (~Adium@2001:708:10:91:9472:88f2:be0f:c09c) has joined #ceph
[11:12] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[11:14] * andreask (~andreask@213.150.31.17) has left #ceph
[11:15] * ksingh (~Adium@85-76-35-67-nat.elisa-mobile.fi) Quit (Read error: No route to host)
[11:15] * ksingh2 (~Adium@85-76-35-67-nat.elisa-mobile.fi) has joined #ceph
[11:17] * fatih_ (~fatih@162.243.172.91) has joined #ceph
[11:18] * oro (~oro@2001:620:20:222:c0c7:3647:c369:d16b) has joined #ceph
[11:22] * ksingh1 (~Adium@2001:708:10:91:9472:88f2:be0f:c09c) Quit (Ping timeout: 480 seconds)
[11:22] * andreask (~andreask@213.150.31.17) has joined #ceph
[11:22] * ChanServ sets mode +v andreask
[11:22] * fatih (~fatih@78.186.36.182) Quit (Ping timeout: 480 seconds)
[11:23] * sleinen (~Adium@2001:620:0:26:486b:e641:ea95:3a5a) Quit (Ping timeout: 480 seconds)
[11:24] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[11:24] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[11:25] * ksingh (~Adium@b-v4-0205.vpn.csc.fi) has joined #ceph
[11:25] * ksingh2 (~Adium@85-76-35-67-nat.elisa-mobile.fi) Quit (Ping timeout: 480 seconds)
[11:28] * bboris (~boris@78.90.142.146) Quit (Ping timeout: 480 seconds)
[11:35] * fatih (~fatih@78.186.36.182) has joined #ceph
[11:37] * fatih_ (~fatih@162.243.172.91) Quit (Read error: Operation timed out)
[11:38] * andreask (~andreask@213.150.31.17) Quit (Ping timeout: 480 seconds)
[11:47] * ksingh1 (~Adium@2001:708:10:91:c5a7:8b05:cb21:6cbd) has joined #ceph
[11:52] * fdmanana (~fdmanana@bl5-172-157.dsl.telepac.pt) has joined #ceph
[11:52] * ksingh (~Adium@b-v4-0205.vpn.csc.fi) Quit (Ping timeout: 480 seconds)
[11:52] * ksingh (~Adium@2001:708:10:10:b17f:dfa9:c81a:1767) has joined #ceph
[11:55] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Ping timeout: 480 seconds)
[11:59] * ksingh1 (~Adium@2001:708:10:91:c5a7:8b05:cb21:6cbd) Quit (Ping timeout: 480 seconds)
[12:00] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[12:03] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[12:05] * bboris (~boris@router14.mail.bg) has joined #ceph
[12:07] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[12:11] * fatih (~fatih@78.186.36.182) Quit (Quit: Linkinus - http://linkinus.com)
[12:11] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Ping timeout: 480 seconds)
[12:14] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[12:15] * andreask (~andreask@213.150.31.3) has joined #ceph
[12:15] * ChanServ sets mode +v andreask
[12:16] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[12:17] * andreask (~andreask@213.150.31.3) has left #ceph
[12:18] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[12:23] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[12:24] * oro (~oro@2001:620:20:222:c0c7:3647:c369:d16b) Quit (Ping timeout: 480 seconds)
[12:30] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[12:33] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[12:35] <jtaguinerd1> Hi guys
[12:36] <jtaguinerd1> increasing object size would also mean increasing the storage usage??
[12:36] <jtaguinerd1> thanks in advance
[12:43] * thb (~me@2a02:2028:230:fbb0:6267:20ff:fec9:4e40) has joined #ceph
[12:44] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[12:46] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[12:53] * andreask (~andreask@213.150.31.3) has joined #ceph
[12:53] * ChanServ sets mode +v andreask
[12:54] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[12:55] * andreask (~andreask@213.150.31.3) has left #ceph
[12:58] <glambert> jtaguinerd1, obvious answer would be yes, but would you expect otherwise?
[13:00] * finster (~finster@cmdline.guru) Quit (Ping timeout: 480 seconds)
[13:00] * fghaas (~florian@213.17.226.11) Quit (Quit: Leaving.)
[13:01] * fghaas (~florian@213.17.226.11) has joined #ceph
[13:01] * fghaas (~florian@213.17.226.11) Quit ()
[13:02] * oro (~oro@2001:620:20:16:c901:fc01:9cf9:26e) has joined #ceph
[13:05] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[13:14] * fdmanana (~fdmanana@bl5-172-157.dsl.telepac.pt) Quit (Quit: Leaving)
[13:20] * garphy`aw is now known as garphy
[13:29] * Fruit (wsl@2001:981:a867:2:216:3eff:fe10:122b) has joined #ceph
[13:30] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[13:30] <Fruit> hi, ceph-osd -i 0 --mkfs --mkkey gives me the following error: provided osd id 0 != superblock's -1
[13:30] <Fruit> filesystem is empty
[13:31] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) has joined #ceph
[13:31] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit ()
[13:33] <jtaguinerd1> glambert, I have increased my object size to 32MB and I expect it will consume 32MB per object, but i don't expect a significant increase in the storage consumption. Please correct me if I am wrong: a 20 GB rbd using a 32MB object size will give you 640 objects, but if I list the current number of objects I won't see 640, as the image only takes up as much actual disk as the data contained within. As you fill up the image the number of objects will grow at his maximu
[13:33] <Fruit> the code is a bit weird; it creates a OSDSuperblock sb object that's otherwise uninitialized and compares the sb.whoami to my -i parameter
[13:38] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[13:39] * capri_on (~capri@212.218.127.222) Quit (Quit: Leaving)
[13:43] <Fruit> how is that supposed to work at all?
[13:44] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) has joined #ceph
[13:47] <Fruit> sigh nevermind, osd datastore was pointing at the wrong directory
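In other words, the mkfs step only succeeds when "osd data" for the given id resolves to the intended (empty) directory; a sketch with the path made explicit, assuming the default layout:

    # point --osd-data at the directory this OSD is actually meant to use
    ceph-osd -i 0 --mkfs --mkkey --osd-data /var/lib/ceph/osd/ceph-0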
[13:53] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[13:55] * markbby (~Adium@168.94.245.1) has joined #ceph
[13:57] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[13:57] * alexxy[home] (~alexxy@79.173.81.171) Quit (Remote host closed the connection)
[13:58] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[13:58] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[13:58] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[14:04] * finster (~finster@cmdline.guru) has joined #ceph
[14:06] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:07] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[14:08] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[14:08] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[14:09] * `jpg (~josephgla@ppp121-44-151-43.lns20.syd7.internode.on.net) Quit (Ping timeout: 480 seconds)
[14:13] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[14:15] * isodude (~isodude@kungsbacka.oderland.com) has joined #ceph
[14:22] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[14:22] * fghaas (~florian@213.17.226.11) has joined #ceph
[14:23] * bboris (~boris@router14.mail.bg) Quit (Remote host closed the connection)
[14:24] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[14:30] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[14:35] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[14:36] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[14:42] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[14:47] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[14:49] * JCL1 (~JCL@2601:9:5980:39b:b953:dd0:b24f:5ffb) has joined #ceph
[14:50] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:50] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Remote host closed the connection)
[14:51] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[14:54] * JCL (~JCL@2601:9:5980:39b:3db9:964d:f47b:25a1) Quit (Ping timeout: 480 seconds)
[14:55] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Remote host closed the connection)
[14:55] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[14:55] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[14:56] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[14:59] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Remote host closed the connection)
[14:59] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[15:04] * ninkotech (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[15:04] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:06] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[15:06] <Svedrin> I've just installed a new ceph cluster with 5 mons, 3 osds and 3 mds. ceph -s says HEALTH_WARN, but the services are connected fine
[15:06] <Svedrin> how do I find out why it's degraded? :/
[15:06] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Read error: Operation timed out)
[15:07] <Svedrin> http://paste.debian.net/89697/ - this is ceph -s
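Commands that usually explain a HEALTH_WARN like the one pasted above; these are standard diagnostics, not specific to Svedrin's cluster:

    ceph health detail           # spells out which checks are failing and why
    ceph osd tree                # shows any OSDs that are down or out
    ceph osd dump | grep flags   # cluster-wide flags such as full/nearfull/noout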
[15:07] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[15:07] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[15:08] * markbby (~Adium@168.94.245.1) has joined #ceph
[15:08] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[15:09] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[15:10] * powhihs (~hjg@0001c8bd.user.oftc.net) has joined #ceph
[15:10] <powhihs> hi
[15:10] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[15:11] * ninkotech (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Read error: Operation timed out)
[15:11] * Cube (~Cube@66-87-67-4.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[15:13] <powhihs> i set in ceph.conf inside [osd] osd_data = /var/lib/ceph/osd/osd.$id
[15:13] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[15:13] <powhihs> ceph-deploy prepare & activate doesn't actually use the naming i would like to use, it keeps using cluster's name which by default is 'ceph'
[15:17] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Ping timeout: 480 seconds)
[15:18] * BillK (~BillK-OFT@124-148-70-238.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:25] <winston-d> powhihs: you will have to explicitly use the '--cluster' argument for ceph-deploy if you have non default cluster name
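A sketch of what winston-d means, with host and device names as placeholders; ceph-deploy/ceph-disk lay OSDs out as /var/lib/ceph/osd/<cluster>-<id>, so a custom osd_data template in ceph.conf is not honoured:

    ceph-deploy --cluster ceph osd prepare node1:/dev/sdb
    ceph-deploy --cluster ceph osd activate node1:/dev/sdb1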
[15:27] * JCL1 (~JCL@2601:9:5980:39b:b953:dd0:b24f:5ffb) Quit (Quit: Leaving.)
[15:29] <powhihs> winston-d: hm, i see, so as mkcephfs is deprecated, it won't catch those settings from ceph.conf and apply them either
[15:30] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[15:30] * bboris (~boris@router14.mail.bg) has joined #ceph
[15:30] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[15:30] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[15:34] * sprachgenerator (~sprachgen@130.202.135.188) has joined #ceph
[15:38] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[15:39] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:43] * leseb (~leseb@185.21.172.77) has joined #ceph
[15:44] <bboris> hi, is anyone online?
[15:49] <powhihs> yes bboris
[15:50] * `jpg (~josephgla@ppp121-44-146-74.lns20.syd7.internode.on.net) has joined #ceph
[15:51] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[15:51] * jtaguinerd1 (~Adium@121.54.44.183) Quit (Read error: Connection reset by peer)
[15:51] * jharley (~jharley@76-10-151-146.dsl.teksavvy.com) has joined #ceph
[15:55] <powhihs> i got /var/lib/ceph/osd/ceph-0 if i rename ceph-1 to osd.10 i got librados: osd.10 authentication error (1) Operation not permitted
[15:55] <powhihs> what's the deal with the keyring, does it have to do something with the directory name?
[15:57] <jharley> powhihs: the directory name identifies the OSD to the monitors, and the keyring for that OSD is now not matching the one associated with OSD 10
[15:57] <jharley> ( if memory serves, anyway )
[15:57] <jharley> ceph has *all* the securities :)
[15:58] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[16:01] <powhihs> jharley: okay, how can i make it matching the right way then
[16:01] <winston-d> jharley: which means once an OSD is created, there is no way to rename it other than remove/re-add the OSD?
[16:01] <jharley> powhihs: what are you trying to do, exactly?
[16:02] <jharley> winston-d: I'm not sure why you want to rename it?
[16:02] <powhihs> jharley: i'm doing test cases, cause we will upgrade and the case is strange. let me explain
[16:02] * fdmanana (~fdmanana@bl5-172-157.dsl.telepac.pt) has joined #ceph
[16:02] <jharley> winston-d: the data in that directory is for the OSD in question.. renaming it means that the data in that OSD is different from what the monitors think it is
[16:03] * JoeGruher (~JoeGruher@134.134.139.70) has joined #ceph
[16:03] <jharley> powhihs: sure, I???ll try to understand and help
[16:03] <powhihs> sec pastie...
[16:05] * b0e (~aledermue@juniper1.netways.de) Quit (Remote host closed the connection)
[16:05] <winston-d> jharley: when using mkcephfs, i can name OSDs *beforehand* in ${cluster-name}.conf, but I didn't figure out how to do that with ceph-deploy. ceph-deploy seems to ignore that.
[16:05] * shang (~ShangWu@116.6.103.93) has joined #ceph
[16:06] * fghaas (~florian@213.17.226.11) Quit (Quit: Leaving.)
[16:07] <jharley> winston-d: I've never bothered to dictate the names of the OSDs, and don't usually bother using mkcephfs either.. I use "ceph-disk-prepare" and "ceph-disk-activate" a fair bit, though
[16:07] <powhihs> jharley: http://pastie.org/private/o0fxj6wkpsow6t788kua
[16:07] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[16:08] <jharley> powhihs: cool. why is it important to you to have an "OSD.10"?
[16:08] <jharley> powhihs: I don't usually bother to statically set my OSDs in ceph.conf and just let the monitors assign IDs to them
[16:08] <zidarsk8> hello, how can I get the name of my current monmap ?
[16:09] <powhihs> jharley: because the current configuration of the cluster we have running is configured lamely with osd.10, osd.11 osd.12 on first node, osd.20 osd.21 osd.22 on second node
[16:10] * thomnico (~thomnico@2a01:e35:8b41:120:51bd:4913:9399:150b) Quit (Read error: No route to host)
[16:10] <powhihs> and if we are going to do anything with it i need to simulate the environment
[16:10] <powhihs> i'll need to break it a bit, fix it and so on, i'm far from upgrading it smoothly yet
[16:13] * thomnico (~thomnico@2a01:e35:8b41:120:51bd:4913:9399:150b) has joined #ceph
[16:13] * humbolt (~elias@62-46-148-194.adsl.highway.telekom.at) has joined #ceph
[16:13] * Cube (~Cube@66-87-67-4.pools.spcsdns.net) has joined #ceph
[16:15] <jharley> powhihs: I've never tried to rename an OSD, that being said I think you'll need to clone the key for "osd.1" to "osd.10" ("ceph auth list" and "ceph auth import" are likely your friends here)
[16:16] <jharley> powhihs: at which point, the keyring in the OSD directory structure will match the value in the auth. db
[16:17] * humbolt1 (~elias@178-190-244-65.adsl.highway.telekom.at) Quit (Ping timeout: 480 seconds)
[16:17] <jharley> powhihs: there's also "ceph osd crush create-or-move" which might help you out.. but I'm not sure
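A hedged sketch of the key cloning jharley suggests; the entity and file names are illustrative, and the renamed OSD still has to exist in the OSD and CRUSH maps for the key to be of any use:

    ceph auth get osd.1 -o osd.1.export              # dump the existing key and caps
    sed 's/\[osd\.1\]/[osd.10]/' osd.1.export > osd.10.keyring
    ceph auth import -i osd.10.keyring               # register the same key under osd.10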
[16:18] <powhihs> jharley: thanks
[16:18] <jharley> powhihs: no worries; sorry I can't be more helpful
[16:19] <powhihs> it's still confusing how ceph-0 is mapped to osd.0
[16:19] <jharley> powhihs: oh?
[16:19] * ircolle (~Adium@2601:1:8380:2d9:78a6:22a9:7df3:e2ec) has joined #ceph
[16:19] <jharley> powhihs: "ceph" is your cluster name
[16:19] <jharley> powhihs: so, it's the OSD numbered 0 in cluster ceph
[16:20] <powhihs> hm, the directory name of osds, probably cannot be changed
[16:20] <powhihs> i might think of ceph-11
[16:20] <powhihs> but to name it osd-11 it wont catch it i guess
[16:20] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) has joined #ceph
[16:20] * zirpu (~zirpu@00013c46.user.oftc.net) has left #ceph
[16:21] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[16:21] <powhihs> e.g. [osd] osd_data = /var/lib/ceph/osd/osd.$id is not effective anymore
[16:23] <bboris> i'm still trying to simulate an osd nearfull and full problem by changing the settings in ceph.conf. "mon osd nearfull/full ratio" doesn't do what i expected, but "osd failsafe nearfull/full ratio" prevents writing past the percent specified
[16:23] <bboris> the problem is ceph -s or ceph health detail says it's ok
[16:24] <bboris> i expected health_warn, osds full or something like that
[16:24] <bboris> ceph -w says osd near full though
[16:27] * jtaguinerd (~Adium@121.54.44.183) has joined #ceph
[16:27] * sputnik13 (~sputnik13@64.134.221.62) has joined #ceph
[16:29] <powhihs> jharley: got it working.
[16:29] <jharley> powhihs: great!
[16:30] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[16:30] <powhihs> simply, after mounting /dev/sdc1 to /var/lib/ceph/osd/osd.10
[16:30] <powhihs> ceph-osd -i 0 --osd-data /var/lib/ceph/osd/osd.10 --osd-journal /var/lib/ceph/osd/osd.10/journal
[16:30] <powhihs> and got the osd up & running :)
[16:30] * JeffK (~Jeff@38.99.52.10) has joined #ceph
[16:30] <powhihs> i guess this is the manual way which leaves me with scripting my own init
[16:32] <jharley> powhihs: I think it's running.. but it's still very much osd.0 with data in the directory named "osd.10", though?
[16:32] <jharley> powhihs: that "-i" flag sets the ID
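Put differently, the id given with -i is what counts, so a directory called osd.10 only lines up if the daemon is also started as osd 10; a sketch of the manual start powhihs is doing, with the id and data path kept consistent (device and paths as in the discussion):

    mount /dev/sdc1 /var/lib/ceph/osd/osd.10
    ceph-osd -i 10 --osd-data /var/lib/ceph/osd/osd.10 --osd-journal /var/lib/ceph/osd/osd.10/journal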
[16:33] <powhihs> infact you are right, i'll inspect the running configuration further
[16:33] <powhihs> jharley: how can i stop the ceph-osd process ? kill -WHAT_SIGNAL?
[16:33] <powhihs> i mean safely
[16:34] <winston-d> jharley: so questions regarding to ceph-deploy, if OSDs are deployed by ceph-deploy, it doesn't write the configuration to ceph.conf. Do I have to dump the config out of running cluster somehow, or there's not need to put OSD config into ceph.conf?
[16:35] <jharley> powhihs: a regular kill, so SIGHUP
[16:36] <jharley> winston-d: my understanding is that there's no need to put the osd config in ceph.conf
[16:36] <jharley> winston-d: current releases of ceph (I've only been using it since cuttlefish) allow dynamic additions of OSDs to the cluster, which.. is far nicer :)
[16:39] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[16:40] <jharley> winston-d: the monitors are storing the osds in the monitor database
[16:41] <jharley> winston-d: you can take a look at it with "ceph osd dump", I believe
[16:41] <winston-d> jharley: ah, that explains a lot
[16:42] <powhihs> jharley: you were right, on the current running CEPH we got osd.110 which uses id 110 :)
[16:42] <jharley> powhihs: ceph is magical.. but not that magical ;)
[16:43] <powhihs> i see how messy the setup is now
[16:45] <jharley> powhihs: I enjoyed ceph a lot more when I stopped caring about OSD IDs
[16:46] <bboris> i still don't understand when will ceph tell me about the full osds?
[16:47] <powhihs> why the hell one will mess with the ID enumaration...
[16:47] <powhihs> enumeration*
[16:49] * Fruit (wsl@2001:981:a867:2:216:3eff:fe10:122b) has left #ceph
[16:49] <powhihs> jharley: i see, id is to be incremented, not to make osd name to be more sexy per node
[16:50] <jharley> powhihs: you got it :)
[16:50] <jharley> bboris: what do you want it to tell you?
[16:51] <bboris> HEALTH_WARN: osd.X full
[16:51] <bboris> anything but HEALTH_OK
[16:51] <jharley> bboris: oh, sorry. I missed your message up there
[16:51] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[16:52] <jharley> bboris: I don't know if an OSD getting full results in a health warning, to be honest
[16:52] <bboris> it is a health warning because i cant write one byte to the cluster anymore :)
[16:54] * JoeGruher (~JoeGruher@134.134.139.70) Quit (Remote host closed the connection)
[16:55] <jtaguinerd> bboris: it's a safety mechanism of ceph that once you reach full ratio you won't be able to write or read anymore to prevent data loss
[16:56] <bboris> no doubt about it
[16:56] <jtaguinerd> bboris are all your osd near full?
[16:57] <jtaguinerd> or others still have plenty of capacity?
[16:57] <bboris> but as an administrator i want the system to tell me that there is no more space left and its time to go add an osd
[16:57] <bboris> i'm testing currently
[16:57] <bboris> 2 osds running
[16:57] <bboris> both full
[16:57] <bboris> because replica num = 2
[16:59] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:00] <jtaguinerd> bboris: you should be getting a warning right?
[17:01] <bboris> should, yes
[17:01] <bboris> ceph health detail returns HEALTH_OK
[17:02] <jtaguinerd> bboris: that's weird. I've encountered having full osd a lot of times and ceph never fails to warn me
[17:02] <jtaguinerd> what's the value of your osd full ratio and osd nearfull ratio?
[17:04] <bboris> "mon_osd_full_ratio": "0.35",
[17:04] <bboris> "mon_osd_nearfull_ratio": "0.3",
[17:04] <bboris> "osd_failsafe_full_ratio": "0.37",
[17:04] <bboris> "osd_failsafe_nearfull_ratio": "0.32",
[17:05] <bboris> ceph -w outputs "osd.0 [WRN] OSD near full (36%)" from time to time
[17:05] <bboris> but ceph -s or ceph health says it's all fine
[17:05] <bboris> i'm starting to think ceph -s has a hard-coded value?
[17:06] <jtaguinerd> bboris: the default value of mon osd full ratio is .95 and mon osd nearfull ratio is .85
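For reference, the monitor-enforced ratios being compared here can also be adjusted on a running cluster; a sketch using the default values jtaguinerd mentions:

    ceph pg set_full_ratio 0.95
    ceph pg set_nearfull_ratio 0.85
    ceph health detail    # should then list any nearfull/full OSDs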
[17:07] <jtaguinerd> bboris: what's the result of ceph -s?
[17:08] <bboris> yes, i overwrote them
[17:08] <bboris> http://pastebin.com/VtyXuaT8
[17:09] <bboris> could it be that i have 4 more osds? but they are out and down, the cluster should not be looking at them as free space
[17:09] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[17:09] <bboris> actually, the two that are full must be reported anyway
[17:10] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[17:10] <bboris> also, ceph is v0.77, had other problems with emperor
[17:18] * jtaguinerd1 (~Adium@121.54.32.136) has joined #ceph
[17:18] * fghaas (~florian@62-111-217-18.ip.netia.com.pl) has joined #ceph
[17:24] * jtaguinerd (~Adium@121.54.44.183) Quit (Ping timeout: 480 seconds)
[17:25] <fedgoat> since we're talking about full ratios, i had a full cluster the other day..added more OSDs and rebalanced, but tried to remove the buckets with all the data..and now i have 2 stale buckets under the user that WILL NOT DIE or be removed even though the data was removed with radosgw-admin bucket rm --bucket=xyz. Does anyone know how to get rid of these buckets in the user's omap index...this looks to be an unresolved bug..but it's really annoying
[17:26] <fedgoat> http://tracker.ceph.com/issues/5197 http://tracker.ceph.com/issues/5219
[17:28] * oro (~oro@2001:620:20:16:c901:fc01:9cf9:26e) Quit (Ping timeout: 480 seconds)
[17:30] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:32] * shang (~ShangWu@116.6.103.93) Quit (Quit: Ex-Chat)
[17:32] * shang (~ShangWu@42-64-43-100.dynamic-ip.hinet.net) has joined #ceph
[17:33] * fghaas (~florian@62-111-217-18.ip.netia.com.pl) Quit (Quit: Leaving.)
[17:33] * thanhtran (~thanhtran@113.172.211.64) has joined #ceph
[17:33] * thomnico (~thomnico@2a01:e35:8b41:120:51bd:4913:9399:150b) Quit (Quit: Ex-Chat)
[17:34] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[17:35] <wrencsok> anyone have any tips on radosgw-admin and getting usage (bw / storage consumed) data out in intervals of less than 6 hours? the source code implies we can get hourly reports, but our testing is giving us data in 6-hour intervals using the --start-date and --end-date parameters. emperor 72.2
[17:36] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Ping timeout: 480 seconds)
[17:38] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:38] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[17:43] <thanhtran> anyone know how to upgrade ceph v0.72 Emperor to v0.8 or v.078 on Ubuntu 12.10, please show me steps to upgrade
[17:44] <wrencsok> i always seem to have to apt-get update/upgrade twice. 1st time picks up most things. a second time picks up the changes it missed, usually the actual ceph package. restart the mons first, then restart the other things.
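The sequence wrencsok describes, sketched for an Ubuntu node that uses the stock Upstart jobs; it assumes the apt sources already point at the target release:

    apt-get update && apt-get upgrade -y
    apt-get update && apt-get upgrade -y   # second pass often picks up the ceph package itself, as noted
    restart ceph-mon-all                   # monitors first
    restart ceph-osd-all                   # then OSDs and the remaining daemons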
[17:45] <glambert> thanhtran, afaik because 0.78 is a development release it's a build from source job rather than an apt-get job
[17:46] * Cube (~Cube@66-87-67-4.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[17:46] <glambert> thanhtran, of course you could always build your own .deb file from the source
[17:47] <wrencsok> and let me rephrase my question before heading to the email lists. does anyone use the radosgw as an object store gateway and track usage data per user and bucket? if so, are you able to get statistics out in increments of less than 6 hours?
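For anyone following along, the query in question looks roughly like this (user id and dates are placeholders); whether the returned entries can be finer-grained than the observed 6-hour buckets is exactly what is being asked:

    radosgw-admin usage show --uid=someuser --show-log-entries=true \
        --start-date="2014-03-24 00:00:00" --end-date="2014-03-25 00:00:00"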
[17:48] * ksingh (~Adium@2001:708:10:10:b17f:dfa9:c81a:1767) has left #ceph
[17:48] <thanhtran> glambert, I installed ceph v0.72 emperor with apt-get and the ceph-deploy tool, so if I upgrade to v0.8 by building from source code, will this cause any problems?
[17:51] * analbeard (~shw@141.0.32.124) Quit (Quit: Leaving.)
[17:53] * rmoe (~quassel@12.164.168.117) has joined #ceph
[17:54] <pmatulis> thanhtran: you risk creating havoc with apt, which is never a good thing
[17:55] * bitblt (~don@128-107-239-235.cisco.com) has joined #ceph
[17:55] <glambert> thanhtran, I wouldn't recommend it
[17:55] <glambert> plus, 0.8 isn't out yet afaik
[17:55] <glambert> 0.78 is
[17:55] <glambert> wrencsok, I'm using radosgw but not for getting statistics out of it
[17:55] <glambert> sorry
[17:55] <glambert> didn't realise you could tbh!
[17:56] <wrencsok> we're trying to build a billing and perf metric system on a per user/bucket basis. we'd like a better resolution.
[18:00] <l3iggs> hey everyone, i can't seem to mount my cephFS properly
[18:00] <l3iggs> i keep getting mount error 5 = Input/output error
[18:00] <l3iggs> anyone have any ideas about how to fix this?
[18:00] * jharley (~jharley@76-10-151-146.dsl.teksavvy.com) has left #ceph
[18:02] * bboris (~boris@router14.mail.bg) Quit (Quit: leaving)
[18:02] * l3iggs (~oftc-webi@c-24-130-225-173.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[18:03] <jcsp> l3iggs: your first step will be to go and check the status of the cluster; "ceph status" has an MDS line to tell you about the status of the mds cluster
[18:03] * ido_ (~ido@lolcocks.com) Quit (Quit: Reconnecting)
[18:03] <jcsp> then you can look in the log for the MDS to see if it is telling you anything
[18:03] * ido (~ido@00014f21.user.oftc.net) has joined #ceph
[18:04] <thanhtran> glambert and pmatulis, must i reinstall the whole of ceph from source code? currently, I have a ceph cluster in production with 24 osds at about 40TB of capacity and about 500GB of data, so I can't reinstall the whole ceph cluster. is there any other approach to upgrade?
[18:04] * alram (~alram@38.122.20.226) has joined #ceph
[18:05] <glambert> wrencsok, stick it in the ceph-users mailing list and I'll keep an eye on it, if someone doesn't come back to you this evening I'll have a look myself cos I'd like to know!
[18:05] <glambert> thanhtran, pretty much what I'm having to do, wait until 0.8 firefly is released and upgrade via apt-get
[18:06] <pmatulis> thanhtran: i concur. do not mess so much with a production system. why do you want to upgrade anyway?
[18:07] <glambert> personally, I want to upgrade to get the rbd-fuse fix but I'm not pissing with my system for that until 0.8 is out
[18:07] <thanhtran> glambert, do you know when v0.8 will be released?
[18:07] <glambert> no idea sorry
[18:07] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[18:07] <pmatulis> thanhtran: around mid may
[18:08] <glambert> pmatulis, :-|
[18:08] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:08] <pmatulis> thanhtran: but i doubt debs will be made for 12.10
[18:09] <thanhtran> pmatulis, my ceph cluster has problems with performance and radosgw, and I found some bugs fixed in v0.8 that would address these problems
[18:10] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) has joined #ceph
[18:11] <pmatulis> thanhtran: you're in a bit of a pickle
[18:11] <thanhtran> my issues include "connection refused", slow queries, heartbeat_check no reply, and osds with very high cpu load (about 300% - 500%) when I look in htop
[18:14] <thanhtran> pmatulis, you mean that v0.8 packages won't be created for ubuntu 12.10?
[18:15] <pmatulis> thanhtran: that question doesn't make much sense. it's a matter of packaging
[18:16] <pmatulis> thanhtran: and like i said, i doubt that inktank will make firefly packages for ubuntu 12.10, which is EOL next month
[18:17] <pmatulis> thanhtran: and 13.04 is also EOL already
[18:17] <pmatulis> thanhtran: so you really need to think about this stuff
[18:20] <thanhtran> pmatulis, thank you very much for the information; I'm quite worried about what you've told me
[18:22] <stepheno> Hey all, quick question about hardware requirements. Would it be a terrible idea to have the MDS service on the same machine as a bunch of OSDs?
[18:22] * shang (~ShangWu@42-64-43-100.dynamic-ip.hinet.net) Quit (Read error: Connection reset by peer)
[18:22] <stepheno> plenty of cpu and ram( 64GB ram, 2x 6-core xeon)
[18:22] * sputnik13 (~sputnik13@64.134.221.62) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[18:23] <thanhtran> the servers in my ceph cluster are running ubuntu 12.10, so if what you've told me is true, I will have a lot of work to do
[18:24] * sjustwork (~sam@2607:f298:a:607:91fb:e6c2:dd0e:da92) has joined #ceph
[18:24] <pmatulis> thanhtran: since ubuntu changed non-LTS releases from 18 months of support to 9 months it has become a given for serious installations to use LTS (12.04, 14.04, etc)
[18:26] <pmatulis> 13.04 was the first release to have a 9-month support window
[18:26] <pmatulis> thanhtran: https://wiki.ubuntu.com/Releases
[18:33] <thanhtran> pmatulis, thank you once again for your information
[18:39] <powhihs> thanks guys
[18:39] <powhihs> cya
[18:39] * powhihs (~hjg@0001c8bd.user.oftc.net) Quit (Quit: leaving)
[18:42] * jtaguinerd1 (~Adium@121.54.32.136) Quit (Quit: Leaving.)
[18:48] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) has joined #ceph
[18:50] * clewis (~clewis@12.251.157.126) has joined #ceph
[18:50] * clewis (~clewis@12.251.157.126) has left #ceph
[18:51] * Pedras1 (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[18:51] * garphy is now known as garphy`aw
[18:52] * madkiss (~madkiss@88.128.80.2) has joined #ceph
[18:59] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[19:06] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz...)
[19:06] <thanhtran> bye
[19:07] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[19:07] <joshd> winston-d: pong
[19:08] * thanhtran (~thanhtran@113.172.211.64) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[19:11] * JRGruher (~JoeGruher@134.134.139.72) has joined #ceph
[19:11] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) Quit (Remote host closed the connection)
[19:13] * Cube (~Cube@66-87-67-4.pools.spcsdns.net) has joined #ceph
[19:16] * sputnik13 (~sputnik13@64.134.221.62) has joined #ceph
[19:20] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:20] * sputnik13 (~sputnik13@64.134.221.62) Quit ()
[19:25] * Haksoldier (~islamatta@88.234.49.215) has joined #ceph
[19:25] * bboris (~boris@78.90.142.146) has joined #ceph
[19:25] * Haksoldier (~islamatta@88.234.49.215) has left #ceph
[19:26] <mjevans> The documentation in ganeti (2.10+) is... still a work in progress and grepping at the source code is giving me the usual object orientated nightmare of results that make no sense due to lacking context. Is anyone aware of a better source of documentation?
[19:26] <mjevans> (Creating a new ganeti storage cluster with ceph; I'd like to /try/ their method if it even exists yet)
[19:26] <mjevans> At the very least I am sure that userspace access mode is possible.
[19:27] <mjevans> (afk)
[19:28] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[19:30] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[19:33] * markbby (~Adium@168.94.245.1) has joined #ceph
[19:45] * l3iggs (~oftc-webi@c-24-130-225-173.hsd1.ca.comcast.net) has joined #ceph
[19:48] * diegows_ (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[19:51] * JRGruher (~JoeGruher@134.134.139.72) Quit (Remote host closed the connection)
[19:56] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[19:56] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:58] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[19:58] * l3iggs (~oftc-webi@c-24-130-225-173.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[19:59] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[19:59] * dmsimard1 (~Adium@ap03.wireless.co.mtl.iweb.com) has joined #ceph
[20:01] * dmsimard2 (~Adium@108.163.152.66) has joined #ceph
[20:03] * dmsimard1 (~Adium@ap03.wireless.co.mtl.iweb.com) Quit (Read error: Operation timed out)
[20:04] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[20:04] * `jpg (~josephgla@ppp121-44-146-74.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz...)
[20:05] * dmsimard (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[20:13] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[20:13] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[20:19] * bboris (~boris@78.90.142.146) Quit (Quit: leaving)
[20:22] * markbby (~Adium@168.94.245.1) has joined #ceph
[20:23] * madkiss (~madkiss@58805002.test.dnsbl.oftc.net) Quit (Quit: Leaving.)
[20:26] * JoeGruher (~JoeGruher@134.134.139.70) has joined #ceph
[20:28] * bitblt (~don@128-107-239-235.cisco.com) Quit (Quit: Leaving)
[20:29] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[20:29] * ChanServ sets mode +v andreask
[20:34] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[20:35] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[20:36] <alphe> how can I clear these 2 active+remapped+backfill_toofull pgs?
[20:36] * bitblt (~don@ip24-255-48-170.tc.ph.cox.net) has joined #ceph
[20:37] * dmsimard2 (~Adium@108.163.152.66) Quit (Read error: Connection reset by peer)
[20:38] * analbeard (~shw@141.0.32.124) has joined #ceph
[20:41] <Kioob> alphe: "toofull" : you have a FULL OSD
[20:41] <Kioob> (near full)
[20:42] <alphe> Kioob yes but it's only at around 85%
[20:42] <Kioob> be careful, on a full cluster you can end up with a long period of downtime
[20:42] <alphe> if I lower the weight by usage I still get the warning
[20:42] <Kioob> reduce the margin, or add OSDs to your cluster
[20:43] <alphe> Kioob I know all that but I can't, and anyway the rbd image will not exceed the max drive space
[20:43] <alphe> warning level is at 85 %
[20:44] <Kioob> then reduce the margin
[20:44] <alphe> how can I know which osd is near full, apart from doing a df -h? is there a ceph tool for that?
[20:44] <Kioob> "osd backfill full ratio"
[20:44] <Kioob> "ceph health detail"
[20:44] <alphe> Kioob, a margin of 5% gives 6 toofull ...
[20:44] <Kioob> "ceph health detail" will say you where is the problem
[20:45] <alphe> Kioob, for some weird reason, lowering the margin further makes things worse
[20:46] <alphe> pg 2.6e5 is stuck unclean for 87093.288616
[20:46] <alphe> so how do I unstuck it ?
[20:46] <alphe> how do I change the warning level? I tried in ceph.conf but that didn't affect anything
[20:47] <Kioob> fix the backfill_toofull
[20:47] <mjevans> Depends why it's stuck, maybe the OSDs in that pool are full.
[20:47] <alphe> around 85%
[20:47] <alphe> osd.12 is near full at 85%
[20:47] <alphe> osd.17 is near full at 85%
[20:47] <Kioob> backfill_toofull leaves PGs that need backfill stuck.
[20:47] <mjevans> I think Kioob is saying it isn't filling them because the reserve space minimum doesn't exist.
[20:48] <Kioob> yes mjevans
[20:48] <mjevans> Ceph tries, very hard, to help you plan for failures.
[20:48] <alphe> reserve space ?
[20:48] <Kioob> the "osd backfill full ratio" say that if the OSD as more that 85% used space, then no backfill will be allowed
[20:49] <Kioob> so your PGs are stuck, waiting for you to fix that
[20:49] <mjevans> Kioob: what would the 15% reserved space be used for; atomic re-writes of existing blocks?
[20:49] <Kioob> Three options : 1) free some space. 2) add OSD. 3) increase the "osd backfill full ratio" parameter.
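A sketch of how to see which OSDs and PGs are involved before picking one of those options; the reweight threshold is only an example value:

    ceph health detail                       # names the offenders, e.g. "osd.12 is near full at 85%"
    ceph pg dump_stuck unclean               # lists the PGs stuck waiting on backfill
    ceph osd reweight-by-utilization 110     # optionally shift data off the fullest OSDs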
[20:50] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[20:50] <gregsfortytwo> don't increase the ratio; the local filesystems will hate you
[20:50] <Kioob> mjevans: safety margin, so that you can take an OSD out and spread its data over the other OSDs, for example
[20:50] <Kioob> +1 gregsfortytwo
[20:50] <mjevans> Local FS fragmentation; yes that too
[20:50] <alphe> increase the "osd backfill full ratio" parameter: tried that
[20:51] <alphe> but it had no real effect
[20:51] <Kioob> it will unlock your backfills
[20:52] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[20:52] <alphe> I still have some osd with 76 %
[20:53] <mjevans> OSDs are supposed to describe distinct failure units
[20:53] <mjevans> IE: a single disk or the storage portion there of.
[20:54] <alphe> mjevans I know that
[20:54] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[20:54] * bitblt (~don@ip24-255-48-170.tc.ph.cox.net) Quit (Quit: Leaving)
[20:55] <mjevans> If you're super detailed about your setup, big iron might have osds grouped by IO bus/controller, then host, then power group, then rack/network isolation, etc.
[20:55] <alphe> I reweighted by utilization to 103 ...
[20:55] <alphe> that will temporarily solve the problem
[20:55] * bitblt (~don@128-107-239-233.cisco.com) has joined #ceph
[20:56] <alphe> mjevans I don't have 2000 hosts ...
[20:56] <alphe> nor 2000 osds ...
[20:56] <alphe> it is a little happy cluster
[20:57] <alphe> with 3 % of span I get only 2% of the objects remapped
[20:58] <mjevans> Yeah... my setup is for a /small/ business, so it's at the /absolute/ minimum size. The only reason it exists at all is for hardware maintenance (so I can abstract away exactly which of the 2-3 servers it's running on)
[20:59] * dxd828 (~dxd828@dsl-dynamic-77-44-45-38.interdsl.co.uk) has joined #ceph
[21:00] <alphe> what I don't understand is that there are 4 osds with less than 80% used (between 72% and 80%) while the rest of the osds are between 80% and 86%
[21:03] <alphe> after adding osd backfill full ratio to my ceph.conf file, will I need to restart the cluster?
[21:14] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[21:19] * garphy`aw is now known as garphy
[21:20] <mjevans> Of course... guy that left... you have to restart the cluster nodes or at least have them re-read the config file to use those settings.
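As a hedged aside, most OSD options of this kind can also be injected at runtime instead of restarting, assuming the option is runtime-changeable in this release; 0.90 is only an example value, and on older releases you may need to target osd.N individually rather than using the wildcard:

    ceph tell osd.\* injectargs '--osd_backfill_full_ratio 0.90'
    # and persist the same value for the next restart in ceph.conf under [osd]:
    #   osd backfill full ratio = 0.90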
[21:21] <mjevans> what is the world coming to; people trying to do horridly complicated data-mangling things who've never heard of killall -sHUP
[21:23] * dxd828 (~dxd828@dsl-dynamic-77-44-45-38.interdsl.co.uk) Quit (Quit: Computer has gone to sleep.)
[21:23] <mjevans> Though it seems for Ceph this is still a feature request: http://tracker.ceph.com/issues/2459
[21:24] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[21:25] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[21:25] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[21:26] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[21:27] * dxd828 (~dxd828@dsl-dynamic-77-44-45-38.interdsl.co.uk) has joined #ceph
[21:36] * analbeard (~shw@141.0.32.124) Quit (Quit: Leaving.)
[21:38] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:38] * fedgoatbah (~fedgoat@cpe-24-28-22-21.austin.res.rr.com) has joined #ceph
[21:44] * fedgoat (~fedgoat@cpe-24-28-22-21.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[22:00] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[22:03] * markbby (~Adium@168.94.245.1) has joined #ceph
[22:04] <mjevans> ceph-deploy sure is different from before. How do I make it work with partitions where I've already prepared filesystems, and /etc/fstab entries set up to 'break right away if a clueless admin after me plugs the disks into the wrong host'?
[22:05] * markbby (~Adium@168.94.245.1) Quit ()
[22:14] * BillK (~BillK-OFT@124-148-70-238.dyn.iinet.net.au) has joined #ceph
[22:15] <mjevans> Ok... that deploy page needs a little bit of polish or redirection to other pages
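A rough sketch of the ceph-deploy forms that accept an existing partition or an already-mounted directory; host names, devices, and paths below are placeholders:

    ceph-deploy osd prepare node1:/dev/sdb1:/dev/sdc1       # HOST:DATA[:JOURNAL]
    ceph-deploy osd prepare node1:/var/lib/ceph/osd/mydir   # HOST:PATH to a pre-mounted directory
    ceph-deploy osd activate node1:/dev/sdb1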
[22:23] <ponyofdeath> hi, using libvirt 1.2 where do i specify the rbd cache parameters?
[22:23] <ponyofdeath> http://paste.ubuntu.com/7153387 which used to work, is now erroring out
[22:24] <mjevans> I gave up on libvirt a few versions ago. Ganeti is much nicer. Sorry I really don't know about it.
[22:25] <lurbs> ponyofdeath: http://ceph.com/docs/master/rbd/qemu-rbd/#running-qemu-with-rbd
[22:25] <lurbs> "Since QEMU 1.2, QEMU???s cache options control librbd caching"
[22:26] <joshd> ponyofdeath: set cache="writeback" on the driver element of the disk
[22:26] <lurbs> I assume that if you're using libvirt >= 1.2 then QEMU will also be well above 1.2.
[22:26] <ponyofdeath> yes
[22:27] <ponyofdeath> so set cache="writeback"
[22:27] <ponyofdeath> <driver name='qemu' type='raw' cache='writeback'/>
[22:28] <ponyofdeath> and remove it from the source protocol='rbd' name="libvirt/path:cache..."
[22:28] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:29] <joshd> yup, just have name="pool/image" there. you can put extra settings in ceph.conf (like writethrough_until_flush)
[22:29] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:29] <ponyofdeath> does qemu read the settings from ceph.conf
[22:29] <ponyofdeath> thought all the settings were in libvirt
[22:30] <joshd> qemu reads /etc/ceph/ceph.conf
[22:31] <mjevans> Thus also having qemu give up on libvirt... ;p
[22:31] <joshd> the libvirt settings override anything in the conf file if they conflict
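Putting joshd's advice together, a sketch of the libvirt disk element with caching set on the driver rather than in the source name; the pool/image name, monitor host, and secret UUID are placeholders:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='libvirt/myimage'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='<secret-uuid>'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>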
[22:32] <ponyofdeath> so what cache params are recommended in /etc/ceph/ceph.conf
[22:32] <ponyofdeath> writethrough_until_flush
[22:33] <joshd> if you've got old guests, yes. the defaults should be ok, but you could change the cache size or other settings
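A sketch of the corresponding [client] section in /etc/ceph/ceph.conf; the sizes shown are believed to be the usual defaults and only need changing if the defaults don't suit:

    [client]
        rbd cache = true
        rbd cache writethrough until flush = true   # safe starting point for old guests that never flush
        rbd cache size = 33554432                   # 32 MB
        rbd cache max dirty = 25165824              # 24 MB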
[22:39] <ponyofdeath> 12.04 with kernel 3.11
[22:39] * mschiff (~mschiff@mx10.schiffbauer.net) Quit (Remote host closed the connection)
[22:39] <ponyofdeath> so I'll leave writethrough out
[22:39] <ponyofdeath> and use defaults
[22:42] * humbolt (~elias@62-46-148-194.adsl.highway.telekom.at) Quit (Quit: Leaving.)
[22:43] * dxd828 (~dxd828@dsl-dynamic-77-44-45-38.interdsl.co.uk) Quit (Quit: Textual IRC Client: www.textualapp.com)
[22:45] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[22:47] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[22:53] <mjevans> Which filesystem are you using ponyofdeath ? BTRFS, if you're using it, really recommends 3.12+
[22:54] <ponyofdeath> mjevans: yeah will be upgrading to trusty soon :) which has 13
[22:54] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) Quit (Read error: Connection reset by peer)
[22:54] <ponyofdeath> 3.13
[22:55] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[22:56] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[22:56] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:56] <ponyofdeath> would I convert rbd images to format 3
[22:56] <ponyofdeath> or format 2
[22:56] <ponyofdeath> what is required
[22:56] <ponyofdeath> to use each format
[22:59] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[22:59] * allsystemsarego (~allsystem@50c25c2.test.dnsbl.oftc.net) Quit (Quit: Leaving)
[22:59] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[22:59] <lurbs> ponyofdeath: The kernel driver only started talking format 2 relatively recently. 3.10, I think.
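For what it's worth, a sketch of creating a format 2 image, plus the export/import route for existing images since there is no in-place conversion; pool and image names are placeholders, and very old rbd CLIs spell the option --format rather than --image-format:

    rbd create mypool/newimage --size 10240 --image-format 2
    # "convert" an existing image by streaming it into a new format 2 image
    rbd export mypool/oldimage - | rbd import --image-format 2 - mypool/newimage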
[23:01] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[23:02] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[23:02] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[23:05] * ircolle (~Adium@2601:1:8380:2d9:78a6:22a9:7df3:e2ec) Quit (Quit: Leaving.)
[23:07] * BillK (~BillK-OFT@124-148-70-238.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[23:14] * sprachgenerator (~sprachgen@130.202.135.188) Quit (Quit: sprachgenerator)
[23:15] <mjevans> Offhand, does anyone have a command for a /manual/ full reset of ceph on a given node? (or a ceph invokable that reliably does that)?
[23:18] <mjevans> the ceph-deploy purgedata tool gives: [ceph_deploy][ERROR ] RuntimeError: refusing to purge data while ceph is still installed
[23:18] <mjevans> I just want to nuke it all and start from a hand crafted ceph.conf file.
[23:19] <lurbs> You should be able to use ceph-deploy to uninstall the Ceph packages first.
[23:20] <mjevans> No, I want the ceph packages
[23:20] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:20] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[23:20] <mjevans> I want to absolutely nuke the configuration, OSDs, etc to get back to a clean slate
[23:21] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[23:21] <mjevans> I don't want ceph-deploy to /touch/ packages, that is something I do my self because it isn't as simple as install the named package (various pinning issues)
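A hedged and very destructive sketch of what a by-hand teardown might look like while leaving the packages alone, i.e. roughly what purgedata does minus the package removal; adjust the paths to your layout, and /dev/sdX is a placeholder:

    sudo service ceph stop                 # or the Upstart jobs: sudo stop ceph-all
    sudo rm -rf /var/lib/ceph/mon/* /var/lib/ceph/osd/* /var/lib/ceph/bootstrap-*/*
    sudo rm -f /etc/ceph/*.keyring         # keep your hand-crafted /etc/ceph/ceph.conf
    sudo ceph-disk zap /dev/sdX            # wipe each disk that held OSD data before re-preparing it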
[23:31] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[23:34] * mattt (~textual@S010690724001c795.vc.shawcable.net) has joined #ceph
[23:36] <JoeGruher> is there a delete version of 'osd crush rule create-erasure'?
[23:36] * BillK (~BillK-OFT@58-7-115-16.dyn.iinet.net.au) has joined #ceph
[23:36] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[23:38] <dmick> probably just crush rule rm
[23:38] * The_Bishop (~bishop@2001:470:50b6:0:14a8:d133:21b2:88cd) has joined #ceph
[23:39] * mattt (~textual@S010690724001c795.vc.shawcable.net) Quit (Quit: Computer has gone to sleep.)
[23:42] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[23:43] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[23:43] <JoeGruher> aha
[23:48] * joao|lap (~JL@a95-92-33-54.cpe.netcabo.pt) has joined #ceph
[23:48] * ChanServ sets mode +o joao|lap
[23:51] <mjevans> How can I just make ceph-deploy use a provided ceph.conf? It keeps failing because I've already distributed one that describes the configuration I want to end up with... and the automatically generated ones do not well reflect my needs.
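A sketch of the usual way to make ceph-deploy accept a pre-written ceph.conf: it reads the copy in its working directory and refuses to clobber differing copies on the nodes unless told to. Hostnames are placeholders:

    ceph-deploy --overwrite-conf config push node1 node2 node3
    ceph-deploy --overwrite-conf mon create node1    # the flag works on most subcommands too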
[23:51] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[23:53] <ponyofdeath> lurbs: thanks! what are the features of v2? is there a page somewhere :)
[23:54] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[23:54] <lurbs> Layering, basically. Snapshots and clones.
[23:54] <ponyofdeath> ahh nice, no massive performance improvements
[23:54] <dmick> clones are pretty massive provisioning performance improvements
[23:55] * shang (~ShangWu@42-64-43-100.dynamic-ip.hinet.net) has joined #ceph
[23:55] <dmick> particularly for short-running instances
[23:55] <lurbs> dmick: Assuming your management layer takes advantage of them.
[23:55] * lurbs glares at OpenStack.
[23:55] <dmick> ;D
[23:55] <ponyofdeath> :) that would require golden images
[23:55] <ponyofdeath> right
[23:55] <dmick> I'm sure they're accepting patch...oh, wait
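The layering workflow being referred to, as a sketch; pool, image, and snapshot names are placeholders:

    rbd snap create mypool/golden@base
    rbd snap protect mypool/golden@base                   # required before cloning
    rbd clone mypool/golden@base mypool/instance-0001     # near-instant copy-on-write clone
    rbd flatten mypool/instance-0001                      # optional: detach from the parent later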
[23:56] * markbby (~Adium@168.94.245.4) has joined #ceph
[23:57] * The_Bishop (~bishop@2001:470:50b6:0:14a8:d133:21b2:88cd) Quit (Ping timeout: 480 seconds)
[23:58] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz...)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.