#ceph IRC Log

IRC Log for 2014-09-01

Timestamps are in GMT/BST.

[0:01] * masta (~masta@190.7.213.210) has joined #ceph
[0:07] * [fred] (fred@earthli.ng) Quit (Ping timeout: 480 seconds)
[0:12] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[0:12] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Read error: No route to host)
[0:13] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[0:17] * [fred] (fred@earthli.ng) has joined #ceph
[0:29] * bitserker (~toni@169.38.79.188.dynamic.jazztel.es) Quit (Quit: Leaving.)
[0:43] * sleinen1 (~Adium@2001:620:0:68::100) Quit (Read error: Connection reset by peer)
[0:55] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[1:07] * LeaChim (~LeaChim@host86-174-29-56.range86-174.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:09] * JC (~JC@AVelizy-151-1-38-63.w82-120.abo.wanadoo.fr) has joined #ceph
[1:15] * Nats (~natscogs@114.31.195.238) has joined #ceph
[1:19] * oms101 (~oms101@p20030057EA1EE400C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:25] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[1:27] * oms101 (~oms101@p20030057EA597000C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:51] * mtl1 (~Adium@c-98-245-49-17.hsd1.co.comcast.net) has joined #ceph
[1:53] * JC (~JC@AVelizy-151-1-38-63.w82-120.abo.wanadoo.fr) Quit (Quit: Leaving.)
[1:59] * lupu (~lupu@86.107.101.214) has joined #ceph
[1:59] * mtl2 (~Adium@c-98-245-49-17.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[2:04] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[2:16] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[2:19] * diegows (~diegows@190.190.5.238) has joined #ceph
[2:37] * Pedras (~Adium@50.185.218.255) has joined #ceph
[2:38] <flaf> Hello,
[2:39] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[2:40] <flaf> On Ubuntu 14.04, I want to test the OCF resource agent for rbd (/usr/lib/ocf/resource.d/ceph/rbd).
[2:40] <flaf> After "apt-get install ceph-resource-agents", I try :
[2:41] <flaf> --> /usr/lib/ocf/resource.d/ceph/rbd meta-data
[2:41] <flaf> to see the parameters of this OCF resource agent.
[2:42] <flaf> But I get this error message:
[2:42] <flaf> /usr/lib/ocf/resource.d/ceph/rbd: 12: .: Can't open /lib/heartbeat/ocf-shellfuncs
[2:43] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[2:43] <flaf> Did I forget to install a package?
[2:44] <flaf> With apt-file, I don't see a package which contains /lib/heartbeat/ocf-shellfuncs.
[2:46] <flaf> But I do have this file "/usr/lib/ocf/lib/heartbeat/ocf-shellfuncs" in the "resource-agents" package.
[2:48] <flaf> Ooops!
[2:48] <flaf> Sorry for the noise.
[2:49] <flaf> I must define an environment variable, like this:
[2:49] <flaf> OCF_FUNCTIONS_DIR=/usr/lib/ocf/lib/heartbeat /usr/lib/ocf/resource.d/ceph/rbd meta-data
[2:49] <flaf> Sorry and have a good weekend. ;)
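
A minimal sketch of the workaround flaf arrived at above, assuming the Ubuntu 14.04 paths from this conversation:

    # The agent sources ocf-shellfuncs from $OCF_FUNCTIONS_DIR (which
    # defaults to /lib/heartbeat); point it at the copy shipped by the
    # resource-agents package.
    export OCF_FUNCTIONS_DIR=/usr/lib/ocf/lib/heartbeat
    /usr/lib/ocf/resource.d/ceph/rbd meta-data
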
[2:57] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (Remote host closed the connection)
[3:00] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[3:05] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (Remote host closed the connection)
[3:11] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[3:17] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (Remote host closed the connection)
[3:23] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:26] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[3:27] * masta (~masta@190.7.213.210) Quit (Ping timeout: 480 seconds)
[3:33] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[3:42] * hasues (~hazuez@12.216.44.38) has joined #ceph
[3:51] * diegows (~diegows@190.190.5.238) has joined #ceph
[3:55] * mnaser (~textual@MTRLPQ5401W-LP130-02-1178024983.dsl.bell.ca) Quit (Read error: Connection reset by peer)
[3:58] * zhaochao (~zhaochao@111.204.252.1) has joined #ceph
[4:00] * yguang11_ (~yguang11@2406:2000:ef96:e:95fa:e1c8:1aa5:ba42) has joined #ceph
[4:00] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[4:01] * yguang11_ (~yguang11@2406:2000:ef96:e:95fa:e1c8:1aa5:ba42) Quit (Remote host closed the connection)
[4:02] * yguang11 (~yguang11@2406:2000:ef96:e:95fa:e1c8:1aa5:ba42) has joined #ceph
[4:06] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[4:12] * matt_ (~matt@c-68-57-115-186.hsd1.va.comcast.net) has joined #ceph
[4:14] * matt_ (~matt@c-68-57-115-186.hsd1.va.comcast.net) Quit ()
[4:14] * apolloJess (~Thunderbi@202.60.8.252) has joined #ceph
[4:19] * lucas1 (~Thunderbi@218.76.25.66) Quit (Ping timeout: 480 seconds)
[4:36] * nhm (~nhm@65-128-184-37.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[4:36] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:a472:9242:c53b:b3fb) has joined #ceph
[4:37] * yguang11 (~yguang11@2406:2000:ef96:e:95fa:e1c8:1aa5:ba42) Quit (Ping timeout: 480 seconds)
[4:37] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[4:38] * KevinPerks (~Adium@2606:a000:80a1:1b00:915c:8337:52fa:6c77) Quit (Ping timeout: 480 seconds)
[4:44] * nhm (~nhm@174-20-45-249.mpls.qwest.net) has joined #ceph
[4:44] * ChanServ sets mode +o nhm
[4:59] * vbellur (~vijay@122.178.195.138) has joined #ceph
[5:04] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:a472:9242:c53b:b3fb) has left #ceph
[5:05] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[5:10] * dlan (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[5:11] * dlan (~dennis@116.228.88.131) has joined #ceph
[5:13] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:29] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:29] * Vacuum_ (~vovo@88.130.214.161) has joined #ceph
[5:33] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[5:36] * michalefty (~micha@ip25045ed2.dynamic.kabel-deutschland.de) has joined #ceph
[5:36] * Vacuum (~vovo@i59F791EA.versanet.de) Quit (Ping timeout: 480 seconds)
[5:39] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[5:39] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[5:39] * yguang11 (~yguang11@2406:2000:ef96:e:e49a:2f90:ee48:5a63) has joined #ceph
[5:52] * bitserker (~toni@169.38.79.188.dynamic.jazztel.es) has joined #ceph
[5:58] * jtaguinerd (~jtaguiner@112.205.9.45) has joined #ceph
[5:59] <jtaguinerd> hi guys, has anybody here tried using bcache for OSD caching?
[6:04] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[6:08] * michalefty (~micha@ip25045ed2.dynamic.kabel-deutschland.de) Quit (Quit: Leaving.)
[6:10] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[6:14] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:15] * yguang11 (~yguang11@2406:2000:ef96:e:e49a:2f90:ee48:5a63) Quit (Ping timeout: 480 seconds)
[6:17] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[6:26] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:30] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[6:35] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:35] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) has joined #ceph
[6:36] * Sysadmin88 (~IceChat77@176.250.164.108) Quit (Quit: Some folks are wise, and some otherwise.)
[6:36] * Sysadmin88 (~IceChat77@176.250.164.108) has joined #ceph
[6:40] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[6:42] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:42] <Psi-Jack> heh, why would you be using pacemaker with Ceph?
[6:42] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) Quit (Read error: Operation timed out)
[6:42] * bitserker (~toni@169.38.79.188.dynamic.jazztel.es) Quit (Quit: Leaving.)
[6:43] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[6:49] * vbellur (~vijay@122.178.195.138) Quit (Ping timeout: 480 seconds)
[6:56] * branto (~borix@178.253.163.238) has joined #ceph
[6:57] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[6:58] * sleinen1 (~Adium@2001:620:0:68::101) has joined #ceph
[7:02] * jtaguinerd1 (~jtaguiner@203.215.116.66) has joined #ceph
[7:02] <flaf> Psi-Jack: is your question for me?
[7:03] <flaf> I guess so. ;)
[7:03] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[7:04] <flaf> Psi-Jack: because with Ceph I can have a rbd (RADOS block device) and I can use this rbd on a node.
[7:04] <flaf> But the node which uses the rbd is a SPOF.
[7:05] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:06] <flaf> If my node is down, I want to find a way to fail over to another node which will use the rbd too.
[7:07] * jtaguinerd (~jtaguiner@112.205.9.45) Quit (Read error: Operation timed out)
[7:08] <flaf> So, to do this on Linux, I think I can use pacemaker.
[7:08] <flaf> But maybe I'm wrong.
[7:08] <flaf> Maybe there are other ways (and better ways)?
[7:09] <jtaguinerd1> Hi flaf, you might want to try drbd with ocfs2 on RBD
[7:10] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:10] <jtaguinerd1> but it also depends on what application will be consuming the rbd
[7:11] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:12] <flaf> I don't understand. Are you saying RBD (RADOS block device) -> DRBD -> OCFS2 (the filesystem)?
[7:12] <Psi-Jack> flaf: That's a BAD idea.
[7:13] <flaf> Psi-Jack: ah ok. Why?
[7:13] <Psi-Jack> Ceph already has its own, better implementation of clustering built in. You should be using that as it was designed.
[7:14] <Psi-Jack> Pacemaker is something you use when you want to have movable highly-available resources. Mostly designed for things that don't already have such capability built in, or need pacemaker to implement it. Ceph already has this by having its own, better, faster, and more resilient clustering.
[7:14] <runfromnowhere> Whoa whoa whoa
[7:14] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[7:14] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:14] <runfromnowhere> Pacemaker isn't entirely inappropriate here
[7:14] <Psi-Jack> In the case of using an RBD directly, why are you doing that over using CephFS?
[7:15] * yguang11 (~yguang11@2406:2000:ef96:e:e49a:2f90:ee48:5a63) has joined #ceph
[7:15] <runfromnowhere> CephFS has issues
[7:15] <Psi-Jack> Depends on use, but it can very easily be. :)
[7:15] <runfromnowhere> I have several filesystem use cases that CephFS, as is, cannot handle
[7:15] <runfromnowhere> In my production environment
[7:15] <Psi-Jack> I ran Ceph for 2 years, with 0 CephFS, RBD, or any other issues.
[7:15] <Psi-Jack> Like?
[7:15] <runfromnowhere> Ever put 7 million files in one directory?
[7:15] <flaf> Psi-Jack: but with a Ceph cluster, I can create an rbd, but the node which uses the rbd is unique, isn't it?
[7:16] <runfromnowhere> CephFS doesn't like it that much
[7:16] <runfromnowhere> I, personally, don't like it that much either LOL
[7:16] <runfromnowhere> But it's a requirement of my current environment and I'm powerless to change it
[7:16] <Psi-Jack> runfromnowhere: I would never put 7 million files in /1/ directory.
[7:16] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[7:16] <runfromnowhere> LOL I wholeheartedly agree that it is a terrible idea
[7:16] <Psi-Jack> I would, in fact, engineer it correctly, fixing THAT kind of problem. :)
[7:16] <runfromnowhere> But it's not my idea
[7:16] <runfromnowhere> And I don't have the ability to change it
[7:16] <runfromnowhere> I just have to make it go, and CephFS can't
[7:17] <Psi-Jack> And that's why I make the big bucks. Because I make those decisions. :)
[7:17] <runfromnowhere> LOL I'm not sure if that's appropriate to say in this case
[7:17] <Psi-Jack> It's always appropriate to say. :)
[7:17] <Psi-Jack> 7 million files in 1 directory is horrible, in /any/ situation. Whoever thought of that should be shot. ;)
[7:18] <runfromnowhere> Agreed
[7:18] <Psi-Jack> But, don't mind me on that. I'm directly blunt. Within reason. ;)
[7:18] <runfromnowhere> Oh, no, you're absolutely correct
[7:18] <Psi-Jack> flaf: What is your end goal?
[7:18] <runfromnowhere> But there are overriding business reasons why it cannot be changed within the timeframe available
[7:19] <Psi-Jack> I know ceph, I know pacemaker, and I know HA extensively to be able to come up with many better options for what you may be trying to achieve.
[7:19] <runfromnowhere> That being the case, I can either attempt to make a stink about it and kill the business that pays my checks....or I can find a way to make it go.
[7:19] <runfromnowhere> But we're a bit far from the original conversation
[7:19] <Psi-Jack> runfromnowhere: For such a case.. I simply wouldn't support using ceph, until they fix the major problem.
[7:20] <Psi-Jack> Simple as that. :)
[7:20] <flaf> Psi-Jack: I want to have a cluster of "Web" nodes. 1 master which uses the rbd (files used by the web server). If the master is down, there is a failover and another node uses the rbd.
[7:20] <runfromnowhere> Heh also not a business-viable solution :P
[7:20] <Psi-Jack> Webserver? Use cephfs.
[7:21] <Psi-Jack> Have multiple webservers share the same cephfs mount and do load-balancing, better solution.
[7:21] <flaf> Psi-Jack: But can a cephfs be mounted on several nodes?
[7:21] <Psi-Jack> flaf: yes
[7:21] <runfromnowhere> flaf: Yeah, CephFS is designed to be a shared filesystem that can be mounted on many nodes at once
[7:22] <Psi-Jack> You can even mount a subtree of cephfs, say server1,server2,server3:/webserver
[7:22] <Psi-Jack> Instead of just the cephfs root. :)
[7:22] <runfromnowhere> With the kernel, not with FUSE
[7:22] <runfromnowhere> At least last I knew
[7:22] <Psi-Jack> Don't use fuse, use kernel, simple. :)
[7:23] <flaf> In fact, I had seen in the docs that CephFS wasn't ready for production, so I had ruled out that option.
[7:23] <Psi-Jack> flaf: It's generally ready and quite reliable, actually.
[7:23] <runfromnowhere> In my experience CephFS has limits but fails gracefully
[7:23] <runfromnowhere> I've never lost any data with CephFS
[7:23] <runfromnowhere> But I have had MDSs become unresponsive
[7:23] <Psi-Jack> Nor I.
[7:24] <runfromnowhere> Which basically locked up the whole thing until they were kicked
[7:24] <Psi-Jack> And I had my 3-node ceph cluster running 3 ceph-mds servers, before it was "supported"
[7:24] <runfromnowhere> However, again, this was under high-stress usage
[7:24] <runfromnowhere> I'd say experiment with it and see if it works for you - it quite likely will
[7:25] <flaf> runfromnowhere Psi-Jack : ok thx for your explanations. I will see the CephFS ;)
[7:26] <Psi-Jack> I've run many webservers on CephFS, production, and personal.
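
A sketch of the kernel-client mount described in this exchange, with hypothetical monitor hostnames and a hypothetical /webserver subtree; the name and secretfile options assume cephx authentication is enabled:

    # Mount only the /webserver subtree of CephFS; running the same
    # command on several web nodes gives them all a shared tree.
    mount -t ceph server1,server2,server3:/webserver /var/www \
        -o name=web,secretfile=/etc/ceph/web.secret
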
[7:26] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:28] <flaf> Oh, just another question: is there a relation (same technology etc.) between Ceph/Rbd and drbd?
[7:29] <Psi-Jack> No.
[7:30] <Psi-Jack> The underlying technology of Ceph is RADOS.
[7:30] <runfromnowhere> Heh just a vaguely similar-sounding acronum
[7:30] <Psi-Jack> And DRBD is just DRBD, which is horrible, IMHO.
[7:30] <runfromnowhere> erm acronym
[7:31] <flaf> Ok, I thought there were some relations. :) thx.
[7:31] <Psi-Jack> I wouldn't recommend DRBD to anyone, now that ceph is here, personally. :)
[7:32] <Psi-Jack> Unless it's just for distributed replicated backups, at best, that is. Nothing that needs high availability.
[7:33] <Psi-Jack> Even Ceph does that better, though, complete with snapshots: you can make a snapshot in cephfs just by making a directory in a hidden virtual directory, and bingo, snapshot made.
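
The hidden virtual directory referred to here is CephFS's .snap directory; a minimal sketch, assuming a CephFS mount at /mnt/cephfs with snapshots enabled:

    # Creating a directory under .snap snapshots that subtree;
    # removing it deletes the snapshot again.
    mkdir /mnt/cephfs/data/.snap/before-upgrade
    ls /mnt/cephfs/data/.snap
    rmdir /mnt/cephfs/data/.snap/before-upgrade
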
[7:35] * michalefty (~micha@p20030071CF45BC003D3DE1871930706A.dip0.t-ipconnect.de) has joined #ceph
[7:38] * michalefty1 (~micha@p20030071CE0266553D3DE1871930706A.dip0.t-ipconnect.de) has joined #ceph
[7:41] * rdas (~rdas@121.244.87.115) has joined #ceph
[7:41] * zack_dolby (~textual@e0109-114-22-0-206.uqwimax.jp) has joined #ceph
[7:43] * michalefty (~micha@p20030071CF45BC003D3DE1871930706A.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[7:45] * sleinen1 (~Adium@2001:620:0:68::101) Quit (Ping timeout: 480 seconds)
[7:45] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[7:49] * zack_dolby (~textual@e0109-114-22-0-206.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[7:50] <amatus> runfromnowhere: ever thought of coding up an LD_PRELOAD shim that puts the files in separate directories?
[7:51] <amatus> so i'm having a strange issue with cephfs, i've copied a few hundred 1-7GB files into a directory, and the rsync just stopped, i killed the rsync and tried to ls the directory and that hangs too
[7:51] <amatus> i restarted the mds, didn't help
[7:51] <runfromnowhere> amatus: Hmm how would that end up working?
[7:51] <runfromnowhere> This isn't a technique I'm familiar with
[7:52] <amatus> you can intercept libc calls with an LD_PRELOAD shim, so intercept open/opendir/etc to modify the given paths appropriately
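
A sketch of how such a shim is built and used, assuming a hypothetical shim.c that wraps open()/opendir() via dlsym(RTLD_NEXT, ...) to rewrite paths before libc sees them:

    # Build the interposer as a shared object, then preload it so its
    # wrappers run in front of the real libc calls.
    gcc -shared -fPIC -o pathshim.so shim.c -ldl
    LD_PRELOAD=$PWD/pathshim.so some-app-writing-millions-of-files
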
[7:54] <amatus> anyone have any ideas for debugging a cephfs directory that just hangs when you try to access it?
[7:54] <runfromnowhere> Jeebus
[7:54] <runfromnowhere> I wonder if that would work
[7:54] <runfromnowhere> It seems like a good chunk of effort but if it DID work it'd be really funny LOL
[7:56] <runfromnowhere> Ideally I'd rather change it all to be a Ceph Object Store thing
[7:56] <runfromnowhere> But that still requires putting a shim in somewhere to transmute filesystem operations into object store operations
[7:56] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[7:57] <amatus> check out the source code to fakeroot
[7:57] <runfromnowhere> hmm
[7:57] <runfromnowhere> I'll read it
[7:58] <runfromnowhere> Thanks for the pointer :)
[7:58] <amatus> np
[8:00] * zack_dolby (~textual@e0109-114-22-0-206.uqwimax.jp) has joined #ceph
[8:04] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[8:05] * kuaizi1981 (~kuaizi198@218.94.128.51) has joined #ceph
[8:09] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[8:13] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) has joined #ceph
[8:14] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:24] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[8:25] * slo_ (~oftc-webi@194.249.247.164) Quit (Remote host closed the connection)
[8:32] * dgurtner (~dgurtner@102-236.197-178.cust.bluewin.ch) has joined #ceph
[8:33] * ashishchandra (~ashish@49.32.0.254) has joined #ceph
[8:35] * wangqty (~qiang@125.33.125.7) has joined #ceph
[8:38] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Read error: Operation timed out)
[8:39] <wangqty> Hi, I want to know: can an erasure-coded pool work with rbd in ceph v0.80.5? Thanks.
[8:41] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:44] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[8:51] * i_m (~ivan.miro@deibp9eh1--blueice3n2.emea.ibm.com) has joined #ceph
[8:52] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[8:53] * AfC (~andrew@93.94.208.154) has joined #ceph
[8:55] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[9:00] * hasues (~hazuez@12.216.44.38) Quit (Quit: Leaving.)
[9:05] * dis (~dis@109.110.66.126) has joined #ceph
[9:08] * zack_dolby (~textual@e0109-114-22-0-206.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:09] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:10] * zack_dolby (~textual@e0109-114-22-0-206.uqwimax.jp) has joined #ceph
[9:11] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:16] * analbeard (~shw@support.memset.com) has joined #ceph
[9:21] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[9:21] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[9:22] * vbellur (~vijay@209.132.188.8) has joined #ceph
[9:23] * AfC (~andrew@93.94.208.154) Quit (Quit: Leaving.)
[9:23] * rendar (~I@host39-6-dynamic.7-79-r.retail.telecomitalia.it) has joined #ceph
[9:25] * fdmanana (~fdmanana@bl5-77-181.dsl.telepac.pt) has joined #ceph
[9:26] * t4nk946 (~oftc-webi@103.6.103.83) has joined #ceph
[9:27] <t4nk946> Hi, is there anyone who can connect to ceph.com? I got a 403 Forbidden message.
[9:28] <liiwi> loads fine for me
[9:29] <t4nk946> Oh, then... my IP is blocked.
[9:31] * blackmen (~Ajit@121.244.87.115) has joined #ceph
[9:32] <t4nk946> How can I solve this? I don't understand why the admin blocks IPs.
[9:34] * garphy`aw is now known as garphy
[9:35] * dgurtner (~dgurtner@102-236.197-178.cust.bluewin.ch) Quit (Read error: Connection reset by peer)
[9:36] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:37] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[9:38] <t4nk946> Who is the ceph web admin? Please check the IP access config.
[9:41] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Remote host closed the connection)
[9:42] * dgurtner (~dgurtner@217.192.177.51) has joined #ceph
[9:42] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) Quit (Remote host closed the connection)
[9:42] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) has joined #ceph
[9:44] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[9:52] * hyperbaba (~hyperbaba@private.neobee.net) has joined #ceph
[9:54] * garphy is now known as garphy`aw
[9:57] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:59] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: ??????)
[9:59] * zack_dolby (~textual@e0109-114-22-0-206.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[9:59] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[10:00] * thomnico (~thomnico@2a01:e35:8b41:120:2135:9410:bc07:ad4a) has joined #ceph
[10:01] * garphy`aw is now known as garphy
[10:04] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:04] * t4nk946 (~oftc-webi@103.6.103.83) Quit (Remote host closed the connection)
[10:05] * zack_dolby (~textual@e0109-114-22-0-206.uqwimax.jp) has joined #ceph
[10:07] * bavila (~bavila@mail.pt.clara.net) Quit (Read error: Connection reset by peer)
[10:08] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: Lost terminal)
[10:16] * tab (~oftc-webi@194.249.247.164) has joined #ceph
[10:18] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[10:28] * vbellur (~vijay@121.244.87.117) has joined #ceph
[10:34] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[10:42] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[10:42] * garphy is now known as garphy`aw
[10:45] * yguang11 (~yguang11@2406:2000:ef96:e:e49a:2f90:ee48:5a63) Quit (Remote host closed the connection)
[10:46] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[10:48] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[10:52] * garphy`aw is now known as garphy
[10:54] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[10:58] * darkling (~hrm@00012bd0.user.oftc.net) has joined #ceph
[10:59] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[11:05] * AfC (~andrew@93.94.208.154) has joined #ceph
[11:07] * garphy is now known as garphy`aw
[11:09] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) has joined #ceph
[11:11] * rdas (~rdas@121.244.87.115) has joined #ceph
[11:14] * linjan (~linjan@176.195.6.203) has joined #ceph
[11:17] * zack_dolby (~textual@e0109-114-22-0-206.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:21] * jtaguinerd2 (~jtaguiner@203.215.116.66) has joined #ceph
[11:21] * jtaguinerd1 (~jtaguiner@203.215.116.66) Quit (Read error: Connection reset by peer)
[11:22] * apolloJess (~Thunderbi@202.60.8.252) Quit (Ping timeout: 480 seconds)
[11:23] * apolloJess (~Thunderbi@202.60.8.252) has joined #ceph
[11:27] * tremon (~arno.schu@87.213.105.245) has joined #ceph
[11:28] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: ??????)
[11:29] * lupu (~lupu@86.107.101.214) has joined #ceph
[11:33] * AfC (~andrew@93.94.208.154) Quit (Ping timeout: 480 seconds)
[11:36] * bkunal (~bkunal@121.244.87.115) has joined #ceph
[11:36] * bkunal (~bkunal@121.244.87.115) Quit (Remote host closed the connection)
[11:38] * cooldharma06 (~chatzilla@218.248.24.19) Quit (Remote host closed the connection)
[11:39] * florent (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) has joined #ceph
[11:45] * bkunal (~bkunal@121.244.87.115) has joined #ceph
[11:46] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[11:46] * bipinkunal (~bkunal@121.244.87.115) has joined #ceph
[11:47] * bkunal (~bkunal@121.244.87.115) Quit ()
[11:48] * bavila (~bavila@mail.pt.clara.net) has joined #ceph
[11:52] * lucas1 (~Thunderbi@222.240.148.154) Quit (Quit: lucas1)
[11:59] * AfC (~andrew@93.94.208.154) has joined #ceph
[12:02] * rdas (~rdas@121.244.87.115) Quit (Read error: Operation timed out)
[12:11] * Sysadmin88 (~IceChat77@176.250.164.108) Quit (Quit: Pull the pin and count to what?)
[12:13] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[12:21] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[12:37] * AfC (~andrew@93.94.208.154) Quit (Ping timeout: 480 seconds)
[12:41] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) has joined #ceph
[12:44] * rdas (~rdas@121.244.87.115) has joined #ceph
[12:45] * rdas (~rdas@121.244.87.115) Quit ()
[12:51] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[13:00] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[13:03] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[13:04] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[13:04] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[13:05] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[13:05] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[13:06] * wangqty (~qiang@125.33.125.7) Quit (Quit: Leaving.)
[13:06] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[13:06] * garphy`aw is now known as garphy
[13:06] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[13:07] <boichev> can someone help me with an initial Ceph installation? I added 2 OSDs that are IN, but not UP..... I have 1 mon on the same node http://pastebin.com/W8xpcc8d
[13:11] <flaf> boichev: it's basic, but have you started the osd daemons on the hosts? (netstat -ltunp)
[13:12] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[13:12] <boichev> flaf I see 10 ceph-osd processes
[13:12] <boichev> flaf I tried with "ceph-disk activate-all" and it didn't seem to help
[13:12] * lucas1 (~Thunderbi@222.247.57.50) Quit ()
[13:15] <flaf> boichev: sorry I have no idea.
[13:17] * i_m (~ivan.miro@deibp9eh1--blueice3n2.emea.ibm.com) Quit (Quit: Leaving.)
[13:17] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[13:18] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[13:19] <flaf> boichev: maybe you can try a telnet between the OSDs (with the ports of ceph-osd)
[13:22] <boichev> flaf it works http://pastebin.com/ejSqASaX
[13:23] * MrBy3 (~MrBy@85.115.23.42) Quit (Quit: Leaving)
[13:23] <flaf> You should try to telnet to a ceph-osd from another OSD.
[13:24] <boichev> I have only 1 host with 1 mon and 2 osds
[13:24] * branto (~borix@178.253.163.238) Quit (Ping timeout: 480 seconds)
[13:24] <flaf> Ah sorry.
[13:30] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[13:33] * zhaochao (~zhaochao@111.204.252.1) has left #ceph
[13:34] <steveeJ> boichev: can you show "ceph osd tree" ?
[13:35] * branto (~borix@178.253.163.238) has joined #ceph
[13:36] <boichev> steveeJ http://pastebin.com/Bf19iAsT
[13:37] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Remote host closed the connection)
[13:38] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[13:38] <steveeJ> boichev: and "ceph osd dump" without the grep
[13:40] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[13:41] <boichev> steveeJ http://pastebin.com/4LsBfTPn
[13:42] <steveeJ> that's odd, the sockets are all zero
[13:43] <steveeJ> i'm never a big help with deployments since on gentoo ceph-deploy doesn't work and i have no idea what could go wrong
[13:43] <boichev> i didn't use ceph-deploy
[13:43] <steveeJ> but, do you have entries for your osds in your ceph.conf ?
[13:44] <steveeJ> oh that's good then
[13:44] <boichev> can it be that replicated size = min_size?
[13:44] <steveeJ> no, that is not related to up/down state of the OSDs
[13:44] <steveeJ> it's a connection problem
[13:45] <steveeJ> can you show your ceph.conf?
[13:46] <steveeJ> we'll talk about min_size/size later when your OSDs are up ;)
[13:46] <boichev> http://pastebin.com/ZtVaCE1g
[13:48] <steveeJ> try adding 'osd addr = 192.18.10.3' to your osd.{0,1} sections
[13:48] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:49] <steveeJ> erm, correct my typo too ;)
[13:50] <steveeJ> your mds doesn't show either in "ceph -s"
[13:50] <steveeJ> so add the addr to your mds section too
[13:50] <steveeJ> but that could be another deployment problem. let's stick with OSDs first
[13:51] <boichev> same issue
[13:52] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) Quit (Quit: Leaving.)
[13:52] <steveeJ> have you restarted the osd daeons?
[13:52] <steveeJ> *daemons
[13:52] <boichev> restarted everything
[13:52] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) has joined #ceph
[13:53] <steveeJ> okay, try adding "public network = 192.168.10.0/24" to your [global] section
[13:55] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) Quit ()
[13:55] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) has joined #ceph
[13:55] <boichev> :( same probably ... starting osd.0 at :/0 osd_data /srv/ceph/osd0 /srv/ceph/osd0/journal
[13:56] <boichev> :/0 shold be something else
[13:56] <steveeJ> yeah it should
[13:58] <steveeJ> is there anything on those OSDs? otherwise i suggest starting from scratch
[13:58] <steveeJ> i haven't had the problem and there's nothing more to configure
[13:58] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:59] * apolloJess (~Thunderbi@202.60.8.252) Quit (Quit: apolloJess)
[14:01] <steveeJ> maybe then try specifying a port for the monitor
[14:01] * drankis (~drankis__@89.111.13.198) has joined #ceph
[14:01] <steveeJ> mon addr = 192.168.10.3:6789/0
[14:02] <boichev> in global or in [mon]
[14:02] <steveeJ> just add "/0" to your existing line in [mon.0]
[14:03] <boichev> ok
[14:03] <steveeJ> annother thing, can "storage1" be resolved on that host?
[14:03] <boichev> yes
[14:03] <boichev> and it matches hostname -s
[14:03] <steveeJ> have you deployed the OSDs before the mon?
[14:04] <boichev> not sure :)
[14:04] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[14:04] <boichev> can the mds stop the OSD from going up?
[14:04] <boichev> because the mds dies with ERROR: failed to authenticate:
[14:05] <boichev> probably some error with the keyring
[14:05] <steveeJ> well, cephx is not properly setup from what i can see
[14:05] <steveeJ> i'll provide my config if you like to compare things
[14:05] <boichev> thanks :)
[14:06] <steveeJ> http://pastebin.com/hubRH09H
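
In outline, the settings steveeJ has suggested so far amount to a ceph.conf along these lines (a sketch using the addresses from this conversation, not a verified working configuration; the /0 suffix is discussed just below):

    [global]
        public network = 192.168.10.0/24
    [mon.0]
        host = storage1
        mon addr = 192.168.10.3:6789/0
    [osd.0]
        host = storage1
    [osd.1]
        host = storage1
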
[14:07] <flaf> steveeJ: what is the meaning of "/0" in "mon addr = 192.168.100.1:6789/0"?
[14:07] <loicd> florent: \o
[14:08] <florent> loicd, hi, how's it going?
[14:08] <loicd> apparently there is a form that's still valid to claim your t-shirt
[14:08] <loicd> florent: super :-)
[14:08] <loicd> https://docs.google.com/forms/d/1Pzs-bp7g1Q52rqCCNOE-i5GDvkgBgde8-foX9O2gLUg/viewform
[14:08] * garphy is now known as garphy`aw
[14:08] <florent> loicd, cool ; )
[14:09] <loicd> I can't remember if I filled it... :-)
[14:09] <steveeJ> flaf: i've had trouble getting connections to the mon without them. i found that syntax in the logs and tried adjusting my config
[14:10] <loicd> ccourtaut: ping ?
[14:11] <florent> loicd, form filled ; )
[14:11] <loicd> cool
[14:11] <steveeJ> flaf: if i remember correctly i found the syntax in the monmap
[14:12] <flaf> steveeJ: but what is the meaning of this syntax?
[14:13] * lupu (~lupu@86.107.101.214) has joined #ceph
[14:13] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[14:14] <steveeJ> flaf: it's not mentioned anywhere in docs. so that's a good question. /XX usually means a netmask
[14:15] <flaf> oops, sorry, it's a netmask. Yes, it's curious.
[14:16] <steveeJ> i'm tempted to try again without it
[14:16] <steveeJ> but i don't want to cause too much trouble right now
[14:17] <steveeJ> well, no risk no fun.
[14:17] <flaf> :)
[14:18] <steveeJ> okay, absolutely no effect on one host
[14:18] * garphy`aw is now known as garphy
[14:18] <steveeJ> makes sense for /0 making no change
[14:18] <steveeJ> that doesn't help boichev though
[14:20] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[14:21] <flaf> 0 is a curios value for a netmask. 32 I understand but 0...
[14:21] <flaf> *curious
[14:24] * ksingh (~Adium@2001:708:10:10:3d10:1b37:b213:694f) has joined #ceph
[14:26] <ksingh> Calamari people, I need help: after installing calamari I am getting the message "3 Ceph servers are connected to Calamari, but no Ceph cluster has been created yet."
[14:26] <ksingh> how do I add an existing ceph cluster to calamari?
[14:28] <boichev> to start everything from scratch I should just delete /etc/ceph and /srv/ceph/* yes ?
[14:29] * CAPSLOCK2000 (~oftc@2001:610:748:1::8) Quit (Quit: ZNC - http://znc.in)
[14:30] * CAPSLOCK2000 (~oftc@2001:610:748:1::8) has joined #ceph
[14:33] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[14:39] * michalefty1 (~micha@p20030071CE0266553D3DE1871930706A.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[14:41] * florent (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) Quit (Ping timeout: 480 seconds)
[14:42] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[14:43] * florent (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) has joined #ceph
[14:50] * michalefty (~micha@p20030071CE7119283D3DE1871930706A.dip0.t-ipconnect.de) has joined #ceph
[14:51] * analbeard (~shw@support.memset.com) Quit (Remote host closed the connection)
[14:51] * analbeard (~shw@support.memset.com) has joined #ceph
[14:51] * jtaguinerd (~jtaguiner@112.205.22.106) has joined #ceph
[14:55] * ashishchandra (~ashish@49.32.0.254) Quit (Quit: Leaving)
[14:57] * jtaguinerd2 (~jtaguiner@203.215.116.66) Quit (Ping timeout: 480 seconds)
[15:04] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[15:05] * RameshN (~rnachimu@101.222.253.181) has joined #ceph
[15:15] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[15:17] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[15:19] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[15:20] * jtaguinerd (~jtaguiner@112.205.22.106) Quit (Quit: Leaving.)
[15:23] <steveeJ> boichev: no, not all of that, just the OSDs
[15:24] <steveeJ> boichev: try following http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ in reverse. so just completely remove your existing osds
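
Per that doc, removing an OSD boils down to roughly the following sequence (a sketch for osd.0; repeat per OSD and adjust the ids):

    ceph osd out 0                 # stop new data from being placed on it
    service ceph stop osd.0        # stop the daemon (init-system dependent)
    ceph osd crush remove osd.0    # remove it from the CRUSH map
    ceph auth del osd.0            # delete its authentication key
    ceph osd rm 0                  # remove it from the cluster
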
[15:24] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[15:28] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[15:29] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[15:30] * coreping (~Michael_G@hugin.coreping.org) Quit (Quit: WeeChat 0.4.3)
[15:30] * coreping (~Michael_G@hugin.coreping.org) has joined #ceph
[15:39] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[15:42] * mathias (~mathias@tmo-097-127.customers.d1-online.com) has joined #ceph
[15:42] <mathias> in which package (ubuntu) do I find the "rdb" command?
[15:43] <Kioob`Taff> rbd*, not rdb
[15:43] <Kioob`Taff> in ceph package
[15:43] * capri_on (~capri@212.218.127.222) has joined #ceph
[15:44] <mathias> ahhh stupid ...
[15:44] <mathias> thx
[15:44] <Kioob`Taff> (ceph-common under debian)
[15:44] * jtaguinerd (~jtaguiner@203.215.116.66) has joined #ceph
[15:45] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Quit: Leaving.)
[15:45] * bipinkunal (~bkunal@121.244.87.115) Quit (Ping timeout: 480 seconds)
[15:46] * kuaizi1981 (~kuaizi198@218.94.128.51) Quit (Read error: Connection reset by peer)
[15:47] * kuaizi1981 (~kuaizi198@218.94.128.51) has joined #ceph
[15:50] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[15:51] <mathias> how do I do authentication for remote systems? I figured out there are these keyrings on the ceph (admin) nodes, but distributing the admin keyring is probably not the best idea. I didn't find this in the documentation yet ...
[15:53] * marvin0815 (~oliver.bo@dhcp-admin-217-66-51-235.pixelpark.com) Quit (Quit: leaving)
[15:56] <mathias> finally found it - no worries
[15:58] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[15:58] <flaf> mathias: where please?
[16:00] * branto (~borix@178.253.163.238) has left #ceph
[16:06] <mathias> flaf: at least this looks like it (still reading) http://ceph.com/docs/master/rados/operations/authentication/
[16:06] * jtaguinerd (~jtaguiner@203.215.116.66) Quit (Quit: Leaving.)
[16:06] * jtaguinerd (~jtaguiner@203.215.116.66) has joined #ceph
[16:08] <flaf> mathias: thx
[16:08] <mathias> but I don't fully understand the second part of the "ADD A KEY" section
[16:08] <mathias> in the first part I create that key ... well, then I have it sitting on the ceph admin node. And next I copy a different file from a cluster node to my client node?
[16:08] <mathias> not getting it ...
[16:09] <amatus> mathias: sup
[16:10] <mathias> amatus: I don't understand how I authorize a new client to access the cluster. I followed the link above (http://ceph.com/docs/master/rados/operations/authentication/#add-a-key) and created a new key (keyring.foo)
[16:10] <mathias> running ceph auth list seems to print the new client regardless of the ceph node I run it on - looks good
[16:10] <mathias> but whats that second step all about?
[16:12] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[16:12] <flaf> I have the same questions about keyrings. For example, a precise question: on a future OSD node, should I install the client.admin keyring file?
[16:16] <mathias> Why do we create that keyring.foo file with the client key and then don't use it any further?
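
For reference, the "add a key" step being discussed can be done in one command; a sketch with a hypothetical client name and pool (get-or-create both registers the key with the cluster and writes the keyring file that is then copied to the client host):

    ceph auth get-or-create client.foo \
        mon 'allow r' osd 'allow rwx pool=foo' \
        -o /etc/ceph/ceph.client.foo.keyring
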
[16:18] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Read error: Connection reset by peer)
[16:18] * sleinen (~Adium@2001:620:0:68::101) has joined #ceph
[16:21] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[16:22] <amatus> mathias: do you have /var/lib/ceph/bootstrap-osd/ceph.keyring and /var/lib/ceph/bootstrap-mds/ceph.keyring?
[16:22] <amatus> i found those and used them to bootstrap new osds and mdss
[16:23] * _NiC (~kristian@aeryn.ronningen.no) Quit (Ping timeout: 480 seconds)
[16:25] <mathias> amatus: I have /var/lib/ceph/bootstrap-osd/ceph.keyring on the OSDs and MONs. It includes a single key only
[16:27] * _NiC (~kristian@aeryn.ronningen.no) has joined #ceph
[16:30] <amatus> anyone have any ideas for debugging a cephfs directory that just hangs when you try to access it?
[16:31] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[16:31] * _NiC (~kristian@aeryn.ronningen.no) Quit ()
[16:31] * _NiC (~kristian@aeryn.ronningen.no) has joined #ceph
[16:38] * bipinkunal (~bkunal@1.22.78.3) has joined #ceph
[16:40] * KevinPerks (~Adium@2606:a000:80a1:1b00:ec3f:9ce3:5841:a621) has joined #ceph
[16:44] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[16:45] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit ()
[16:45] * ksingh (~Adium@2001:708:10:10:3d10:1b37:b213:694f) has left #ceph
[16:46] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[16:46] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:48] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[16:48] * michalefty (~micha@p20030071CE7119283D3DE1871930706A.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[16:48] * mathias (~mathias@tmo-097-127.customers.d1-online.com) Quit (Quit: leaving)
[16:50] * KevinPerks (~Adium@2606:a000:80a1:1b00:ec3f:9ce3:5841:a621) Quit (Quit: Leaving.)
[16:51] * florent (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) Quit (Ping timeout: 480 seconds)
[16:55] * florent (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) has joined #ceph
[17:06] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[17:08] * RameshN (~rnachimu@101.222.253.181) Quit (Ping timeout: 480 seconds)
[17:08] * linjan (~linjan@176.195.6.203) Quit (Ping timeout: 480 seconds)
[17:10] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:10] * linjan (~linjan@176.195.6.203) has joined #ceph
[17:11] * lofejndif (~lsqavnbok@tor-exit0-readme.dfri.se) has joined #ceph
[17:14] * dneary (~dneary@96.237.180.105) has joined #ceph
[17:14] * KevinPerks (~Adium@2606:a000:80a1:1b00:f168:fadf:43:a31b) has joined #ceph
[17:16] * vbellur (~vijay@122.172.242.182) has joined #ceph
[17:27] * dneary (~dneary@96.237.180.105) Quit (Ping timeout: 480 seconds)
[17:29] * lupu (~lupu@86.107.101.214) has joined #ceph
[17:31] * garphy is now known as garphy`aw
[17:33] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[17:36] * RameshN (~rnachimu@101.222.247.174) has joined #ceph
[17:37] * Pedras (~Adium@50.185.218.255) has joined #ceph
[17:39] * garphy`aw is now known as garphy
[17:45] <iggy> crank up debug output on the mds?
[17:47] * mtl2 (~Adium@c-98-245-49-17.hsd1.co.comcast.net) has joined #ceph
[17:48] <amatus> i found that i had a process d-stated with that directory open, unkillable
[17:48] <amatus> i had to reboot, which is not going to be ok in production
[17:48] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:49] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[17:49] <iggy> cephfs isn't production ready yet, so not surprising
[17:50] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[17:50] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:50] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:54] * mtl1 (~Adium@c-98-245-49-17.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[17:55] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:56] * linjan (~linjan@176.195.6.203) Quit (Ping timeout: 480 seconds)
[17:57] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:59] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[18:05] <jcsp> amatus: you should increase the log level (http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/#subsystem-log-and-debug-settings) on the MDS and the client, so that if it happens again we'll have the logs.
[18:07] <amatus> ok thanks
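
Raising the log levels per that page can be done at runtime or persistently; a sketch with illustrative values:

    # at runtime:
    ceph tell mds.0 injectargs '--debug-mds 20 --debug-ms 1'
    # or persistently, in the [mds] section of ceph.conf:
    #   debug mds = 20
    #   debug ms = 1
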
[18:22] * jtaguinerd1 (~jtaguiner@112.205.22.106) has joined #ceph
[18:26] * dgurtner (~dgurtner@217.192.177.51) Quit (Read error: Operation timed out)
[18:26] * sleinen (~Adium@2001:620:0:68::101) Quit (Read error: Connection reset by peer)
[18:29] * jtaguinerd (~jtaguiner@203.215.116.66) Quit (Ping timeout: 480 seconds)
[18:31] * blackmen (~Ajit@121.244.87.115) Quit (Quit: Leaving)
[18:31] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[18:36] * RameshN (~rnachimu@101.222.247.174) Quit (Ping timeout: 480 seconds)
[18:37] * darkling (~hrm@00012bd0.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:37] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[18:38] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:38] * florent (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) Quit (Quit: Leaving)
[18:39] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Operation timed out)
[18:40] * hasues (~hazuez@12.216.44.38) has joined #ceph
[18:46] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:46] * tremon (~arno.schu@87.213.105.245) Quit (Quit: Leaving)
[18:49] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[18:49] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[18:51] * sleinen1 (~Adium@2001:620:0:68::102) has joined #ceph
[18:56] * masta (~masta@190.7.213.210) has joined #ceph
[18:57] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:59] * mathias (~mathias@pd95b4613.dip0.t-ipconnect.de) has joined #ceph
[18:59] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[19:00] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[19:01] <mathias> I am trying to map an RBD to my client but get a permission denied - my caps seem OK to me: http://pastebin.com/hQgUswHJ Any ideas?
[19:03] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[19:12] * alekseyp (~masta@190.7.213.210) has joined #ceph
[19:12] * masta (~masta@190.7.213.210) Quit (Read error: Connection reset by peer)
[19:22] * dgurtner (~dgurtner@15-236.197-178.cust.bluewin.ch) has joined #ceph
[19:28] * adebarbara (~adebarbar@200.0.230.235) has joined #ceph
[19:32] <adebarbara> Quick question, on this page http://ceph.com/docs/master/start/os-recommendations/#linux-kernel, ceph recommends v3.14 or later, but also recommends platforms such as Trusty Tahr (Ubuntu LTS). I have trusty, which comes with 3.13.0; do I have to upgrade to the 3.14 Linux kernel on that box?
[19:32] * dgurtner (~dgurtner@15-236.197-178.cust.bluewin.ch) Quit (Read error: Connection reset by peer)
[19:36] <flaf> adebarbara: I don't think so. I have installed ceph on Trusty directly with the ceph package, without specific manipulations.
[19:37] <adebarbara> Is there any advantage to kernel 3.14 (for ceph), and is that why it's recommended?
[19:38] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[19:38] <runfromnowhere> ^ I'm also interested in this as well, coming from a trusty-based release
[19:40] <flaf> Good question, I don't know. Sorry.
[19:40] <runfromnowhere> Well
[19:40] <runfromnowhere> Just to put some things on the pile
[19:40] <runfromnowhere> I've had no problem deploying on 3.13
[19:40] <runfromnowhere> Just wondering if 3.14 adds fixes/benefits to the in-kernel portions :)
[19:42] <adebarbara> flaf, thanks anyway. So, you are using it without any problem over 3.13?
[19:44] <flaf> em... currently I'm just testing ceph in VM (trusty). Nothing in production. I'm not ready. ;)
[19:44] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:45] * sleinen1 (~Adium@2001:620:0:68::102) Quit (Read error: Connection reset by peer)
[19:45] <flaf> I can just say I have no issue with ceph in trusty.
[19:45] <flaf> (apt-get install ceph etc. no problem)
[19:45] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Read error: No route to host)
[19:45] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:45] <runfromnowhere> Trusty has some VM issues just FYI
[19:46] <runfromnowhere> Unrelated to ceph
[19:46] <runfromnowhere> But can cause fairly hard to diagnose issues with VMs hosted on Trusty
[19:47] <flaf> Ah ok. Thx.
[19:47] * sleinen1 (~Adium@2001:620:0:68::104) has joined #ceph
[19:47] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[19:48] <adebarbara> runfromnowhere, what kind of VMs?
[19:49] <absynth_> trusty is 14.4, right?
[19:49] * diegows (~diegows@190.190.5.238) has joined #ceph
[19:49] <adebarbara> trusty == 14.04.1 (Ubuntu LTS)
[19:53] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:54] * adamcrume (~quassel@2601:9:6680:47:9436:783b:dc33:41a0) has joined #ceph
[19:54] * adamcrume_ (~quassel@2601:9:6680:47:9436:783b:dc33:41a0) has joined #ceph
[19:54] * adamcrume_ (~quassel@2601:9:6680:47:9436:783b:dc33:41a0) Quit (Remote host closed the connection)
[19:56] <runfromnowhere> adebarbara: Kernel revs before 3.13.0-33 have a serious issue with KSM, which is enabled by default
[19:56] <runfromnowhere> For qemu/kvm I believe
[19:56] <runfromnowhere> I'm not sure if it would cause an issue with Xen
[19:57] <runfromnowhere> But it can cause extremely unpredictable behavior in your virtual guests
[19:57] <runfromnowhere> Packet loss, scheduler lag, crashes
[19:57] * jtaguinerd1 (~jtaguiner@112.205.22.106) Quit (Quit: Leaving.)
[19:59] * bjornar_ (~bjornar@ti0099a430-0158.bb.online.no) has joined #ceph
[19:59] <bjornar_> Is the snapshot/cow feature enabled in trusty/icehouse by default?
[19:59] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:00] <bjornar_> I mean, does the default trusty repository contain the cow-patch?
[20:01] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[20:06] <mathias> I am trying to map an RBD to my client but get a permission denied - my caps seem OK to me: http://pastebin.com/hQgUswHJ Any ideas?
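
For comparison, a client that maps RBD images typically needs caps along these lines (a sketch with a hypothetical client.foo, not a diagnosis of the paste above):

    ceph auth caps client.foo mon 'allow r' osd 'allow rwx pool=rbd'
    rbd map myimage --id foo --keyring /etc/ceph/ceph.client.foo.keyring
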
[20:09] <adebarbara> runfromnowhere, trusty is currently on 3.13.0-35, so I suppose it's fixed right now.
[20:12] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[20:15] * mathias (~mathias@pd95b4613.dip0.t-ipconnect.de) Quit (Quit: leaving)
[20:17] <runfromnowhere> adebarbara: Yeah you should be good with that kernel version. It's just a really nasty issue and can be super-tough to track down so I like to warn people when I can
[20:20] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[20:21] <adebarbara> runfromnowhere, Thanks, I think you probably saved more than one headache; we're currently deploying that in production and I don't know if we are deploying the latest kernel. I'm gonna double-check that.
[20:21] <adebarbara> Do you have any reference to point me to (bug report, etc.)?
[20:24] * mathias (~mathias@pd95b4613.dip0.t-ipconnect.de) has joined #ceph
[20:31] <mathias> I just did a rados ls on my rbd pool and noticed there is one object that has the name of my block device. As I started writing I could see that multiple other objects were created in the pool with names like "rb.0.107e.2ae8944a.000000000018" - is that normal behavior? Isn't it going to be kind of confusing to ls a pool that is actually used?
[20:38] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:40] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[20:43] <fghaas> mathias: that is exactly the intended behavior
[20:43] <fghaas> you can do "rbd info" to get the block device prefix on an rbd image
[20:44] <fghaas> the first object you see is the header object that holds the image metadata, and the subsequent objects are the actual data that you write into the image
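
A sketch of the rbd info lookup fghaas mentions, with a hypothetical image name; the prefix in the output corresponds to the rb.0.107e... object names above:

    rbd info rbd/myimage
    # ...
    #   block_name_prefix: rb.0.107e.2ae8944a
    # data objects are then named <prefix>.<object number>, e.g.
    # rb.0.107e.2ae8944a.000000000018
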
[20:45] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[20:50] * vbellur (~vijay@122.172.242.182) Quit (Read error: Connection reset by peer)
[20:52] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[20:58] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[20:59] * linjan (~linjan@176.195.6.203) has joined #ceph
[21:01] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[21:02] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[21:07] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[21:09] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[21:11] * kuaizi1981 (~kuaizi198@218.94.128.51) Quit (Read error: Connection reset by peer)
[21:11] * kuaizi1981 (~kuaizi198@218.94.128.51) has joined #ceph
[21:24] * rendar (~I@host39-6-dynamic.7-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:26] <mathias> fghaas: can a user access those other objects directly and break stuff?
[21:26] <fghaas> ah but of course :)
[21:26] <mathias> fghaas: exactly what I expected :D
[21:26] <mathias> thx ;)
[21:26] <fghaas> of course the ser would have to have direct access to your rados pool
[21:27] <fghaas> s/ser/user/
[21:27] <kraken> fghaas meant to say: of course the user would have to have direct access to your rados pool
[21:27] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[21:27] * rendar (~I@host39-6-dynamic.7-79-r.retail.telecomitalia.it) has joined #ceph
[21:27] <fghaas> as opposed to just using the block device through a map or through librbd or qemu
[21:28] <mathias> well not perfect when we use librados from php or something similar
[21:30] <fghaas> that's why you put your rbds in one pool, and the stuff that your php app uses in another
[21:31] <fghaas> and create users that you just restrict to one pool
[21:31] <mathias> ah ok, that is only how it works for RBD, not for regular objects?
[21:31] <fghaas> what?
[21:32] * dignus (~jkooijman@t-x.dignus.nl) Quit (Ping timeout: 480 seconds)
[21:33] <mathias> do I only get these rb.0.107e.2ae8944a.* objects using RBD, and will not see them with "regular" objects like those created with "rados"?
[21:33] <iggy> don't think of rbd objects differently
[21:33] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[21:34] <iggy> other than the consistent naming of course
[21:34] <runfromnowhere> adebarbara: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1346917
[21:34] <adebarbara> runfromnowhere, thanks, already found it ;)
[21:36] * garphy is now known as garphy`aw
[21:36] * designated_ (~rroberts@host-177-39-52-24.midco.net) has joined #ceph
[21:36] <runfromnowhere> Ah cool
[21:38] <adebarbara> runfromnowhere, Also found this http://tracker.ceph.com/issues/8818
[21:39] <adebarbara> I see no mention of 3.13.0 ... anyone having issues on trusty (3.13.0-35)?
[21:40] * BManojlovic (~steki@95.180.4.243) Quit (Ping timeout: 480 seconds)
[21:40] * designated (~rroberts@host-177-39-52-24.midco.net) Quit (Ping timeout: 480 seconds)
[21:41] * fdmanana (~fdmanana@bl5-77-181.dsl.telepac.pt) Quit (Quit: Leaving)
[21:41] <runfromnowhere> adebarbara: Oh interesting, although it seems to be safe for 3.13
[21:41] <mathias> adebarbara: I am running trusty and noticed high network latencies a lot but didn't know where they were coming from
[21:41] <mathias> I am checking
[21:42] <runfromnowhere> The ticket claims the deadlock was introduced in 3.15
[21:42] * dignus (~jkooijman@t-x.dignus.nl) Quit (Ping timeout: 480 seconds)
[21:43] * mathias (~mathias@pd95b4613.dip0.t-ipconnect.de) Quit (Quit: leaving)
[21:45] * garphy`aw is now known as garphy
[21:46] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[21:52] * dignus_ (~jkooijman@t-x.dignus.nl) has joined #ceph
[21:54] * linjan (~linjan@176.195.6.203) Quit (Ping timeout: 480 seconds)
[21:58] * dignus (~jkooijman@t-x.dignus.nl) Quit (Ping timeout: 480 seconds)
[21:58] * hasues (~hazuez@12.216.44.38) Quit (Quit: Leaving.)
[21:59] * steki (~steki@net53-179-245-109.mbb.telenor.rs) has joined #ceph
[21:59] <adebarbara> You guys are right, I misread something mentioning 3.13.0-24
[22:06] * bipinkunal (~bkunal@1.22.78.3) Quit (Ping timeout: 480 seconds)
[22:08] * dignus_ (~jkooijman@t-x.dignus.nl) Quit (Read error: Operation timed out)
[22:39] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[22:44] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:44] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[22:48] * garphy is now known as garphy`aw
[23:09] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:10] * adebarbara (~adebarbar@200.0.230.235) Quit (Ping timeout: 480 seconds)
[23:22] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) has joined #ceph
[23:26] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[23:29] * steki (~steki@net53-179-245-109.mbb.telenor.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:30] * steki (~steki@net244-187-245-109.mbb.telenor.rs) has joined #ceph
[23:30] * bjornar_ (~bjornar@ti0099a430-0158.bb.online.no) Quit (Ping timeout: 480 seconds)
[23:31] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[23:32] * BManojlovic (~steki@net194-168-245-109.mbb.telenor.rs) has joined #ceph
[23:38] * steki (~steki@net244-187-245-109.mbb.telenor.rs) Quit (Ping timeout: 480 seconds)
[23:41] * DV_ (~veillard@veillard.com) has joined #ceph
[23:43] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[23:44] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[23:51] * BManojlovic (~steki@net194-168-245-109.mbb.telenor.rs) Quit (Ping timeout: 480 seconds)
[23:51] * tinklebear (~tinklebea@66.55.144.246) has joined #ceph
[23:55] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.