#ceph IRC Log

Index

IRC Log for 2014-09-03

Timestamps are in GMT/BST.

[0:00] * bandrus (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[0:00] <dmick> nothing's stopping that from being added as explicit options to rbd_fuse.c, it's just not there now. If you have problems, ask; that stuff can be hard to debug
[0:02] * bandrus (~Adium@216.57.72.205) has joined #ceph
[0:03] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[0:03] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[0:04] <steveeJ> it seems to be really slow. that could however be my docker's network configuration
[0:04] <sreddy> I've been trying to deploy ceph using ceph-deploy, behind a firewall. I've set up a local repo.
[0:05] <sreddy> someone here helped me download the release.asc file on which ceph-deploy keeps failing
[0:05] <sreddy> I've copied the file to the local repo location: /opt/repo/rpm-firefly/rhel6/release.asc
[0:06] <sreddy> still ceph-deploy is unable to read the file
[0:06] <sreddy> all the directory perms are at 755
[0:06] <dmick> sreddy: start with the error
[0:07] <sreddy> [wdc01cpm001ccz020][WARNIN] This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. [wdc01cpm001ccz020][INFO ] Running command: sudo rpm --import /opt/repo/rpm-firefly/rhel6/release.asc [wdc01cpm001ccz020][WARNIN] error: /opt/repo/rpm-firefly/rhel6/release.asc: import read failed(2). [wdc01cpm001ccz020][ERROR ] RuntimeError: command returned non-zero exit status: 1 [ceph_deploy][ERROR ] RuntimeError: Fa
[0:07] <sreddy> error: /opt/repo/rpm-firefly/rhel6/release.asc: import read failed(2).
[0:08] <sreddy> RuntimeError: Failed to execute command: rpm --import /opt/repo/rpm-firefly/rhel6/release.asc
[0:09] <dmick> import read failed(2) *seems* to indicate ENOENT (i.e. can't find the file)
[0:09] <dmick> long shot: are you sure that the user you're running ceph-deploy as has read permission to every directory on that path?
[0:09] <dmick> or rather search permission
[0:09] <dmick> ('x')
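dmick's search-permission point can be checked mechanically. A small sketch: it tests only the "other" execute bit, which is what applies when root-owned directories are traversed by an unprivileged user (/etc/passwd stands in for the real path from the log):

```python
import os

def check_search_perms(path):
    """Return (directory, has_other_x) for every directory above `path`.
    A missing 'x' (search) bit on any ancestor makes the file unreachable
    for users other than the owner, even when the file itself is 644."""
    results = []
    p = os.path.dirname(os.path.abspath(path))
    while True:
        mode = os.stat(p).st_mode
        results.append((p, bool(mode & 0o001)))  # the o+x bit
        parent = os.path.dirname(p)
        if parent == p:  # reached the filesystem root
            break
        p = parent
    return results

for d, ok in check_search_perms("/etc/passwd"):
    print(f"{d}: {'ok' if ok else 'MISSING o+x'}")
```

On a live system, `namei -l /opt/repo/rpm-firefly/rhel6/release.asc` (from util-linux) gives the same per-element view in one command.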
[0:09] <sreddy> let me try
[0:11] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[0:15] <sreddy> @dmick, permissions are r+x
[0:15] <cephalobot> sreddy: Error: "dmick," is not a valid command.
[0:16] * dmsimard is now known as dmsimard_away
[0:16] <dmick> permissions are three bitfields, and there are five elements in that path, so, can I assume you mean "the all bits" and "on each of those five elements"?
[0:17] <dmick> (or "the other bits" if you like)?
[0:17] <sreddy> -rwxr-xr-x 1 root root 1752 Sep 2 22:13 release.asc
[0:17] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[0:17] <sreddy> drwxr-xr-x 3 root root 4096 Sep 2 19:11 repo
[0:18] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[0:18] <sreddy> drwxr-xr-x 3 root root 4096 Sep 2 19:11 rpm-firefly
[0:25] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[0:26] <sreddy> dmick, is there a way to disable this import of pubkey?
[0:27] * Gill (~Gill@static-72-80-16-227.nycmny.fios.verizon.net) Quit (Quit: Gill)
[0:28] <steveeJ> dmick: rbd-fuse does not seem to work with CEPH_ARGS. df: ‘/mnt/rbd/docker’: Transport endpoint is not connected
[0:28] <steveeJ> the rbd command works. i can list images inside the pool
[0:28] * jobewan (~jobewan@snapp.centurylink.net) Quit (Quit: Leaving)
[0:30] * jdillaman (~jdillaman@pool-108-18-232-208.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[0:30] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[0:31] <dmick> sreddy: try "strace -o /tmp/strace.out -f /usr/bin/rpm --import /opt/repo/rpm-firefly/rhel6/release.asc". It's odd that it's failing if all the directories are searchable
[0:31] <dmick> steveeJ: unfortunately all that says is "something went wrong", it doesn't really narrow down what
[0:31] <steveeJ> will take a bit
[0:32] <dmick> there are generic fuse options that will run rbd-fuse in debug mode, IIRC
[0:32] <steveeJ> whoops, thought the strace command was for me
[0:32] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:32] * jdillaman (~jdillaman@pool-108-18-232-208.washdc.fios.verizon.net) has joined #ceph
[0:32] <steveeJ> didn't read properly, time for bed soon
[0:33] * sreddy (~oftc-webi@32.97.110.56) Quit (Remote host closed the connection)
[0:33] <steveeJ> dmick: I'd really be happy to get this working. I want to have rbd image mapping in userspace
[0:34] <steveeJ> I'll try as admin, just to be sure it's a CEPH_ARGS problem
[0:34] <dmick> what was your exact command?
[0:35] <steveeJ> CEPH_ARGS="-n client.docker" rbd-fuse -p docker /mnt/rbd/docker/
[0:35] * _nitti_ (~nitti@162.222.47.218) Quit (Remote host closed the connection)
[0:35] <steveeJ> i've tried --id docker and --name client.docker too
[0:35] <dmick> and so "rbd works" means you can do "rbd -n client.docker -p docker ls"?
[0:36] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[0:36] <steveeJ> right now I've switched back to admin
[0:36] <steveeJ> and "rbd ls -l docker" works
[0:36] <dmick> but wait, is docker a pool or an image?
[0:37] <steveeJ> docker is a pool. and "rbd-fuse -p docker /mnt/rbd/docker/" works too
[0:37] <steveeJ> as admin, specifying the keyfile = .. option in the ceph.conf
[0:37] <steveeJ> caps: [mon] allow r
[0:37] <steveeJ> caps: [osd] allow * pool=docker
[0:37] <dmick> rbd ls -l docker should be interrogating the docker image, is why I'm confused
[0:37] <steveeJ> would be enough to use rbd-fuse right?
[0:38] <steveeJ> dmick: really? i always check out my pools like that. it shows snapshots and their parents too
[0:38] <steveeJ> ls [-l | --long] [pool-name]
[0:38] <dmick> huh. ok
[0:38] <steveeJ> it's one of the exceptions
[0:38] <gleam> i am correct that there's no good way to snapshot rgw buckets, right? you can snapshot the objects but then rebuilding the bucket+items is a pain?
[0:38] <gleam> or did that get added
[0:39] <dmick> anyway...those perms look okayish to me, but if they work with rbd, they are ok
[0:40] <dmick> but I'm sure I'm missing something. Client and/or rbd debug options might help too
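For reference, caps like the ones steveeJ pasted are typically created along these lines; the client name and keyring path here follow the conversation, and this is a sketch rather than something verified against that cluster:

```shell
ceph auth get-or-create client.docker \
    mon 'allow r' \
    osd 'allow * pool=docker' \
    -o /etc/ceph/ceph.client.docker.keyring
```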
[0:40] * rendar (~I@host187-180-dynamic.32-79-r.retail.telecomitalia.it) Quit ()
[0:41] <dmick> rbd-fuse -d might give more info (that's a generic FUSE option to run in foreground with debug on)
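Combining dmick's -d suggestion with steveeJ's CEPH_ARGS invocation would look roughly like this (foreground with FUSE debug output; the keyring location is an assumption):

```shell
# keyring for client.docker must be readable by the invoking user,
# e.g. /etc/ceph/ceph.client.docker.keyring; -d keeps rbd-fuse in the
# foreground and turns on FUSE debug output
CEPH_ARGS="--id docker" rbd-fuse -d -p docker /mnt/rbd/docker
```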
[0:41] <steveeJ> dmick: okay, thanks for guiding so far. I'll retry as the other user tomorrow
[0:45] * joef1 (~Adium@2601:9:280:f2e:d858:a165:c3e6:da81) Quit (Quit: Leaving.)
[0:46] * joef (~Adium@2601:9:280:f2e:cbf:768f:5bad:ac6c) has joined #ceph
[0:46] * joef (~Adium@2601:9:280:f2e:cbf:768f:5bad:ac6c) Quit ()
[0:48] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has joined #ceph
[0:48] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[0:48] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has left #ceph
[0:49] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:51] * linjan (~linjan@176.195.196.165) Quit (Ping timeout: 480 seconds)
[0:53] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (Ping timeout: 480 seconds)
[0:53] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[0:56] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[0:56] * ircolle (~Adium@2601:1:a580:145a:89c2:e59d:c643:3a31) Quit (Quit: Leaving.)
[1:03] * sjustwork (~sam@2607:f298:a:607:f118:4266:5b77:8449) Quit (Quit: Leaving.)
[1:03] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[1:05] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[1:06] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:08] * vItalatinho (~vitalatin@187.66.11.2) has joined #ceph
[1:08] <vItalatinho> hello
[1:09] * vItalatinho (~vitalatin@187.66.11.2) Quit ()
[1:10] * zhaozhiming (~zhaozhimi@192.200.151.156) has joined #ceph
[1:16] * zhaozhiming (~zhaozhimi@192.200.151.156) Quit (Quit: Computer has gone to sleep.)
[1:17] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[1:17] * oms101 (~oms101@p20030057EA405500C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:17] * BManojlovic (~steki@212.200.65.141) Quit (Ping timeout: 480 seconds)
[1:25] * oms101 (~oms101@p20030057EA44EA00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:32] <infernix> has anyone considered using multipath to address the same rbd mapped volume?
[1:32] <infernix> i go from 3MB 128k random read on 1 rbd volume to 25MB 128k random read with multiple
[1:32] <infernix> that's almost 8 times faster
[1:33] <infernix> and that's just one thread
[1:34] <squisher> infernix, interesting. But that'll only be of advantage when you have a small number of clients compared to the number of nodes
[1:35] <infernix> squisher: i'm specifically trying to improve one node single thread performance
[1:35] <infernix> one rbd0 volume one thread > 3MB/s 128k random read directio
[1:36] <squisher> it probably works well because read locks are not contested
[1:36] <squisher> but yeah, good to know
[1:37] <infernix> 10 rbd volumes 22mb/s. 20 volumes 38mb/s. 30 volumes 53mb/s
[1:37] <infernix> let's go nuts
[1:38] * infernix maps the same volume 100 times
[1:39] <infernix> 100 volumes 93MB/s single thread
[1:39] <infernix> lets up the threadcount
[1:40] <infernix> does not like that
[1:40] * infernix tones it down
[1:42] <infernix> oh wow
[1:43] <infernix> well, multithreaded it's less of an improvement. 1 volume 8 thread= 95mb/s, 10 volume 8 thread = 150mb/s
[1:44] <infernix> rados bench however easily hits 1500mb/s
[1:44] <infernix> i wish rbd would be closer to that :|
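infernix's test can be reproduced with a fio job file along these lines: 128 KB random reads, direct I/O, one job per mapping of the same RBD volume. The device names and counts are placeholders, not taken from his setup:

```ini
# hypothetical fio job approximating the test in the log
[global]
ioengine=libaio
rw=randread
bs=128k
direct=1
runtime=60
time_based

# one job per kernel mapping of the same volume
[map0]
filename=/dev/rbd0
[map1]
filename=/dev/rbd1
[map2]
filename=/dev/rbd2
```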
[1:46] * nwat (~textual@eduroam-238-17.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[1:47] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[1:50] * _nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[1:52] * yanzheng (~zhyan@171.221.139.239) Quit ()
[1:56] * jtaguinerd (~jtaguiner@203.215.116.66) Quit (Quit: Leaving.)
[1:58] * jtaguinerd (~jtaguiner@203.215.116.66) has joined #ceph
[2:00] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) has joined #ceph
[2:00] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:05] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has joined #ceph
[2:07] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:07] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:10] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[2:18] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:19] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[2:21] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[2:22] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[2:24] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:24] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[2:29] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) has joined #ceph
[2:30] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:31] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[2:31] * sputnik13 (~sputnik13@207.8.121.241) Quit ()
[2:32] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:37] * _nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[2:37] * marrusl (~mark@cpe-24-193-20-3.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[2:38] <Nats> any thoughts on how many osd's one needs for the hashing algorithm to use up disk space evenly? we're at 30 and still seeing significant variation
[2:39] * joef (~Adium@2620:79:0:8207:31bc:c6c1:1381:23b0) has joined #ceph
[2:40] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[2:42] * DP (~oftc-webi@zccy01cs105.houston.hp.com) has joined #ceph
[2:46] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[2:47] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[2:47] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[2:48] * squisher (~david@2601:0:580:8be:811e:a7b3:1bf0:1dd7) Quit (Quit: Leaving)
[2:54] <dmick> Nats: have you changed the number of PGs per pool? And how big are your objects?
[2:54] <Nats> rbd 4mb, 4096 pg's
[2:55] <Nats> emperor
[2:56] <Nats> overall at 75% raw; i've got osd's as low as 63% and as high as 90%
[2:57] * joef (~Adium@2620:79:0:8207:31bc:c6c1:1381:23b0) has left #ceph
[2:57] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[2:58] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[2:58] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[2:59] <dmick> seems a bit odd
[2:59] <dmick> I know there was a relatively-late tunable that affected fair placement
[3:00] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[3:00] <Nats> yeah. it was deployed at start of year, been on emperor all along so i thought i would be relatively safe for tunables
[3:03] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[3:04] * squisher (~squisher@2601:0:580:8be:3285:a9ff:fe9c:4b04) has joined #ceph
[3:04] <dmick> I can't remember for certain but I think it's chooseleaf_vary_r
[3:04] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[3:05] <dmick> anyway, I'm just grasping. it doesn't seem like that's all that unusual for 30 OSDs, but I understand the desire to smooth that out
[3:05] <Nats> i have tried the temporary reweight route
[3:06] <Nats> but at least for me, i just end up with stuck pg's
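Nats' question about how many OSDs it takes for hashing to even out can be ballparked with a toy model. This sketch treats CRUSH as uniform random placement (which it is not, exactly), plugging in the numbers from the conversation (4096 PGs, 3 replicas, 30 OSDs):

```python
import random

def simulate(pgs=4096, replicas=3, osds=30, seed=1):
    """Place each PG's replicas on distinct random OSDs and report the
    lightest and heaviest OSD as a fraction of the mean load.  A crude
    stand-in for CRUSH, but enough to show that 30 OSDs still leave
    noticeable imbalance from hashing alone."""
    rng = random.Random(seed)
    counts = [0] * osds
    for _ in range(pgs):
        for osd in rng.sample(range(osds), replicas):
            counts[osd] += 1
    mean = sum(counts) / osds
    return min(counts) / mean, max(counts) / mean

lo, hi = simulate()
print(f"lightest OSD at {lo:.2f}x mean, heaviest at {hi:.2f}x mean")
```

With roughly 410 PG replicas per OSD, the standard deviation is about 5% of the mean, so a spread of plus-or-minus 10% across 30 OSDs is expected even before any CRUSH-specific skew.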
[3:07] * alram_ (~alram@38.122.20.226) Quit (Quit: leaving)
[3:07] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:11] * stj (~stj@tully.csail.mit.edu) Quit (Ping timeout: 480 seconds)
[3:16] * sz0 (~sz0@94.55.197.185) Quit ()
[3:17] <Nats> chooseleaf_vary_r appears to be a firefly addition
[3:18] <Nats> so an upgrade is somewhere in my future
[3:18] * joshd1 (~jdurgin@2607:f298:a:607:b195:4f8f:73aa:fcb7) Quit (Quit: Leaving.)
[3:19] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[3:19] * ChanServ sets mode +o elder
[3:20] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit ()
[3:23] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[3:27] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[3:28] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[3:28] * ChanServ sets mode +o elder
[3:30] <JayJ> Can anyone tell me where to download ceph-extras/debian/ packages for Ubuntu Trusty? Repository is obviously missing trusty packages and I see there is a bug open with no updates here: http://tracker.ceph.com/issues/8303
[3:45] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[3:46] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[3:47] * stj (~stj@2001:470:8b2d:bb8:21d:9ff:fe29:8a6a) has joined #ceph
[3:52] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[3:52] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[4:02] * haomaiwang (~haomaiwan@182.48.117.114) Quit (Remote host closed the connection)
[4:02] * DP (~oftc-webi@zccy01cs105.houston.hp.com) Quit (Remote host closed the connection)
[4:03] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[4:04] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:06] * zhaochao (~zhaochao@111.204.252.1) has joined #ceph
[4:07] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[4:09] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[4:10] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:10] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:16] * Concubidated (~Adium@66-87-74-89.pools.spcsdns.net) has joined #ceph
[4:18] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Ping timeout: 480 seconds)
[4:19] * haomaiwa_ (~haomaiwan@182.48.117.114) has joined #ceph
[4:21] * apolloJess (~Thunderbi@202.60.8.252) has joined #ceph
[4:22] * drankis (~drankis__@89.111.13.198) Quit (Read error: Connection reset by peer)
[4:23] * flaxy (~afx@78.130.171.69) Quit (Quit: WeeChat 0.4.2)
[4:25] * dneary (~dneary@96.237.180.105) has joined #ceph
[4:25] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Operation timed out)
[4:29] * flaxy (~afx@dark.deflax.net) has joined #ceph
[4:29] * joshd (~jdurgin@2602:306:c5db:310:8105:4585:9d92:c2) Quit (Quit: Leaving.)
[4:30] * flaxy (~afx@dark.deflax.net) Quit ()
[4:33] * kuaizi1981 (~kuaizi198@218.94.128.51) Quit ()
[4:33] * zhaozhiming (~zhaozhimi@192.200.151.170) has joined #ceph
[4:34] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[4:36] * bkopilov (~bkopilov@213.57.16.164) Quit (Read error: Operation timed out)
[4:37] * haomaiwa_ (~haomaiwan@182.48.117.114) Quit (Remote host closed the connection)
[4:37] * haomaiwang (~haomaiwan@124.248.205.4) has joined #ceph
[4:38] * zhaozhiming (~zhaozhimi@192.200.151.170) Quit ()
[4:39] * flaxy (~afx@dark.deflax.net) has joined #ceph
[4:46] * zhaozhiming (~zhaozhimi@114.80.243.211) has joined #ceph
[4:46] * zhaozhiming (~zhaozhimi@114.80.243.211) Quit (Remote host closed the connection)
[4:46] * zhaozhiming (~zhaozhimi@192.200.151.171) has joined #ceph
[4:46] * zhaozhiming (~zhaozhimi@192.200.151.171) Quit (Remote host closed the connection)
[4:46] * zhaozhiming (~zhaozhimi@192.200.151.171) has joined #ceph
[4:48] <zhaozhiming> hi all, I get an S3 error code 500 when putting an object. How can I trace the reason for the error?
[4:48] <zhaozhiming> I use Java code to invoke the S3 API.
[4:49] <zhaozhiming> Creating the bucket works, but putting an object fails.
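A 500 from the S3 front end usually means the real error is in the radosgw log on the gateway host. Raising rgw debugging is the standard first step; the section name and log path below are typical defaults, not taken from zhaozhiming's deployment:

```ini
# ceph.conf on the radosgw host -- retry the PUT after restarting the
# gateway and read the log for the underlying failure
[client.radosgw.gateway]
    debug rgw = 20
    debug ms = 1
    log file = /var/log/ceph/radosgw.log
```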
[5:14] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[5:19] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[5:24] * Concubidated (~Adium@66-87-74-89.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[5:26] * Concubidated (~Adium@66-87-74-89.pools.spcsdns.net) has joined #ceph
[5:26] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) Quit ()
[5:26] * Vacuum_ (~vovo@88.130.202.88) has joined #ceph
[5:29] * lucas1 (~Thunderbi@222.247.57.50) Quit (Quit: lucas1)
[5:29] * zhaozhiming (~zhaozhimi@192.200.151.171) Quit (Quit: Computer has gone to sleep.)
[5:31] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[5:33] * Vacuum (~vovo@i59F791D3.versanet.de) Quit (Ping timeout: 480 seconds)
[5:36] * dneary (~dneary@96.237.180.105) Quit (Ping timeout: 480 seconds)
[5:39] * bkunal (~bkunal@121.244.87.115) has joined #ceph
[5:41] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[5:41] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[5:43] * yanzheng (~zhyan@171.221.139.239) Quit ()
[5:54] * MACscr (~Adium@c-98-214-170-53.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[5:55] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has joined #ceph
[5:55] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) Quit ()
[5:56] * Concubidated (~Adium@66-87-74-89.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[6:00] * bandrus (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[6:01] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[6:01] * zhaozhiming (~zhaozhimi@114.80.243.211) has joined #ceph
[6:13] * zhaozhiming (~zhaozhimi@114.80.243.211) Quit (Quit: Computer has gone to sleep.)
[6:14] * Concubidated (~Adium@66.87.74.89) has joined #ceph
[6:17] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[6:20] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:1c4f:3f6f:a441:b7f0) Quit (Quit: Leaving.)
[6:25] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:34] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[6:37] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[6:40] * ashishchandra (~ashish@49.32.0.151) has joined #ceph
[6:43] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[6:46] * apolloJess (~Thunderbi@202.60.8.252) Quit (Read error: Connection reset by peer)
[6:46] * apolloJess (~Thunderbi@202.60.8.252) has joined #ceph
[6:52] * Nats (~natscogs@114.31.195.238) Quit (Read error: Connection reset by peer)
[6:52] * Nats (~natscogs@114.31.195.238) has joined #ceph
[6:53] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[6:55] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[7:01] <dmick> JayJ: not sure we have them available anywhere, but some of them are probably built into trusty; which one(s) specifically are you looking for?
[7:06] * rdas (~rdas@121.244.87.115) has joined #ceph
[7:08] * carlosdanger (~Carlos@173.227.47.162) has joined #ceph
[7:09] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:12] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[7:13] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[7:13] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[7:13] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[7:14] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[7:14] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[7:18] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:21] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[7:21] * ricardo (~ricardo@2404:130:0:1000:549:f999:bfed:1c69) Quit (Quit: Leaving)
[7:27] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[7:29] * carlosdanger_ (~Carlos@pool-108-40-195-27.snloca.fios.verizon.net) has joined #ceph
[7:30] * adamcrume (~quassel@2601:9:6680:47:f41c:255b:d68a:90cb) Quit (Remote host closed the connection)
[7:30] * lucas1 (~Thunderbi@222.247.57.50) Quit (Quit: lucas1)
[7:30] * chuffpdx_ (~chuffpdx@208.186.186.51) has joined #ceph
[7:30] * chuffpdx (~chuffpdx@208.186.186.51) Quit (Read error: Connection reset by peer)
[7:32] * carlosdanger (~Carlos@173.227.47.162) Quit (Read error: Operation timed out)
[7:32] * carlosdanger_ is now known as carlosdanger
[7:35] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[7:39] * zhaozhiming (~zhaozhimi@114.80.243.211) has joined #ceph
[7:39] * zhaozhiming (~zhaozhimi@114.80.243.211) Quit (Remote host closed the connection)
[7:39] * zhaozhiming (~zhaozhimi@192.200.151.154) has joined #ceph
[7:39] * zhaozhiming (~zhaozhimi@192.200.151.154) Quit (Remote host closed the connection)
[7:39] * zhaozhiming (~zhaozhimi@192.200.151.154) has joined #ceph
[7:41] * zhaozhiming_ (~zhaozhimi@114.80.243.211) has joined #ceph
[7:44] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:46] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[7:47] * zhaozhiming (~zhaozhimi@192.200.151.154) Quit (Ping timeout: 480 seconds)
[7:48] * zhaozhiming_ (~zhaozhimi@114.80.243.211) Quit (Quit: Lingo - http://www.lingoirc.com)
[7:48] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[7:52] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[8:01] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[8:05] * apolloJess (~Thunderbi@202.60.8.252) Quit (Remote host closed the connection)
[8:05] * apolloJess (~Thunderbi@202.60.8.252) has joined #ceph
[8:07] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[8:11] * peedu (~peedu@170.91.235.80.dyn.estpak.ee) has joined #ceph
[8:13] * peedu_ (~peedu@185.46.20.35) has joined #ceph
[8:19] * peedu (~peedu@170.91.235.80.dyn.estpak.ee) Quit (Ping timeout: 480 seconds)
[8:21] * sleinen (~Adium@2001:620:1000:3:7ed1:c3ff:fedc:3223) has joined #ceph
[8:23] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[8:25] * carlosdanger (~Carlos@pool-108-40-195-27.snloca.fios.verizon.net) Quit (Quit: carlosdanger)
[8:27] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:28] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[8:29] * haomaiwa_ (~haomaiwan@182.48.117.114) has joined #ceph
[8:33] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[8:34] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[8:35] * schmee_ (~quassel@phobos.isoho.st) Quit (Ping timeout: 480 seconds)
[8:35] * haomaiwang (~haomaiwan@124.248.205.4) Quit (Ping timeout: 480 seconds)
[8:44] * drankis (~drankis__@89.111.13.198) has joined #ceph
[8:44] * Sysadmin88 (~IceChat77@176.250.164.108) Quit (Quit: Light travels faster then sound, which is why some people appear bright, until you hear them speak)
[8:51] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (Quit: shimo)
[8:52] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[8:57] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) Quit (Read error: Connection reset by peer)
[8:57] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) has joined #ceph
[9:03] * dgurtner (~dgurtner@172-228.197-178.cust.bluewin.ch) has joined #ceph
[9:03] * thomnico (~thomnico@2a01:e35:8b41:120:9456:1a39:bdaf:324a) has joined #ceph
[9:04] * Jakey (uid1475@id-1475.uxbridge.irccloud.com) has joined #ceph
[9:05] * ade (~abradshaw@193.202.255.218) has joined #ceph
[9:05] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[9:09] * garphy`aw is now known as garphy
[9:13] * analbeard (~shw@support.memset.com) has joined #ceph
[9:15] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:20] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:21] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[9:22] * MACscr (~Adium@c-98-214-170-53.hsd1.il.comcast.net) has joined #ceph
[9:25] * sleinen (~Adium@2001:620:1000:3:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[9:36] * brother| is now known as brother
[9:39] * michalefty (~micha@p20030071CE0462979DF2CAA74F39527E.dip0.t-ipconnect.de) has joined #ceph
[9:48] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) has joined #ceph
[9:50] * jpierre03 (~jpierre03@voyage.prunetwork.fr) Quit (Remote host closed the connection)
[9:50] * jpierre03_ (~jpierre03@voyage.prunetwork.fr) Quit (Read error: Connection reset by peer)
[9:52] * jpierre03 (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[9:52] * lucas1 (~Thunderbi@222.247.57.50) Quit (Remote host closed the connection)
[9:53] * jpierre03_ (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[9:55] * mattronix (~matthew@mail.mattronix.nl) has joined #ceph
[9:56] <mattronix> Hello Everyone,
[9:56] <mattronix> Playing around with ceph and it's awesome :)
[9:58] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[9:58] * ChanServ sets mode +v andreask
[9:59] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) Quit (Read error: Connection reset by peer)
[9:59] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) has joined #ceph
[10:02] * apolloJess (~Thunderbi@202.60.8.252) Quit (Ping timeout: 480 seconds)
[10:02] * peedu (~peedu@170.91.235.80.dyn.estpak.ee) has joined #ceph
[10:03] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[10:04] * dgurtner (~dgurtner@172-228.197-178.cust.bluewin.ch) Quit (Ping timeout: 480 seconds)
[10:04] * peedu_ (~peedu@185.46.20.35) Quit (Read error: Operation timed out)
[10:06] * peedu_ (~peedu@185.46.20.35) has joined #ceph
[10:08] * zhaozhiming (~zhaozhimi@192.200.151.152) has joined #ceph
[10:09] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:09] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[10:10] * dgurtner (~dgurtner@217.192.177.51) has joined #ceph
[10:10] * zhaozhiming_ (~zhaozhimi@114.80.243.211) has joined #ceph
[10:10] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[10:11] * michalefty (~micha@p20030071CE0462979DF2CAA74F39527E.dip0.t-ipconnect.de) has left #ceph
[10:12] * rwheeler (~rwheeler@173.48.207.57) Quit (Ping timeout: 480 seconds)
[10:12] * peedu (~peedu@170.91.235.80.dyn.estpak.ee) Quit (Ping timeout: 480 seconds)
[10:13] * zhaozhiming_ (~zhaozhimi@114.80.243.211) Quit ()
[10:14] * rwheeler (~rwheeler@173.48.207.57) has joined #ceph
[10:17] * zhaozhiming (~zhaozhimi@192.200.151.152) Quit (Ping timeout: 480 seconds)
[10:18] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) has joined #ceph
[10:24] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[10:24] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) Quit (Read error: Connection reset by peer)
[10:26] * blackmen (~Ajit@121.244.87.115) has joined #ceph
[10:28] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[10:28] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) has joined #ceph
[10:29] * jpierre03 (~jpierre03@voyage.prunetwork.fr) Quit (Ping timeout: 480 seconds)
[10:29] * jpierre03_ (~jpierre03@voyage.prunetwork.fr) Quit (Ping timeout: 480 seconds)
[10:32] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:37] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[10:45] * fitzdsl (~rvrignaud@2a04:2500:0:b00:f21f:afff:fe46:48a8) has joined #ceph
[10:47] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[10:48] * jpierre03 (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[10:48] * jpierre03_ (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[10:51] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[10:55] * rendar (~I@host60-177-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[10:57] * darkling (~hrm@00012bd0.user.oftc.net) has joined #ceph
[10:58] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:58] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) has joined #ceph
[10:58] * jtaguinerd1 (~jtaguiner@112.205.19.199) has joined #ceph
[10:59] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) Quit ()
[11:02] * chuffpdx__ (~chuffpdx@208.186.186.51) has joined #ceph
[11:02] * jtaguinerd (~jtaguiner@203.215.116.66) Quit (Ping timeout: 480 seconds)
[11:04] * chuffpdx_ (~chuffpdx@208.186.186.51) Quit (Read error: Connection reset by peer)
[11:05] * linjan (~linjan@176.195.196.165) has joined #ceph
[11:05] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[11:14] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[11:14] <mgarcesMZ> hi guys
[11:15] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[11:15] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Read error: Connection reset by peer)
[11:16] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[11:18] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[11:19] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[11:21] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[11:24] * RameshN (~rnachimu@121.244.87.117) Quit (Read error: Operation timed out)
[11:25] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[11:27] * cooldharma06 (~chatzilla@218.248.24.19) has joined #ceph
[11:28] <cooldharma06> hi all..:)
[11:39] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[11:42] <cooldharma06> is any guide available for setting up ceph across two machines?
[11:46] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[11:47] <ashishchandra> cooldharma06: do you mean you want to install ceph across couple of machines
[11:47] <cooldharma06> yes i want to make osd in two machines
[11:48] <cooldharma06> ashishchandra and i am just confused by things when making the setup. any reference guide or link for making a setup?
[11:48] <ashishchandra> cooldharma06: http://ceph.com/docs/master/start/quick-start-preflight/
[11:49] <ashishchandra> cooldharma06: please refer this doc, its pretty easy and if u are stuck please let me know
[11:49] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[11:50] <cooldharma06> thanks and sure i will ping you if i get stuck..:)
[11:51] <cooldharma06> upto what time you will be in irc?
[11:53] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[11:55] <ashishchandra> upto 7:00 pm IST
[11:57] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[11:59] <cooldharma06> oh ok thanks ..:)
[11:59] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[12:01] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[12:01] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[12:04] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[12:04] <mgarcesMZ> cooldharma06: if you are using just 2 OSDs, don't forget that the "osd pool default size" property defaults to 3, so you will never reach a clean state unless you set "osd pool default size = 2"
[12:04] <mgarcesMZ> am I correct guys?
[12:06] <fghaas> mgarcesMZ: that actually depends on your version; I believe this changed in firefly (previously the default was 2)
[12:07] <mgarcesMZ> oh, Im new to ceph, so firefly defaults to me :)
[12:08] <mgarcesMZ> fghaas: btw, still no reply from Inktank
[12:08] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[12:08] <fghaas> mgarcesMZ: but I did get one. :) I'll be in touch by email shortly
[12:08] <mgarcesMZ> nice
[12:09] * _NiC (~kristian@aeryn.ronningen.no) Quit (Ping timeout: 480 seconds)
[12:10] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Remote host closed the connection)
[12:12] * _NiC (~kristian@aeryn.ronningen.no) has joined #ceph
[12:13] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:14] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[12:17] * lucas1 (~Thunderbi@222.247.57.50) Quit (Remote host closed the connection)
[12:20] * lupu (~lupu@86.107.101.214) has joined #ceph
[12:22] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[12:25] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) has joined #ceph
[12:26] * haomaiwa_ (~haomaiwan@182.48.117.114) Quit (Remote host closed the connection)
[12:28] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[12:28] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:36] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[12:37] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[12:38] * lucas1 (~Thunderbi@222.247.57.50) Quit ()
[12:38] <ashishchandra> fghaas, mgarcesMZ: by default we have osd pool default size =3, if you want to have 2 osds then you need to set the above parameter
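For a two-OSD setup, the setting ashishchandra describes goes in the [global] section of ceph.conf before the pools are created; a hedged example (the min size line is a common companion, not something stated above):

```
[global]
osd pool default size = 2      ; two replicas instead of the default three
osd pool default min size = 1  ; still serve I/O with only one replica up
```

For pools that already exist, the same change can be made at runtime with `ceph osd pool set <pool> size 2`.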
[12:39] <cooldharma06> thanks mgarcesMZ ashishchandra i ll make note of that one..:)
[12:43] * cok (~chk@2a02:2350:18:1012:e9c2:2737:1d35:1ff1) has joined #ceph
[12:47] * rendar (~I@host60-177-dynamic.8-79-r.retail.telecomitalia.it) Quit (Read error: Connection reset by peer)
[12:50] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[12:53] * madkiss (~madkiss@2001:6f8:12c3:f00f:a9c4:52c0:defe:4219) has joined #ceph
[12:57] * rendar (~I@host60-177-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[12:58] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[13:00] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[13:07] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[13:08] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[13:09] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[13:12] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit ()
[13:14] * cok (~chk@2a02:2350:18:1012:e9c2:2737:1d35:1ff1) has left #ceph
[13:17] * marrusl (~mark@cpe-72-229-1-142.nyc.res.rr.com) has joined #ceph
[13:17] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[13:17] * dis is now known as Guest1350
[13:17] * dis (~dis@109.110.67.24) has joined #ceph
[13:19] * Guest1350 (~dis@109.110.66.126) Quit (Ping timeout: 480 seconds)
[13:22] * marrusl (~mark@cpe-72-229-1-142.nyc.res.rr.com) Quit (Remote host closed the connection)
[13:23] * marrusl (~mark@2604:2000:60e3:8900:c189:aa89:a02d:58b1) has joined #ceph
[13:27] * tab (~oftc-webi@194.249.247.164) Quit (Quit: Page closed)
[13:29] * KevinPerks (~Adium@2606:a000:80a1:1b00:24f8:39af:f63e:b4d6) has joined #ceph
[13:31] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[13:31] * jtaguinerd1 (~jtaguiner@112.205.19.199) Quit (Quit: Leaving.)
[13:34] <cooldharma06> facing this error when running ceph-deploy new node1 node2 under cephuser -> No handlers could be found for logger "ceph_deploy"
[13:34] <cooldharma06> any suggestions
[13:36] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[13:37] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:41] * true (~antrue@2a02:6b8:0:401:697c:6695:cdd2:4d89) Quit (Quit: Leaving.)
[13:41] * true (~antrue@2a02:6b8:0:401:14f7:cca2:4631:98ac) has joined #ceph
[13:43] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[13:45] * nljmo_ (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (Quit: Textual IRC Client: www.textualapp.com)
[13:46] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[13:48] * isodude (josef@jj.oderland.com) has joined #ceph
[13:50] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:53] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[13:54] <true> hi, guys
[13:55] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[13:56] <true> i need help... again
[13:56] <true> i see strange things
[13:56] <true> /dev/sdb4 2734970332 5160732 2590858052 1% /var/lib/ceph/osd/ceph-0
[13:56] <true> /dev/sdc4 2734970332 39889104 2556129680 2% /var/lib/ceph/osd/ceph-4
[13:56] <true> /dev/sdd 2884154032 44971068 2692653252 2% /var/lib/ceph/osd/ceph-8
[13:56] <true> /dev/sde 2884154032 34706536 2702917784 2% /var/lib/ceph/osd/ceph-12
[13:56] <true> /dev/sdf 2884154032 2539799000 197825320 93% /var/lib/ceph/osd/ceph-16
[13:56] <true> on only one node
[13:57] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[13:58] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[13:56] <true> i found a mistake in the crush map... there is an item osd.16 with weight -0.000, how can I fix it?
[13:59] * lupu (~lupu@86.107.101.214) Quit (Quit: Leaving.)
[13:59] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[14:00] * leseb (~leseb@81-64-215-19.rev.numericable.fr) Quit (Quit: ZNC - http://znc.in)
[14:00] * leseb (~leseb@81-64-215-19.rev.numericable.fr) has joined #ceph
[14:00] <mgarcesMZ> guys, newbie question here… when I connect with python to my radosgw and create 15000 objects in a container, why can I only list 10 000?
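The 10 000 cap mgarcesMZ is hitting matches the default per-request limit of Swift-style container listings on radosgw: listings come back in pages, and you pass the last name you saw as `marker` to fetch the next page. A minimal sketch of that loop (the `fetch_page` callable stands in for a real client call such as python-swiftclient's `get_container`; it is an illustration, not radosgw's API):

```python
def list_all_objects(fetch_page, limit=10000):
    """Collect every object name by following marker-based pagination.

    fetch_page(marker, limit) must return at most `limit` names that
    sort strictly after `marker` -- the contract Swift-style listings use.
    """
    names, marker = [], ""
    while True:
        page = fetch_page(marker, limit)
        names.extend(page)
        if len(page) < limit:   # a short page means we reached the end
            return names
        marker = page[-1]       # resume after the last name we saw

# Simulated backend with 15000 objects, paged 10000 at a time:
objects = ["obj-%05d" % i for i in range(15000)]

def fake_fetch(marker, limit):
    after = [n for n in objects if n > marker]
    return after[:limit]

print(len(list_all_objects(fake_fetch)))  # 15000, not 10000
```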
[14:06] * vbellur (~vijay@122.172.172.11) has joined #ceph
[14:06] * D-Spair (~dphillips@cpe-74-130-79-134.swo.res.rr.com) Quit (Quit: ZNC - http://znc.in)
[14:07] * leseb (~leseb@81-64-215-19.rev.numericable.fr) Quit (Quit: ZNC - http://znc.in)
[14:08] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[14:08] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[14:10] * D-Spair (~dphillips@cpe-74-130-79-134.swo.res.rr.com) has joined #ceph
[14:15] * JayJ (~jayj@157.130.21.226) has joined #ceph
[14:16] <danieljh> true: you have to "reweight" your osd
[14:16] <danieljh> there's documentation about the exact command, you'll find it.
[14:17] <true> done, aaand now it's recovering... but i'm afraid about space on misconfigurated osd
[14:19] * jtaguinerd (~jtaguiner@203.215.116.66) has joined #ceph
[14:19] * D-Spair (~dphillips@cpe-74-130-79-134.swo.res.rr.com) Quit (Quit: ZNC - http://znc.in)
[14:19] <danieljh> Your objects should get shuffled around accordingly, if you wait for a bit.
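The command danieljh is pointing at is `ceph osd crush reweight`; a hedged sketch for the osd.16 case above (the weight value is an assumption, by convention CRUSH weights are roughly the disk size in TiB):

```
# give osd.16 a sane CRUSH weight again (~2.7 for a 3 TB disk)
ceph osd crush reweight osd.16 2.7
# then watch the recovery move data back off the overfull disks
ceph -w
```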
[14:20] <true> i know, i like this cluster, it's kinda my toy)
[14:20] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[14:21] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:22] * rdas (~rdas@110.227.46.51) has joined #ceph
[14:26] * madkiss (~madkiss@2001:6f8:12c3:f00f:a9c4:52c0:defe:4219) Quit (Ping timeout: 480 seconds)
[14:28] * JayJ (~jayj@157.130.21.226) Quit (Remote host closed the connection)
[14:28] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Quit: mgarcesMZ)
[14:29] * Concubidated (~Adium@66.87.74.89) Quit (Read error: Connection reset by peer)
[14:29] * JayJ (~jayj@157.130.21.226) has joined #ceph
[14:29] * Concubidated (~Adium@66-87-72-44.pools.spcsdns.net) has joined #ceph
[14:33] <alfredodeza> cooldharma06: that sounds like a bad one
[14:33] <alfredodeza> I am not sure what is causing that but will make a release today that fixes that
[14:38] * rwheeler (~rwheeler@173.48.207.57) Quit (Quit: Leaving)
[14:40] * markbby (~Adium@168.94.245.4) has joined #ceph
[14:43] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[14:45] * _nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[14:51] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[14:51] * michalefty (~micha@p20030071CE7925479DF2CAA74F39527E.dip0.t-ipconnect.de) has joined #ceph
[14:54] <alfredodeza> cooldharma06: ping
[14:55] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[14:55] * _nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[14:57] * bkunal (~bkunal@121.244.87.115) Quit (Ping timeout: 480 seconds)
[14:59] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:59] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has left #ceph
[15:01] * cok (~chk@2a02:2350:18:1012:4999:7264:cd26:568d) has joined #ceph
[15:01] * Concubidated1 (~Adium@66-87-75-178.pools.spcsdns.net) has joined #ceph
[15:03] * jtaguinerd1 (~jtaguiner@203.215.116.66) has joined #ceph
[15:03] * jtaguinerd (~jtaguiner@203.215.116.66) Quit (Read error: Connection reset by peer)
[15:05] * Concubidated2 (~Adium@66-87-75-178.pools.spcsdns.net) has joined #ceph
[15:05] * Concubidated1 (~Adium@66-87-75-178.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[15:07] * Concubidated (~Adium@66-87-72-44.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[15:08] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:13] * fitzdsl (~rvrignaud@2a04:2500:0:b00:f21f:afff:fe46:48a8) has left #ceph
[15:15] * Concubidated2 (~Adium@66-87-75-178.pools.spcsdns.net) Quit (Quit: Leaving.)
[15:16] * _nitti (~nitti@162.222.47.218) has joined #ceph
[15:18] * b0e1 (~aledermue@213.95.25.82) has joined #ceph
[15:18] * debian112 (~bcolbert@c-24-99-94-44.hsd1.ga.comcast.net) has joined #ceph
[15:18] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[15:20] * zhaochao (~zhaochao@111.204.252.1) has left #ceph
[15:20] * ksingh (~Adium@2001:708:10:10:8c21:7c67:3b0b:40c5) has joined #ceph
[15:21] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[15:22] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[15:23] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[15:24] * lcavassa (~lcavassa@fwsanpaolo.upprovider.it) has joined #ceph
[15:32] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[15:32] * ashishchandra (~ashish@49.32.0.151) Quit (Quit: Leaving)
[15:33] <ganders> does anyone have any experience with micron p420m cards? for journals
[15:34] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[15:37] * rdas (~rdas@110.227.46.51) Quit (Quit: Leaving)
[15:38] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[15:38] * dis is now known as Guest1360
[15:38] * dis (~dis@109.110.67.194) has joined #ceph
[15:40] * Guest1360 (~dis@109.110.67.24) Quit (Ping timeout: 480 seconds)
[15:45] * ade (~abradshaw@193.202.255.218) Quit (Ping timeout: 480 seconds)
[15:45] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) has joined #ceph
[15:49] * bjornar (~bjornar@ns3.uniweb.no) Quit (Remote host closed the connection)
[15:54] * peedu (~peedu@170.91.235.80.dyn.estpak.ee) has joined #ceph
[15:54] * hyperbaba (~hyperbaba@private.neobee.net) Quit (Ping timeout: 480 seconds)
[15:55] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[15:57] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:58] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[16:00] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:01] * peedu_ (~peedu@185.46.20.35) Quit (Ping timeout: 480 seconds)
[16:01] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Quit: Leaving.)
[16:03] * vbellur (~vijay@122.172.172.11) Quit (Quit: Leaving.)
[16:03] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[16:04] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Quit: Leaving.)
[16:05] * RameshN (~rnachimu@121.244.87.117) Quit (Read error: Operation timed out)
[16:06] * JayJ (~jayj@157.130.21.226) has joined #ceph
[16:06] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[16:08] * peedu (~peedu@170.91.235.80.dyn.estpak.ee) Quit (Ping timeout: 480 seconds)
[16:09] * markbby (~Adium@168.94.245.4) has joined #ceph
[16:11] * true (~antrue@2a02:6b8:0:401:14f7:cca2:4631:98ac) Quit (Read error: Connection timed out)
[16:11] * b0e1 (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[16:12] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[16:12] * markbby (~Adium@168.94.245.4) has joined #ceph
[16:12] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[16:12] * true (~antrue@2a02:6b8:0:401:14f7:cca2:4631:98ac) has joined #ceph
[16:21] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[16:25] * dmsimard_away is now known as dmsimard
[16:27] * jtang_ (~jtang@80.111.83.231) Quit (Ping timeout: 480 seconds)
[16:29] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[16:29] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[16:29] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[16:29] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[16:33] * zerick (~eocrospom@190.118.30.195) has joined #ceph
[16:35] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[16:38] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[16:38] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[16:39] * markbby (~Adium@168.94.245.4) has joined #ceph
[16:42] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[16:45] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[16:46] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) Quit (Quit: ZNC - http://znc.in)
[16:48] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) has joined #ceph
[16:49] * shang (~ShangWu@111-82-144-3.EMOME-IP.hinet.net) has joined #ceph
[16:50] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[16:53] * jtang_ (~jtang@80.111.83.231) Quit (Read error: Connection reset by peer)
[16:58] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[17:02] * vbellur (~vijay@122.172.172.11) has joined #ceph
[17:05] * fitzdsl_ (~Romain@dedibox.fitzdsl.net) has joined #ceph
[17:07] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:08] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:12] * xarses (~andreww@12.164.168.117) Quit (Quit: Leaving)
[17:12] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:13] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[17:14] * jtaguinerd1 (~jtaguiner@203.215.116.66) Quit (Quit: Leaving.)
[17:14] * linuxkidd (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:14] * b0e (~aledermue@x2f2a46b.dyn.telefonica.de) has joined #ceph
[17:15] * michalefty (~micha@p20030071CE7925479DF2CAA74F39527E.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[17:16] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) has joined #ceph
[17:17] * shang (~ShangWu@111-82-144-3.EMOME-IP.hinet.net) Quit (Quit: Ex-Chat)
[17:22] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[17:22] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:22] * linjan (~linjan@176.195.196.165) Quit (Ping timeout: 480 seconds)
[17:24] * cok (~chk@2a02:2350:18:1012:4999:7264:cd26:568d) has left #ceph
[17:31] * ircolle (~Adium@2601:1:a580:145a:4ad:3c8b:b00e:acf4) has joined #ceph
[17:34] * RameshN (~rnachimu@101.222.233.98) has joined #ceph
[17:35] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[17:37] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[17:38] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Quit: mgarcesMZ)
[17:38] * vbellur (~vijay@122.172.172.11) Quit (Quit: Leaving.)
[17:38] * linuxkidd (~linuxkidd@rrcs-70-62-120-189.midsouth.biz.rr.com) has joined #ceph
[17:47] * mathias (~mathias@pd95b4613.dip0.t-ipconnect.de) has joined #ceph
[17:47] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:48] <mathias> what is wrong with the command "ceph osd crush add 2 osd.2 1.0 pool=default host=osd03"? (see http://pastebin.com/TEfHiXeN for output)
[17:51] * neurodrone (~neurodron@static-108-29-37-206.nycmny.fios.verizon.net) has joined #ceph
[17:52] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:53] * linuxkidd (~linuxkidd@rrcs-70-62-120-189.midsouth.biz.rr.com) Quit (Ping timeout: 480 seconds)
[17:53] * bkunal (~bkunal@113.193.138.179) has joined #ceph
[17:53] * RameshN (~rnachimu@101.222.233.98) Quit (Quit: Quit)
[17:57] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:59] * andrew__ (~oftc-webi@32.97.110.56) has joined #ceph
[17:59] <lalatenduM> scuttlemonkey, fedora guys already building ceph for el6 and el7 , so we just need to take the srpms from there and build it for CentOS storage sig
[17:59] <andrew__> hello, has anyone manually attached osd nodes to the cluster?
[18:00] <lalatenduM> scuttlemonkey, e.g. http://koji.fedoraproject.org/koji/buildinfo?buildID=569913 and http://koji.fedoraproject.org/koji/buildinfo?buildID=569912
[18:00] <danieljh> mathias: hmm it probably should be id or name but not both
[18:01] <danieljh> what does ceph --version say?
[18:02] <andrew__> my osd is always down, does anyone know why?
[18:03] * linuxkidd (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) has joined #ceph
[18:04] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:07] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[18:09] <mathias> danieljh: naa not working either ...
[18:12] * linjan (~linjan@176.195.196.165) has joined #ceph
[18:13] <mathias> ceph osd crush add osd.2 1.0 pool=default host=osd03
[18:13] <mathias> I think I am following "osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]" though
[18:14] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) Quit (Quit: ZNC - http://znc.in)
[18:14] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) has joined #ceph
[18:17] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:18] * lcavassa (~lcavassa@fwsanpaolo.upprovider.it) Quit (Quit: Leaving)
[18:18] * sputnik13 (~sputnik13@client64-42.sdsc.edu) has joined #ceph
[18:20] * bandrus (~Adium@216.57.72.205) has joined #ceph
[18:21] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[18:23] * sputnik13 (~sputnik13@client64-42.sdsc.edu) Quit ()
[18:23] * dgurtner (~dgurtner@217.192.177.51) Quit (Ping timeout: 480 seconds)
[18:23] * b0e (~aledermue@x2f2a46b.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[18:24] * ksingh (~Adium@2001:708:10:10:8c21:7c67:3b0b:40c5) has left #ceph
[18:32] * branto (~branto@nat-pool-brq-t.redhat.com) has left #ceph
[18:32] <mathias> how do you typically handle distribution of ceph.conf in a big cluster? Manual scp seems not to scale very well ^^
[18:34] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:35] <steveeJ> mathias: you could use ansible with the synchronization module
[18:38] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[18:38] <mathias> steveeJ: ok so no builtin way to do it. distributing the ceph.conf is only useful when an osd or other service has to restart, right? All runtime changes take effect right away!?
[18:39] <steveeJ> mathias: if you change something using injectargs, the settings will be back to your configuration on daemon restart
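steveeJ's point in concrete terms: injectargs changes live daemons only, so anything that should survive a restart still has to land in ceph.conf. A hedged example of the runtime side (the option name is illustrative):

```
# change a setting on every OSD at runtime; reverts to ceph.conf on restart
ceph tell osd.* injectargs '--osd_max_backfills 1'
```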
[18:39] * vbellur (~vijay@122.167.229.75) has joined #ceph
[18:39] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[18:39] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:40] * darkling (~hrm@00012bd0.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:40] <mathias> thx
[18:41] * Sysadmin88 (~IceChat77@176.250.164.108) has joined #ceph
[18:41] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:42] <mathias> I dont manage to add a new OSD to CRUSH for some reason: I get "invalid argument" now for "ceph osd crush add osd.2 1.0 pool=default host=osd03"
[18:42] <mathias> not sure whether this is a syntax problem or something else ...
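(A common cause of "invalid argument" here, confirmed later in this log, is that the host bucket named in the command does not exist yet. A hedged sketch of creating it first, reusing mathias's names; older releases spelled the root bucket type `pool=` where newer ones use `root=`:)

```
ceph osd crush add-bucket osd03 host       # create the host bucket
ceph osd crush move osd03 root=default     # hang it under the default root
ceph osd crush add osd.2 1.0 host=osd03    # now host=osd03 can be resolved
```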
[18:43] * sputnik13 (~sputnik13@client64-42.sdsc.edu) has joined #ceph
[18:51] * dis (~dis@109.110.67.194) Quit (Ping timeout: 480 seconds)
[18:52] * sputnik13 (~sputnik13@client64-42.sdsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[18:52] * garphy is now known as garphy`aw
[18:53] * sputnik13 (~sputnik13@client64-42.sdsc.edu) has joined #ceph
[18:56] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[18:58] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[18:59] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[19:00] * bkopilov (~bkopilov@213.57.18.113) has joined #ceph
[19:02] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit ()
[19:03] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:04] * joshd (~jdurgin@2607:f298:a:607:c428:e9c2:ea79:29e6) has joined #ceph
[19:04] * dgurtner (~dgurtner@249-236.197-178.cust.bluewin.ch) has joined #ceph
[19:04] * markbby1 (~Adium@168.94.245.2) has joined #ceph
[19:07] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[19:08] * alram (~alram@38.122.20.226) has joined #ceph
[19:08] * markbby (~Adium@168.94.245.4) has joined #ceph
[19:09] * markbby1 (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[19:10] <mattronix> hi everyone, i set up a ceph cluster for testing: 3 nodes, a single osd, and i am noticing 3mb per second throughput which is extremely low
[19:10] <mattronix> i am wondering what is the best way to troubleshoot this
[19:11] <mattronix> or what best practices am i breaking XD
[19:11] * sjusthm (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) has joined #ceph
[19:11] * Nacer (~Nacer@37.161.199.220) has joined #ceph
[19:12] * hedin (~hedin@81.25.179.168) has joined #ceph
[19:12] <andrew__> mattronix how did you set up your osd?
[19:12] <hedin> Hi
[19:12] <andrew__> i am having trouble
[19:12] <mattronix> its a very basic osd setup
[19:13] <andrew__> did you use ceph deploy or manual?
[19:13] <mattronix> its 1 block device of 100gb mounted on /data
[19:13] <mattronix> and then i did a ceph-deploy prepare node:/data
[19:13] <mattronix> and then activate
[19:13] <mattronix> i used ceph-deploy to do everything
[19:13] <andrew__> i am trying to get it to work manually without ceph-deploy
[19:13] <mattronix> first time using ceph
[19:13] <mattronix> ooooo
[19:13] <andrew__> but i am having trouble
[19:13] <mattronix> whats your issue?
[19:13] * blackmen (~Ajit@121.244.87.115) Quit (Quit: Leaving)
[19:14] <andrew__> well.. sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}
[19:14] <andrew__> the -o user_xattr did not work for me
[19:14] <andrew__> i removed that part of the argument, i am not sure if that matters
[19:14] <mattronix> ooo
[19:14] <mattronix> i spcifed that in the osd config file
[19:14] <andrew__> and i dont understand ceph osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
[19:15] <andrew__> that command does not execute
[19:15] <mattronix> ooo
[19:15] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[19:16] <mattronix> and are you also having low perf issues
[19:16] <mattronix> whats your setup?
[19:16] <andrew__> i set up 3 monitors
[19:16] <hedin> I have been trying to get in contact with inktank for some time, but they _never_ answer the e-mails and they don't return calls... Do you guys know how to communicate with inktank?
[19:16] <andrew__> they appear to be working
[19:16] <andrew__> but when i run the command: ceph osd tree it shows : "0 0 osd.0 down 0"
[19:16] <andrew__> i am doing 3 mon, 3 osd
[19:17] <mattronix> oooo
[19:17] <mattronix> is the deamon running?
[19:17] <andrew__> how do you check?
[19:17] <mattronix> which distro?
[19:17] <andrew__> rhel6
[19:17] <andrew__> firefly
[19:17] <mattronix> hrrm
[19:17] <mattronix> i am using debian firefly
[19:17] <mattronix> for me its apart of a of a service "ceph"
[19:18] <mattronix> for manual you specify it in the config and then start the ceph service
[19:18] <mathias> andrew__: about that command ceph osd crush add: did you create the bucket for the host? I had that problem only 30min ago and then went the ceph-disk way. Later I noticed I forgot the bucket.
[19:18] * Nacer (~Nacer@37.161.199.220) Quit (Read error: Operation timed out)
[19:19] <andrew__> i did not create the bucket
[19:19] <andrew__> i will try that
[19:19] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[19:19] <mattronix> bucket is for the rados gateway right?
[19:19] <mathias> naa its for crush map
[19:19] <mattronix> ok
[19:20] <mattronix> mathias: any tips on perf issues?
[19:20] <steveeJ> does anyone have a systemd service file to control ceph daemons?
[19:21] * sputnik13 (~sputnik13@client64-42.sdsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:22] * adamcrume (~quassel@2601:9:6680:47:a840:2967:7e24:9028) has joined #ceph
[19:22] * mathias_ (~mathias@pd95b4613.dip0.t-ipconnect.de) has joined #ceph
[19:23] <mathias_> mattronix: try some performance tests without ceph: dd to the data disk, check the journal device performance, too, try copying stuff over the network and see what happens
[19:23] <mathias_> at least like this we can say its hardware or ceph related
[19:23] <mattronix> fair enough
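mathias_'s baseline suggestion, sketched as commands (paths, sizes, and the port are examples, not anything from the cluster above):

```shell
# 1) raw disk write throughput, flushed to media so the page cache can't lie
dd if=/dev/zero of=/tmp/ceph-ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ceph-ddtest

# 2) raw network throughput between two nodes, e.g. with netcat:
#    on the receiver:  nc -l 5000 > /dev/null
#    on the sender:    dd if=/dev/zero bs=1M count=256 | nc <receiver-ip> 5000
```

If either number is close to the 3 MB/s seen through Ceph, the bottleneck is hardware or network rather than Ceph itself.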
[19:23] * sputnik13 (~sputnik13@client64-42.sdsc.edu) has joined #ceph
[19:25] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[19:25] <mattronix> doing a dd to a file on that local drive
[19:25] <mattronix> that i can compare to my over the net dd
[19:26] <mathias_> its just an idea - I am as new to ceph as you are ^^
[19:27] * mathias (~mathias@pd95b4613.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[19:30] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[19:31] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[19:31] * barnim (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[19:31] * dgurtner (~dgurtner@249-236.197-178.cust.bluewin.ch) Quit (Read error: Connection reset by peer)
[19:33] <mattronix> someting tells me its the disk
[19:33] <mattronix> 8mb ps :P
[19:35] <runfromnowhere> Anyone know anything about MDS tuning for CephFS?
[19:35] <runfromnowhere> I'm wondering if I have a poorly tuned deployment
[19:35] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Ping timeout: 480 seconds)
[19:37] <devicenull> uh oh
[19:37] <devicenull> -106/7634939 objects degraded (-0.001%)
[19:37] <devicenull> I seem to have done something bad
[19:37] <andrew__> mathias: I added a bucket on the host: "ceph osd crush add-bucket testbucket rackt", but when i try doing "ceph osd crush add osd.0 1.00 testbucket" it fails. do you know what is wrong?
[19:38] * Nacer (~Nacer@2001:41d0:fe82:7200:9539:4e06:ccbd:a6e8) has joined #ceph
[19:39] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Read error: Connection reset by peer)
[19:39] * mathias_ (~mathias@pd95b4613.dip0.t-ipconnect.de) Quit (Quit: Lost terminal)
[19:40] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[19:44] <mattronix> so is everyone here new to ceph
[19:46] <runfromnowhere> Not entirely :)
[19:47] <runfromnowhere> I'd call myself 'mildly experienced' at this point, but only mildly
[19:48] * simulx2 (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[19:48] * simulx (~simulx@vpn.expressionanalysis.com) Quit (Read error: Connection reset by peer)
[19:49] * swat30 (~swat30@204.13.51.130) Quit (Remote host closed the connection)
[19:49] <andrew__> have you run through the manual steps before of installing ceph?
[19:51] * Eco (~Eco@107.43.163.9) has joined #ceph
[19:51] <devicenull> so, I have a pg that was on an OSD that no longer exists
[19:52] <devicenull> I'm not sure how to recover from this... I mainly just want the cluster state to be clean, the data is gone at this point
[19:52] <devicenull> I did 'ceph pg force_create_pg 17.b2', but now I see this
[19:52] <devicenull> pg 17.b2 is stuck inactive since forever, current state creating, last acting []
[19:53] <devicenull> nm, I had another bad osd I forgot to remove
[19:56] * bkunal (~bkunal@113.193.138.179) Quit (Ping timeout: 480 seconds)
[19:57] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[19:57] * Nacer (~Nacer@2001:41d0:fe82:7200:9539:4e06:ccbd:a6e8) Quit (Remote host closed the connection)
[19:58] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[19:59] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) has joined #ceph
[20:06] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[20:09] * mathias (~mathias@tmo-109-117.customers.d1-online.com) has joined #ceph
[20:10] <mathias> is there any reason not to mount cephfs on an OSD, export it with NFSv4 and round-robin DNS clients to the NFS Servers?
[20:11] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Read error: Connection reset by peer)
[20:11] <mathias> s/on an OSD/on the OSDs/
[20:11] <kraken> mathias meant to say: is there any reason not to mount cephfs on the OSDs, export it with NFSv4 and round-robin DNS clients to the NFS Servers?
[20:13] <gregsfortytwo1> you can deadlock if you run a kernel client on an OSD
[20:13] <gregsfortytwo1> nfs has weaker consistency constraints than cephfs does
[20:13] <gregsfortytwo1> that's about it
[20:17] <mathias> so in general it is ok to mount a single cephfs on multiple different machines, right? why would there be a danger of deadlock?
[20:17] <mathias> I am trying to avoid the additional two machines for NFS server just for "proxying" NFS traffic
[20:17] <gregsfortytwo1> it's not about mounting multiple cephfs; it's if you mount a kernel client on an OSD node
[20:18] <gregsfortytwo1> the causes are complicated, but in short: kernel decides it needs to flush memory; it sends ceph data to OSD, OSD needs to allocate memory to handle message, kernel says "no", deadlock
[20:19] <gregsfortytwo1> nfs I think worked around this problem recently, but it took them a decade and is pretty hacky
[20:19] * Sysadmin88 (~IceChat77@176.250.164.108) Quit (Quit: REALITY.SYS Corrupted: Re-boot universe? (Y/N/Q))
[20:19] <gregsfortytwo1> and if you're using userspace clients it's not a problem
[20:20] <mathias> or just make sure there is enough memory I guess
[20:20] <gregsfortytwo1> well, if you want to make that bet… but I wouldn't recommend it
[20:20] * rendar (~I@host60-177-dynamic.8-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:21] <mathias> ok so ceph-fuse to mount the cephfs, but nfs-kernel-server is fine?
[20:22] * rendar (~I@host60-177-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[20:23] * rkdemon (~rkdemon@rrcs-67-79-20-162.sw.biz.rr.com) has joined #ceph
[20:23] * diegows (~diegows@200.68.116.185) has joined #ceph
[20:23] <rkdemon> Hi
[20:23] <rkdemon> I am installing ceph for the first time and running into an issue when trying to create monitors.
[20:23] <gregsfortytwo1> mathias: uh, I think maybe not, actually… but I'm really not sure
[20:24] <rkdemon> [ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[20:24] <rkdemon> ceph is installed .. the IP addresses for the cluster network/public network are set properly.. but I get this error, which I don't understand
[20:24] <rkdemon> any help will be awesome!!
[20:25] <rkdemon> "ceph-deploy mon create-initial" leads to this error on every targeted monitor node
[20:26] <mathias> gregsfortytwo1: ok so people typically buy two more machines to proxy NFS/SMB?
[20:26] <alfredodeza> rkdemon: can you provide a full paste of the output?
[20:26] <gregsfortytwo1> well, people who are running cephfs in production are brave and don't talk to us much, so I'm not sure
[20:26] <rkdemon> Ok.. coming up with it
[20:26] <andrew__> mathias: I created a bucket and added the osd into the bucket, but the osd is still down.
[20:26] <gregsfortytwo1> but generally speaking, if freeing kernel memory requires allocating userspace memory you can have trouble
[20:27] <gregsfortytwo1> so I'd expect samba or ganesha or other userspace server implementations to work fine, but kernel servers on an OSD node I would not do
[20:27] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (Quit: Textual IRC Client: www.textualapp.com)
[20:27] <mathias> gregsfortytwo1: ok :D so which way would you go to have clients access data that is in ceph via NFS in a redundant and scalable way?
[20:27] <rkdemon> alfredodeza: http://pastebin.com/85zyCFVn
[20:28] <gregsfortytwo1> mathias: I don't know a ton about NFS, but I'd explore ganesha, which I believe has native ceph support at this point
[20:28] <gregsfortytwo1> haven't used it personally, though
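A minimal nfs-ganesha export using its native Ceph backend looks roughly like this (a sketch of the upstream ganesha config syntax; the export id, pseudo path, and access mode are placeholder choices, not values from the log):

```
EXPORT {
    Export_Id = 1;          # arbitrary unique id for this export
    Path = "/";             # path within cephfs to export
    Pseudo = "/cephfs";     # NFSv4 pseudo-filesystem mount point
    Access_Type = RW;
    FSAL {
        Name = CEPH;        # ganesha's userspace cephfs backend
    }
}
```

Because ganesha runs entirely in userspace, it avoids the kernel-client-on-OSD deadlock discussed above.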
[20:28] <alfredodeza> rkdemon: you have hostname problems.
[20:28] <alfredodeza> rkdemon: do you see the big warnings with asterisks ?
[20:29] <alfredodeza> rkdemon: [ceph3][WARNIN] provided hostname must match remote hostname
[20:29] <mathias> gregsfortytwo1: ok thx - an alternative would be RBD and an active/passive NFS cluster
[20:29] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[20:29] <alfredodeza> it actually tells you what it found and what it should match:
[20:29] <alfredodeza> [ceph3][WARNIN] provided hostname: ceph3
[20:29] <alfredodeza> [ceph3][WARNIN] remote hostname: CT14007GA018
[20:29] <rkdemon> alfredodeza: O no
[20:30] <rkdemon> I did not realize this and thought it was more sinister!!
[20:30] <rkdemon> Thank you..
[20:30] <rkdemon> I will change the hostnames and see what gives
[20:30] <alfredodeza> no problem
[20:30] <alfredodeza> good luck!
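The check that tripped rkdemon can be sketched like this: ceph-deploy compares the name given on the command line with the node's short hostname, and refuses to proceed on a mismatch (the two names below are taken from the log; `hostnamectl` assumes a systemd host):

```shell
# Reproduce ceph-deploy's hostname check from the log.
provided="ceph3"          # name passed to ceph-deploy
remote="CT14007GA018"     # what `hostname -s` returned on the node
if [ "$provided" != "$remote" ]; then
    echo "provided hostname must match remote hostname"
fi
# Fix, run on the node itself:
#   hostnamectl set-hostname ceph3
```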
[20:31] <mathias> gregsfortytwo1: kinda interesting that a distributed storage project like ceph does not recommend running its own distributed file system in production, isn't it? :D
[20:32] <mathias> when can we expect that to work properly?
[20:35] * LeaChim (~LeaChim@host86-135-182-184.range86-135.btcentralplus.com) has joined #ceph
[20:37] <gregsfortytwo1> we're working on it
[20:38] * bjornar_ (~bjornar@ti0099a430-0158.bb.online.no) has joined #ceph
[20:38] <gregsfortytwo1> there are some use cases that seem pretty stable so far, but the bar for supporting a filesystem is, you know, high
[20:39] <mathias> yeah you can cause a lot of trouble releasing it for production early (data loss etc) - I understand that
[20:39] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[20:44] * dis (~dis@109.110.67.48) has joined #ceph
[20:47] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[20:47] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[20:49] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has left #ceph
[20:50] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[20:55] * sreddy (~oftc-webi@32.97.110.56) has joined #ceph
[20:56] * Eco (~Eco@107.43.163.9) Quit (Ping timeout: 480 seconds)
[20:55] <sreddy> Trying to create a ceph cluster with 3 monitor nodes, with one of the monitor nodes as ceph-deploy admin node
[20:59] <sreddy> did (1) ceph-deploy new mon1 mon2 mon3, (2) installed ceph packages on all the nodes, (3) ceph-deploy mon create-initial
[20:59] <sreddy> the nodes are unable to communicate.
[20:59] <sreddy> $ ceph -s 2014-09-03 18:57:17.059069 7f504a435700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication 2014-09-03 18:57:17.059085 7f504a435700 0 librados: client.admin initialization error (2) No such file or directory Error connecting to cluster: ObjectNotFound
[20:59] <sreddy> Not sure why the keyring is missing
[21:00] <sreddy> any ideas?
[21:00] <alfredodeza> sreddy: do you have a firewall on?
[21:01] * thomnico (~thomnico@2a01:e35:8b41:120:9456:1a39:bdaf:324a) Quit (Quit: Ex-Chat)
[21:01] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[21:03] <true> check network availability between your mons on port 6789
[21:05] <mathias> sreddy: do all three commands run through without errors?
[21:05] <sreddy> all the nodes are behind firewall, on the same network and can ping to each other
[21:06] * vbellur (~vijay@122.167.229.75) Quit (Ping timeout: 480 seconds)
[21:06] <mathias> behind firewall AND on the same network? so you're talking about a host-based firewall?
[21:08] * diegows (~diegows@200.68.116.185) Quit (Ping timeout: 480 seconds)
[21:08] <sreddy> well, I would say they are on a different network that is not connected to the public network
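The port check true suggested can be sketched as follows (mon hostnames are placeholders; monitors listen on TCP 6789 by default; bash's `/dev/tcp` is used so no extra tools are needed):

```shell
# Probe the default mon port on each monitor node.
for host in mon1 mon2 mon3; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/6789" 2>/dev/null; then
        echo "$host: mon port reachable"
    else
        echo "$host: unreachable (firewall, or mon not running)"
    fi
done
```

If a port is blocked, one common fix on each mon is a rule like `iptables -I INPUT -p tcp --dport 6789 -j ACCEPT`.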
[21:09] * linjan (~linjan@176.195.196.165) Quit (Ping timeout: 480 seconds)
[21:09] * Eco (~Eco@2602:306:324a:41b0::49) has joined #ceph
[21:11] * sleinen (~Adium@2001:620:1000:3:7ed1:c3ff:fedc:3223) has joined #ceph
[21:13] <sreddy> ceph-deploy mon create-initial exited with errors
[21:13] <sreddy> [ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum: [ceph_deploy.mon][ERROR ] wdc01cpm001ccz020 [ceph_deploy.mon][ERROR ] wdc01cpm002ccz020 [ceph_deploy.mon][ERROR ] wdc01cpm003ccz020
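When create-initial reports monitors that "have still not reached quorum", each mon can be asked for its own view over the admin socket; a sketch assuming the default socket path and that it is run on a mon node against a live daemon:

```shell
# Ask the local mon daemon for its status over the admin socket.
ceph --admin-daemon "/var/run/ceph/ceph-mon.$(hostname -s).asok" mon_status
# "state": "probing"  -> the mon cannot reach its peers (check TCP 6789)
# "state": "electing" -> peers are reachable and quorum is forming
```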
[21:13] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[21:17] * BManojlovic (~steki@95.180.4.243) Quit (Quit: Ja odoh a vi sta 'ocete...)
[21:17] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[21:21] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[21:23] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[21:35] * mathias (~mathias@tmo-109-117.customers.d1-online.com) Quit (Quit: leaving)
[21:42] * JayJ (~jayj@157.130.21.226) has joined #ceph
[21:46] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[21:50] * bjornar_ (~bjornar@ti0099a430-0158.bb.online.no) Quit (Ping timeout: 480 seconds)
[21:51] * BManojlovic (~steki@95.180.4.243) Quit (Ping timeout: 480 seconds)
[21:58] * marrusl (~mark@2604:2000:60e3:8900:c189:aa89:a02d:58b1) Quit (Ping timeout: 480 seconds)
[22:07] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[22:12] * marrusl (~mark@cpe-72-229-1-142.nyc.res.rr.com) has joined #ceph
[22:15] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[22:16] * sputnik13 (~sputnik13@client64-42.sdsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:17] * BManojlovic (~steki@37.19.107.0) has joined #ceph
[22:17] * Eco (~Eco@2602:306:324a:41b0::49) Quit (Ping timeout: 480 seconds)
[22:18] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[22:18] * ssejourne (~ssejourne@37.187.216.206) Quit (Quit: leaving)
[22:18] * ssejourne (~ssejourne@2001:41d0:52:300::d16) has joined #ceph
[22:20] * Sysadmin88 (~IceChat77@176.250.164.108) has joined #ceph
[22:24] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[22:25] <andrew__> 2014-09-03 15:22:03.301685 7f820d30d7a0 -1 filestore(/var/lib/ceph/tmp/mnt.zq_Po1) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.zq_Po1/journal: (2) No such file or directory 2014-09-03 15:22:03.301740 7f820d30d7a0 -1 OSD::mkfs: ObjectStore::mkfs failed with error -2 2014-09-03 15:22:03.301814 7f820d30d7a0 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.zq_Po1: (2) No such file or directory
[22:25] <andrew__> when I run ceph-disk activate /dev/xvdc1
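The `mkjournal ... (2) No such file or directory` error usually means the `journal` symlink inside the OSD data partition points at a device that does not exist. A hedged triage sketch (device name is from the log; `/mnt` is an arbitrary mount point):

```shell
# Mount the OSD data partition and inspect the journal symlink.
mount /dev/xvdc1 /mnt
ls -l /mnt/journal      # the symlink target must be an existing device
# If the target (often a journal partition referenced by partuuid) is
# missing, fix or recreate the journal device, then retry:
#   ceph-disk activate /dev/xvdc1
umount /mnt
```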
[22:30] * steki (~steki@37.19.108.6) has joined #ceph
[22:31] * BManojlovic (~steki@37.19.107.0) Quit (Ping timeout: 480 seconds)
[22:31] * steki (~steki@37.19.108.6) Quit ()
[22:33] * steki (~steki@37.19.108.6) has joined #ceph
[22:34] * zerick (~eocrospom@190.118.30.195) Quit (Ping timeout: 480 seconds)
[22:36] * marrusl (~mark@cpe-72-229-1-142.nyc.res.rr.com) Quit (Quit: sync && halt)
[22:37] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:38] * marrusl (~mark@2604:2000:60e3:8900:99c5:57ab:ba78:1518) has joined #ceph
[22:39] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[22:42] * BManojlovic (~steki@37.19.108.6) has joined #ceph
[22:44] * steki (~steki@37.19.108.6) Quit (Read error: Connection reset by peer)
[22:47] * BManojlovic (~steki@37.19.108.6) Quit (Read error: Connection reset by peer)
[22:47] * BManojlovic (~steki@37.19.108.6) has joined #ceph
[22:50] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[22:54] * steki (~steki@net251-186-245-109.mbb.telenor.rs) has joined #ceph
[22:55] * steki (~steki@net251-186-245-109.mbb.telenor.rs) Quit ()
[22:55] * zerick (~eocrospom@190.118.30.195) has joined #ceph
[22:56] * steki (~steki@37.19.108.6) has joined #ceph
[22:57] * BManojlovic (~steki@37.19.108.6) Quit (Read error: Connection reset by peer)
[22:57] * rkdemon (~rkdemon@rrcs-67-79-20-162.sw.biz.rr.com) Quit (Ping timeout: 480 seconds)
[23:03] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[23:05] * sreddy slaps andrew__ around a bit with a large fishbot
[23:10] * JayJ (~jayj@157.130.21.226) has joined #ceph
[23:14] * Eco (~Eco@107.36.174.192) has joined #ceph
[23:17] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[23:20] * alram (~alram@38.122.20.226) Quit (Quit: Lost terminal)
[23:20] * alram (~alram@38.122.20.226) has joined #ceph
[23:22] * sleinen (~Adium@2001:620:1000:3:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[23:25] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[23:26] * xarses (~andreww@12.164.168.117) has joined #ceph
[23:27] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[23:30] * Eco (~Eco@107.36.174.192) Quit (Ping timeout: 480 seconds)
[23:32] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[23:34] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[23:35] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Quit: Leaving)
[23:39] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[23:39] <devicenull> how would I go about troubleshooting ceph-disk prepare issues
[23:40] <devicenull> this is what I see https://gist.githubusercontent.com/devicenull/9db7371c19516d48fb2c/raw/22c4e09e0e496cd2113003d20e738bf6874c2ae7/gistfile1.txt
[23:41] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has left #ceph
[23:42] <devicenull> oh nm
[23:42] <devicenull> I removed a bunch of OSDs but forgot to wipe their keys
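devicenull's fix can be sketched as the usual teardown for a removed OSD; `osd.12` is a hypothetical id standing in for the ones that were removed:

```shell
# Remove leftover state for a deleted OSD (hypothetical id 12).
ceph auth del osd.12         # stale cephx key that makes prepare fail
ceph osd crush remove osd.12 # drop it from the CRUSH map
ceph osd rm osd.12           # drop it from the OSD map
```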
[23:44] * JayJ (~jayj@157.130.21.226) has joined #ceph
[23:45] * diegows (~diegows@host131.181-1-236.telecom.net.ar) has joined #ceph
[23:47] * steki (~steki@37.19.108.6) Quit (Ping timeout: 480 seconds)
[23:51] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:54] <sreddy> I see the following process running on the monitor nodes that have not joined the cluster: python /usr/sbin/ceph-create-keys -i <hostname>
[23:54] <sreddy> The same process is not seen in another healthy cluster
[23:54] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) Quit (Quit: ZNC - http://znc.sourceforge.net)
[23:55] <sreddy> anyone know why the ceph-create-keys process is not exiting?
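ceph-create-keys loops until its monitor reports quorum, so it never exiting is a symptom of the quorum failure seen earlier rather than a problem of its own. A hedged way to confirm, assuming default daemon naming and running on the mon node:

```shell
# ceph-create-keys polls the mon for quorum before generating keys;
# check what the local mon itself thinks:
ceph daemon mon.$(hostname -s) mon_status
# No quorum across all mons usually means blocked mon traffic
# (TCP 6789) or mismatched mon addresses in ceph.conf.
```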

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.