#ceph IRC Log

Index

IRC Log for 2015-07-15

Timestamps are in GMT/BST.

[0:00] <loth> gleam: When i run rbd export it appears to write empty space as well
[0:00] * Dinnerbone (~Snowcat4@5NZAAE3BD.tor-irc.dnsbl.oftc.net) Quit ()
[0:01] <loth> hmm, qemu-img info says disk size 3.6g, i guess thats doing what it should then
[0:02] <gleam> if you want a diff from the original image you can do that with export-diff
[0:02] <gleam> dunno if that's what you want instead
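(For reference, a minimal export-diff round trip might look like the following sketch; the pool, image, and snapshot names here are placeholders:)

    # mark a known point in time on the source image
    rbd snap create rbd/vm-disk@base
    # export only the changes made since that snapshot
    rbd export-diff --from-snap base rbd/vm-disk diff.bin
    # replay the diff onto a copy that already has the 'base' snapshot
    rbd import-diff diff.bin rbd/vm-disk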
[0:03] <loth> so my virtual size is 15gb and disk size is 3.6gb, would it use 15 or 3.6 if i imported this into a different cluster?
[0:03] <gleam> good question!
[0:03] <gleam> i have no idea
[0:04] * alram_ (~alram@206.169.83.146) Quit (Quit: leaving)
[0:04] * bitserker (~toni@188.87.126.67) has joined #ceph
[0:05] <loth> i guess one way to find out :)
[0:06] * Volture (~darkman@broadband-5-228-133-59.nationalcablenetworks.ru) Quit (Quit: Konversation terminated!)
[0:10] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:10] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:11] * davidbitton (~davidbitt@ool-44c4506f.dyn.optonline.net) has joined #ceph
[0:16] * kutija|away (~kutija@95.180.90.38) Quit (Ping timeout: 480 seconds)
[0:18] <loth> gleam: looks like it imports it thinly
[0:19] <gleam> neat
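(A plain export/import can also be piped between clusters without an intermediate file; the image name and remote conf path are placeholders:)

    rbd export rbd/vm-disk - | rbd -c /etc/ceph/remote.conf import - rbd/vm-disk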
[0:20] * kutija (~kutija@95.180.90.38) has joined #ceph
[0:20] * bene2 (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:21] * jbautista- (~wushudoin@transit-86-181-132-209.redhat.com) has joined #ceph
[0:24] * kevinperks (~Adium@2606:a000:80ad:1300:187b:9ae7:8c8e:1e98) Quit (Quit: Leaving.)
[0:26] * Plesioth (~jacoo@mail.calyx.com) has joined #ceph
[0:28] * Destreyf_ (~quassel@email.newagecomputers.info) has joined #ceph
[0:28] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:28] * wushudoin_ (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[0:30] * jbautista- (~wushudoin@transit-86-181-132-209.redhat.com) Quit (Ping timeout: 480 seconds)
[0:30] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:31] * danieagle (~Daniel@177.9.73.76) Quit (Quit: Thanks for everything! :-) see you! :-))
[0:32] <ska> Can I have multiple RGW's for a cluster, zone, or Region?
[0:32] <lurbs> You can have as many as you like, and can load balance across them.
[0:33] <ska> Cool.. ty
[0:33] <lurbs> They don't hold state, and just talk HTTP(S) out the front.
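(Since the gateways are stateless, any plain HTTP load balancer works in front of them. A minimal haproxy sketch, with placeholder backend addresses and the default civetweb port:)

    frontend rgw_frontend
        bind *:80
        default_backend rgw_backend
    backend rgw_backend
        balance roundrobin
        server rgw1 10.0.0.11:7480 check
        server rgw2 10.0.0.12:7480 check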
[0:33] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) Quit (Quit: Bye)
[0:34] * Destreyf (~quassel@50.21.192.142) Quit (Ping timeout: 480 seconds)
[0:34] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[0:36] * kutija_ (~kutija@95.180.90.38) has joined #ceph
[0:37] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:37] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:37] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[0:39] * jbautista- (~wushudoin@38.140.108.2) has joined #ceph
[0:41] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[0:41] <ska> Here is my new Ceph model diagram: http://yuml.me/edit/fb220bac
[0:41] <ska> Almost everything depends on the Mons and OSDs.
[0:42] <ska> I still have issues with Pools <-> PG containment and relationship.
[0:42] <ska> Can a PG move from One Pool to another?
[0:43] * kutija (~kutija@95.180.90.38) Quit (Ping timeout: 480 seconds)
[0:45] * rendar (~I@host192-180-dynamic.8-79-r.retail.telecomitalia.it) Quit ()
[0:46] <lurbs> No, each pool has a defined set of PGs.
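(This is visible on the command line: PG ids are prefixed with the id of the owning pool. Pool name is a placeholder:)

    ceph osd lspools                # pool ids and names
    ceph osd pool get rbd pg_num    # number of PGs defined for this pool
    ceph pg dump pgs_brief | head   # pg ids look like "<pool-id>.<pg-id>", e.g. 2.1a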
[0:53] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[0:55] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:55] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[0:56] * Plesioth (~jacoo@7R2AACNFA.tor-irc.dnsbl.oftc.net) Quit ()
[0:56] * Zeis (~MatthewH1@tor-exit.xshells.net) has joined #ceph
[0:58] * fsimonce (~simon@host249-48-dynamic.53-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:00] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) has joined #ceph
[1:00] * lcurtis (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[1:01] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:04] * rlrevell (~leer@184.52.129.221) has joined #ceph
[1:10] * CheKoLyN (~saguilar@bender.parc.xerox.com) has joined #ceph
[1:10] * CheKoLyN (~saguilar@bender.parc.xerox.com) Quit ()
[1:12] * Nacer (~Nacer@2001:41d0:fe82:7200:4d8f:441c:ad3b:b9e4) Quit (Remote host closed the connection)
[1:14] * bitserker (~toni@188.87.126.67) Quit (Ping timeout: 480 seconds)
[1:15] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) has joined #ceph
[1:21] * kutija_ is now known as kutija|away
[1:23] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[1:24] * tsugano (~tsugano@pw126254143069.8.panda-world.ne.jp) has joined #ceph
[1:25] <davidbitton> when issuing the command, "ceph-deploy rgw create node1", rgw fails to start on the node
[1:25] * oms101 (~oms101@p20030057EA526700EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:25] <davidbitton> the start script looked for sections in ceph.conf that start with "client.radosgw.", which don't exist
[1:26] * Zeis (~MatthewH1@7R2AACNGB.tor-irc.dnsbl.oftc.net) Quit ()
[1:26] <davidbitton> script looks for...
[1:26] * mLegion (~Chaos_Lla@jaffer.tor-exit.calyxinstitute.org) has joined #ceph
[1:27] <davidbitton> what should ceph-deploy do when creating an rgw?
[1:28] * tsugano (~tsugano@pw126254143069.8.panda-world.ne.jp) Quit ()
[1:29] <off_rhoden> davidbitton: that's the "old style" of rgw deployment (the client.radosgw sections). It should be looking in /var/lib/ceph/, as in: https://github.com/ceph/ceph/blob/master/src/init-radosgw#L59
[1:29] <off_rhoden> the init script should be looking for both styles
[1:29] <off_rhoden> starting with Hammer, ceph-deploy should put that all in place for you.
[1:30] <davidbitton> should, but doesn't.
[1:30] <off_rhoden> davidbitton: what version of ceph do you have installed on the node? what OS?
[1:30] <davidbitton> because when ceph-deploy issues the systemctl start, rgw fails
[1:31] <davidbitton> ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
[1:31] <davidbitton> Linux ceph-server-1 3.10.0-229.7.2.el7.x86_64 #1 SMP Tue Jun 23 22:06:11 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[1:31] <davidbitton> CentOS 7.1
[1:32] <davidbitton> this was all by following the quickstart
[1:33] * oms101 (~oms101@p20030057EA753000EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:34] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[1:34] <off_rhoden> hmm, it looks like the backport of this is not in 0.94.2, though it should have been. I'll have to look into what happened
[1:34] <off_rhoden> but in the meantime...
[1:35] <off_rhoden> can you wget https://raw.githubusercontent.com/ceph/ceph/master/src/init-radosgw and place it at /etc/init.d/ceph-radosgw and try that?
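(I.e., roughly the following; a sketch only, adjust paths as needed:)

    wget https://raw.githubusercontent.com/ceph/ceph/master/src/init-radosgw
    sudo install -m 0755 init-radosgw /etc/init.d/ceph-radosgw
    sudo /etc/init.d/ceph-radosgw start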
[1:37] * davidbitton (~davidbitt@ool-44c4506f.dyn.optonline.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:37] * davidbitton (~davidbitt@ool-44c4506f.dyn.optonline.net) has joined #ceph
[1:41] <davidbitton> off_rhoden: ok, it started
[1:41] <off_rhoden> davidbitton: looks like I was wrong - the fix is slated for 0.94.3. http://tracker.ceph.com/issues/11735
[1:41] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[1:41] <off_rhoden> but, glad to hear it started!
[1:41] <davidbitton> also, i found a need for a couple changes to the quick start docs
[1:42] <davidbitton> when it says, "sudo mkdir /var/local/osd0", that could use a -p if the target machine doesn't have a /var/local folder
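(I.e., the directory-based quickstart flow, with a placeholder hostname:)

    sudo mkdir -p /var/local/osd0
    ceph-deploy osd prepare node1:/var/local/osd0
    ceph-deploy osd activate node1:/var/local/osd0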
[1:43] * ndru (~jawsome@00020819.user.oftc.net) has joined #ceph
[1:44] <off_rhoden> I never knew the quickstart would have had you using directories as OSDs. :) Or if I did know, I forgot.
[1:44] <davidbitton> they should be bare drives, yes?
[1:45] <davidbitton> more like /mnt/osd0 then?
[1:45] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[1:47] <off_rhoden> normally you let ceph, ceph-disk, ceph-deploy handle it all by just referencing the block devices directly. Like /dev/sdb, /dev/sdc, etc. It will format/partition the drives as needed, and take care of mounting them as well
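(E.g., with ceph-deploy; the hostname and device are placeholders:)

    ceph-deploy disk zap node1:sdb     # wipe the drive first
    ceph-deploy osd create node1:sdb   # partition, format, mount, and start the OSD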
[1:47] * cdelatte (~cdelatte@165.166.241.32) has joined #ceph
[1:47] * cdelatte (~cdelatte@165.166.241.32) Quit ()
[1:48] <davidbitton> yeah, definitely not a "quickstart" thing
[1:49] <davidbitton> off_rhoden: hmm, a "ceph-deploy mon add ceph-server-2" is stuck at [ceph-server-2][WARNIN] 2015-07-14 23:46:34.955610 7fa81ed7e700 0 monclient: hunting for new mon
[1:51] <off_rhoden> davidbitton: :/ I've seen a few reports of that happening lately. It needs to be looked at (broken 'mon add' functionality) and is on my plate for this week. It is unusual for someone to be adding new monitors like that when just getting started, however.
[1:52] <off_rhoden> You should deploy however many you need at the beginning (usually 1 or 3) and create them with 'mon create-initial'. But yeah, something is up with 'mon add'.
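(In other words, something like the following; hostnames are placeholders:)

    ceph-deploy new node1 node2 node3   # list all initial mons up front
    ceph-deploy mon create-initial      # deploy them and gather keys
    # later, only ever one host at a time:
    ceph-deploy mon add node4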
[1:56] * mLegion (~Chaos_Lla@5NZAAE3FK.tor-irc.dnsbl.oftc.net) Quit ()
[1:56] * Schaap (~starcoder@relay-h.tor-exit.network) has joined #ceph
[1:59] * CheKoLyN (~saguilar@c-24-23-216-79.hsd1.ca.comcast.net) has joined #ceph
[2:05] * snakamoto (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[2:06] * shakamunyi (~shakamuny@209.66.74.34) Quit (Ping timeout: 480 seconds)
[2:09] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[2:09] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[2:09] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[2:09] * jaank (~quassel@98.215.50.223) has joined #ceph
[2:12] <davidbitton> i'm just trying to follow along with the quickstart. i'm looking to see if ceph is a better alternative for me vs. glusterfs
[2:15] <davidbitton> off_rhoden: check this out, http://pastebin.com/uybYDeJi
[2:15] <davidbitton> off_rhoden: ceph-deploy never made it to ceph-server-3
[2:15] <off_rhoden> davidbitton: mon add only takes one hostname
[2:16] <off_rhoden> you can only add one at a time
[2:16] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[2:16] <davidbitton> ok, but the quickstart implies otherwise
[2:16] <off_rhoden> this came up recently too. :) I enforced that at the CLI, but haven't packaged a new release of ceph-deploy to cover that.
[2:16] <off_rhoden> davidbitton: understood. you are not the first. hopefully you'll be the last.
[2:17] * LeaChim (~LeaChim@host81-157-90-38.range81-157.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:17] * snakamoto1 (~Adium@192.16.26.2) has joined #ceph
[2:17] <davidbitton> ok, the mon add for ceph-server-3 came up quickly
[2:19] <off_rhoden> davidbitton: interesting. and it worked?
[2:19] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:19] <davidbitton> well, health check needs a little maintenance
[2:19] <davidbitton> HEALTH_WARN clock skew detected on mon.ceph-server-2, mon.ceph-server-3
[2:20] <davidbitton> i have ntp.conf set for the local NTP server at work; i'm at home.
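(A quick way to check and fix that on each mon host, assuming ntpd; service names vary by distro:)

    sudo systemctl restart ntpd   # or run ntpdate once against a reachable server
    ntpq -pn                      # a selected peer is marked with '*'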
[2:20] <off_rhoden> I wonder why the first mon add hung at "hunting for new mon" but the later commands worked.
[2:20] <off_rhoden> like i said, lots of reports of failed "adds". gotta look at it this week.
[2:24] * snakamoto (~Adium@192.16.26.2) Quit (Ping timeout: 480 seconds)
[2:25] * xarses (~xarses@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:26] * Schaap (~starcoder@5NZAAE3GU.tor-irc.dnsbl.oftc.net) Quit ()
[2:26] * SurfMaths (~Spikey@heaven.tor.ninja) has joined #ceph
[2:26] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) Quit (Remote host closed the connection)
[2:37] * arbrandes (~arbrandes@179.210.13.90) Quit (Remote host closed the connection)
[2:40] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) has joined #ceph
[2:41] * jbautista- (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[2:41] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:48] * sankarsh_ (~sankarsha@106.206.129.121) has joined #ceph
[2:54] * chr1s_ (~chr1s@aftr-88-217-180-158.dynamic.mnet-online.de) has joined #ceph
[2:56] * SurfMaths (~Spikey@5NZAAE3HP.tor-irc.dnsbl.oftc.net) Quit ()
[2:56] * tuhnis (~Plesioth@relay-d.tor-exit.network) has joined #ceph
[2:59] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[2:59] * fam_away is now known as fam
[3:01] * chr1s (~chr1s@2001:a61:8d:7401:2cab:1bdf:b66b:11fe) Quit (Ping timeout: 480 seconds)
[3:11] * snakamoto1 (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[3:14] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[3:16] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[3:17] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:17] * nsoffer (~nsoffer@bzq-109-65-255-114.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[3:26] * tuhnis (~Plesioth@9S0AAB8TV.tor-irc.dnsbl.oftc.net) Quit ()
[3:26] * AGaW (~geegeegee@tor-exit2-readme.puckey.org) has joined #ceph
[3:28] * yguang11 (~yguang11@2001:4998:effd:600:f811:203d:13b3:2ed2) Quit (Remote host closed the connection)
[3:28] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) Quit (Remote host closed the connection)
[3:36] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[3:36] * fam is now known as fam_away
[3:37] * shohn (~shohn@dslb-092-078-028-056.092.078.pools.vodafone-ip.de) has joined #ceph
[3:37] * fam_away is now known as fam
[3:38] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[3:41] * shohn_afk (~shohn@dslb-178-012-178-005.178.012.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[3:43] * CheKoLyN (~saguilar@c-24-23-216-79.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:45] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[3:48] * reed (~reed@2607:f298:a:607:60d2:1929:a61f:3090) Quit (Ping timeout: 480 seconds)
[3:48] * fam is now known as fam_away
[3:48] * jks (~jks@178.155.151.121) Quit (Ping timeout: 480 seconds)
[3:49] * fam_away is now known as fam
[3:56] * AGaW (~geegeegee@5NZAAE3JI.tor-irc.dnsbl.oftc.net) Quit ()
[3:56] * Kyso_ (~Freddy@176.10.99.205) has joined #ceph
[3:57] * zhaochao (~zhaochao@111.161.77.233) has joined #ceph
[3:59] * baotiao (~baotiao@218.30.116.3) has joined #ceph
[4:00] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:01] * baotiao (~baotiao@218.30.116.3) Quit ()
[4:09] * i_m (~ivan.miro@83.149.35.245) has joined #ceph
[4:15] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[4:18] * kefu (~kefu@114.92.106.47) has joined #ceph
[4:26] * Kyso_ (~Freddy@5NZAAE3KH.tor-irc.dnsbl.oftc.net) Quit ()
[4:26] * RaidSoft (~w2k@9S0AAB8W3.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:29] * derjohn_mobi (~aj@pd95cf11c.dip0.t-ipconnect.de) has joined #ceph
[4:33] * CheKoLyN (~saguilar@c-24-23-216-79.hsd1.ca.comcast.net) has joined #ceph
[4:36] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[4:36] * derjohn_mob (~aj@pd95cf11c.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:37] * snakamoto1 (~Adium@192.16.26.2) has joined #ceph
[4:42] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[4:43] * CheKoLyN (~saguilar@c-24-23-216-79.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:44] * snakamoto (~Adium@192.16.26.2) Quit (Ping timeout: 480 seconds)
[4:48] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:56] * RaidSoft (~w2k@9S0AAB8W3.tor-irc.dnsbl.oftc.net) Quit ()
[4:56] * `Jin (~clarjon1@marcuse-2.nos-oignons.net) has joined #ceph
[5:00] * sjm (~sjm@pool-100-1-115-73.nwrknj.fios.verizon.net) has joined #ceph
[5:09] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Quit: Leaving...)
[5:09] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[5:13] * davidbitton (~davidbitt@ool-44c4506f.dyn.optonline.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:15] * kefu_ (~kefu@183.194.250.141) has joined #ceph
[5:16] * davidz1 (~davidz@2605:e000:1313:8003:edf6:f0ec:801c:46e7) Quit (Read error: Connection reset by peer)
[5:16] * davidz (~davidz@2605:e000:1313:8003:3467:2a51:6731:3ef7) has joined #ceph
[5:20] * kefu (~kefu@114.92.106.47) Quit (Ping timeout: 480 seconds)
[5:22] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[5:24] * kefu_ is now known as kefu
[5:26] * `Jin (~clarjon1@5NZAAE3MJ.tor-irc.dnsbl.oftc.net) Quit ()
[5:26] * Quatroking (~Mousey@176.10.99.206) has joined #ceph
[5:29] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) has joined #ceph
[5:34] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[5:37] * baotiao (~baotiao@218.30.116.10) has joined #ceph
[5:40] * overclk (~overclk@117.202.110.202) has joined #ceph
[5:41] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[5:51] * CephLogBot (~PircBot@92.63.168.213) has joined #ceph
[5:51] * Topic is 'CDS Schedule Posted: http://goo.gl/i72wN8 || http://ceph.com/get || dev channel #ceph-devel || test lab channel #sepia'
[5:51] * Set by scuttlemonkey!~scuttle@nat-pool-rdu-t.redhat.com on Mon Mar 02 21:13:33 CET 2015
[5:53] * kefu_ (~kefu@183.194.250.141) has joined #ceph
[5:53] * Vacuum_ (~Vacuum@88.130.210.41) has joined #ceph
[5:56] * Quatroking (~Mousey@5NZAAE3NC.tor-irc.dnsbl.oftc.net) Quit ()
[5:56] * Frostshifter (~Borf@orion.enn.lu) has joined #ceph
[5:56] * kefu__ (~kefu@114.92.106.47) has joined #ceph
[5:56] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[5:57] * kefu (~kefu@183.194.250.141) Quit (Ping timeout: 480 seconds)
[5:57] * overclk (~overclk@117.202.110.202) Quit (Remote host closed the connection)
[5:57] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:00] * Vacuum__ (~Vacuum@88.130.199.246) Quit (Ping timeout: 480 seconds)
[6:02] * rakesh (~rakesh@121.244.87.124) has joined #ceph
[6:03] * kefu__ (~kefu@114.92.106.47) Quit (Max SendQ exceeded)
[6:03] * kefu_ (~kefu@183.194.250.141) Quit (Ping timeout: 480 seconds)
[6:03] * kefu (~kefu@114.92.106.47) has joined #ceph
[6:11] * rlrevell1 (~leer@184.52.129.221) has joined #ceph
[6:11] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[6:15] * snakamoto1 (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[6:18] * overclk (~overclk@117.202.110.202) has joined #ceph
[6:20] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) has joined #ceph
[6:22] * yguang11_ (~yguang11@2001:4998:effd:7804::10ec) has joined #ceph
[6:23] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[6:23] * rlrevell1 (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[6:23] * rlrevell (~leer@184.52.129.221) has joined #ceph
[6:26] * overclk (~overclk@117.202.110.202) Quit (Remote host closed the connection)
[6:26] * Frostshifter (~Borf@5NZAAE3N4.tor-irc.dnsbl.oftc.net) Quit ()
[6:26] * Kizzi (~shishi@195.169.125.226) has joined #ceph
[6:27] * kefu is now known as kefu|afk
[6:28] * ndru (~jawsome@00020819.user.oftc.net) Quit (Quit: brb)
[6:28] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:28] * ndru (~jawsome@00020819.user.oftc.net) has joined #ceph
[6:33] * snakamoto (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[6:40] * kefu|afk (~kefu@114.92.106.47) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:40] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[6:41] * overclk (~overclk@117.202.110.202) has joined #ceph
[6:45] * overclk (~overclk@117.202.110.202) Quit (Read error: Connection reset by peer)
[6:45] * overclk (~overclk@117.202.110.202) has joined #ceph
[6:56] * Kizzi (~shishi@9S0AAB8Z8.tor-irc.dnsbl.oftc.net) Quit ()
[6:56] * rikai1 (~Popz@207.201.223.196) has joined #ceph
[7:00] * kefu (~kefu@114.92.106.47) has joined #ceph
[7:03] * dopesong_ (~dopesong@lb1.mailer.data.lt) has joined #ceph
[7:12] * sankarsh_ (~sankarsha@106.206.129.121) Quit (Ping timeout: 480 seconds)
[7:14] * rlrevell (~leer@184.52.129.221) has joined #ceph
[7:17] * kefu (~kefu@114.92.106.47) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:21] * rakesh (~rakesh@121.244.87.124) Quit (Remote host closed the connection)
[7:22] * kefu (~kefu@183.194.250.141) has joined #ceph
[7:26] * rikai1 (~Popz@7R2AACNRK.tor-irc.dnsbl.oftc.net) Quit ()
[7:26] * Zeis (~MJXII@tor-exit0-readme.dfri.se) has joined #ceph
[7:30] * kefu (~kefu@183.194.250.141) Quit (Ping timeout: 480 seconds)
[7:31] * yguang11_ (~yguang11@2001:4998:effd:7804::10ec) Quit (Ping timeout: 480 seconds)
[7:32] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[7:37] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:39] * rakesh (~rakesh@121.244.87.124) has joined #ceph
[7:41] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:48] * rakesh (~rakesh@121.244.87.124) Quit (Quit: Leaving)
[7:48] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[7:49] * rakesh (~rakesh@121.244.87.124) has joined #ceph
[7:54] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[7:55] * rlrevell (~leer@184.52.129.221) has joined #ceph
[7:55] * dopesong_ (~dopesong@lb1.mailer.data.lt) Quit (Remote host closed the connection)
[7:56] * Zeis (~MJXII@7R2AACNSD.tor-irc.dnsbl.oftc.net) Quit ()
[7:56] * phyphor (~richardus@relay-h.tor-exit.network) has joined #ceph
[7:59] * oro (~oro@84-73-73-158.dclient.hispeed.ch) has joined #ceph
[7:59] * oro_ (~oro@84-73-73-158.dclient.hispeed.ch) has joined #ceph
[8:01] * An_T_oine (~Antoine@192.93.37.4) has joined #ceph
[8:07] * barnim (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Leaving)
[8:13] * oro (~oro@84-73-73-158.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:14] * oro_ (~oro@84-73-73-158.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:15] * dopesong (~dopesong@lb1.mailer.data.lt) has joined #ceph
[8:19] * dopesong (~dopesong@lb1.mailer.data.lt) Quit ()
[8:24] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[8:24] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) has joined #ceph
[8:24] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[8:25] * Sysadmin88 (~IceChat77@94.4.22.12) Quit (Quit: Oops. My brain just hit a bad sector)
[8:26] * rlrevell (~leer@184.52.129.221) has joined #ceph
[8:26] * phyphor (~richardus@5NZAAE3RS.tor-irc.dnsbl.oftc.net) Quit ()
[8:26] * Uniju1 (~PierreW@drew010-relay01.drew-phillips.com) has joined #ceph
[8:31] * oro (~oro@84-73-73-158.dclient.hispeed.ch) has joined #ceph
[8:31] * overclk (~overclk@117.202.110.202) Quit (Remote host closed the connection)
[8:32] * overclk (~overclk@117.202.110.202) has joined #ceph
[8:33] * oro_ (~oro@84-73-73-158.dclient.hispeed.ch) has joined #ceph
[8:36] * shohn (~shohn@dslb-092-078-028-056.092.078.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[8:37] * shohn (~shohn@dslb-092-078-028-056.092.078.pools.vodafone-ip.de) has joined #ceph
[8:39] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[8:41] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:42] * kefu (~kefu@114.92.106.47) has joined #ceph
[8:43] * calvinx (~calvin@101.100.172.246) has joined #ceph
[8:44] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) has joined #ceph
[8:48] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[8:49] * Miouge_ (~Miouge@94.136.92.20) has joined #ceph
[8:54] * Miouge (~Miouge@94.136.92.20) Quit (Ping timeout: 480 seconds)
[8:54] * Miouge_ is now known as Miouge
[8:54] * adrian (~abradshaw@tmo-098-193.customers.d1-online.com) has joined #ceph
[8:55] * adrian is now known as Guest183
[8:56] * Uniju1 (~PierreW@5NZAAE3S3.tor-irc.dnsbl.oftc.net) Quit ()
[8:58] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:58] <Be-El> hi
[8:59] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[9:04] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:14] * branto (~branto@213.175.37.10) has joined #ceph
[9:18] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[9:22] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) Quit (Remote host closed the connection)
[9:22] * kefu (~kefu@114.92.106.47) Quit (Max SendQ exceeded)
[9:23] * kefu (~kefu@114.92.106.47) has joined #ceph
[9:23] * bitserker (~toni@88.87.194.130) has joined #ceph
[9:25] <Be-El> what's the ideal ratio between inodes and inode_max (mds_cache_size) in mds perf dump?
[9:26] <Be-El> should an mds be configured to store all inodes in its cache?
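(Both counters can be read from the admin socket; the mds name is a placeholder:)

    ceph daemon mds.a perf dump | grep -E '"inodes"|"inode_max"'
    # inode_max mirrors mds_cache_size; if "inodes" sits at the ceiling while
    # clients hold pins, the cache is undersized for the working set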
[9:26] * Maza (~Sliker@ns365892.ip-94-23-6.eu) has joined #ceph
[9:33] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:35] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:37] * fsimonce (~simon@host249-48-dynamic.53-79-r.retail.telecomitalia.it) has joined #ceph
[9:38] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:40] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[9:44] * bandrus (~brian@198.8.80.83) has joined #ceph
[9:45] * dgurtner (~dgurtner@178.197.231.188) has joined #ceph
[9:45] * kefu_ (~kefu@183.194.250.141) has joined #ceph
[9:50] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[9:51] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[9:52] * kefu (~kefu@114.92.106.47) Quit (Ping timeout: 480 seconds)
[9:52] * bandrus (~brian@198.8.80.83) Quit (Ping timeout: 480 seconds)
[9:53] * dis (~dis@109.110.66.238) Quit (Read error: Connection reset by peer)
[9:53] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[9:55] * overclk (~overclk@117.202.110.202) Quit (Remote host closed the connection)
[9:56] * Maza (~Sliker@5NZAAE3VE.tor-irc.dnsbl.oftc.net) Quit ()
[9:57] * scuttlemonkey (~scuttle@nat-pool-rdu-t.redhat.com) Quit (Quit: Coyote finally caught me)
[9:58] * sleinen (~Adium@vpn-ho-d-134.switch.ch) has joined #ceph
[10:00] * jspray (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[10:02] * dis (~dis@109.110.66.238) has joined #ceph
[10:03] * sleinen (~Adium@vpn-ho-d-134.switch.ch) Quit (Read error: Connection reset by peer)
[10:05] * kefu_ (~kefu@183.194.250.141) Quit (Max SendQ exceeded)
[10:05] * kefu (~kefu@183.194.250.141) has joined #ceph
[10:07] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:11] * vbellur (~vijay@122.171.181.56) Quit (Ping timeout: 480 seconds)
[10:18] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:22] * leseb- (~leseb@81-64-215-19.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[10:22] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[10:22] * Destreyf_ (~quassel@email.newagecomputers.info) Quit (Read error: Connection reset by peer)
[10:23] * jaank (~quassel@98.215.50.223) Quit (Read error: Connection reset by peer)
[10:23] * Destreyf (~quassel@email.newagecomputers.info) has joined #ceph
[10:23] * brutuscat (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) has joined #ceph
[10:24] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) Quit (Quit: Leaving)
[10:25] * karnan (~karnan@121.244.87.117) has joined #ceph
[10:26] * anadrom (~geegeegee@politkovskaja.torservers.net) has joined #ceph
[10:27] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[10:27] * kefu (~kefu@183.194.250.141) Quit (Read error: Connection reset by peer)
[10:27] * kefu (~kefu@183.194.250.141) has joined #ceph
[10:33] * jspray (~jspray@summerhall-meraki1.fluency.net.uk) has joined #ceph
[10:34] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[10:38] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[10:41] * bitserker1 (~toni@88.87.194.130) has joined #ceph
[10:41] * overclk (~overclk@117.202.110.202) has joined #ceph
[10:42] * treenerd (~treenerd@85.193.140.98) has joined #ceph
[10:45] * overclk (~overclk@117.202.110.202) Quit (Remote host closed the connection)
[10:47] * bitserker (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[10:48] * kawa2014 (~kawa@89.184.114.246) Quit (Remote host closed the connection)
[10:49] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[10:50] * dopesong_ (~dopesong@lb1.mailer.data.lt) has joined #ceph
[10:56] * anadrom (~geegeegee@9S0AAB87X.tor-irc.dnsbl.oftc.net) Quit ()
[10:56] * Snowcat4 (~tZ@chomsky.torservers.net) has joined #ceph
[10:56] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[11:00] * kefu_ (~kefu@183.194.250.141) has joined #ceph
[11:02] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[11:03] * kefu (~kefu@183.194.250.141) Quit (Ping timeout: 480 seconds)
[11:06] * kefu (~kefu@114.92.106.47) has joined #ceph
[11:09] * fmanana (~fdmanana@bl13-157-51.dsl.telepac.pt) has joined #ceph
[11:12] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[11:13] * kefu_ (~kefu@183.194.250.141) Quit (Ping timeout: 480 seconds)
[11:18] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[11:19] * kefu (~kefu@114.92.106.47) Quit (Max SendQ exceeded)
[11:20] * kefu (~kefu@114.92.106.47) has joined #ceph
[11:22] * vbellur (~vijay@121.244.87.124) has joined #ceph
[11:26] * Snowcat4 (~tZ@5NZAAE3YN.tor-irc.dnsbl.oftc.net) Quit ()
[11:26] * Azru (~pico@46.36.36.127) has joined #ceph
[11:28] * overclk (~overclk@117.202.110.202) has joined #ceph
[11:31] * overclk_ (~overclk@117.202.110.202) has joined #ceph
[11:31] * overclk (~overclk@117.202.110.202) Quit (Read error: Connection reset by peer)
[11:33] * RMar04 (~RMar04@5.153.255.226) has joined #ceph
[11:33] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[11:34] * shohn is now known as shohn_afk
[11:34] * treenerd (~treenerd@85.193.140.98) Quit (Quit: Leaving)
[11:34] <RMar04> Morning, having some very strange issues with a cache tier today. After enabling a new tier (which had been working before), all 5 of my SSDs (one per host) died simultaneously with: ./include/interval_set.h: 385: FAILED assert(_size >= 0). Has anyone seen this before?
[11:36] <RMar04> 'void interval_set<T>::erase(T, T) [with T = snapid_t]' thread 7f4deccac700 << looks like it might be snapshot related
[11:37] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[11:37] * rakesh (~rakesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[11:37] * overclk (~overclk@117.202.110.202) has joined #ceph
[11:38] * overclk_ (~overclk@117.202.110.202) Quit (Read error: Connection reset by peer)
[11:43] * Miouge_ (~Miouge@94.136.92.20) has joined #ceph
[11:46] * rakesh (~rakesh@121.244.87.117) has joined #ceph
[11:47] * rakesh (~rakesh@121.244.87.117) Quit ()
[11:47] * rakesh (~rakesh@121.244.87.117) has joined #ceph
[11:48] * Miouge (~Miouge@94.136.92.20) Quit (Ping timeout: 480 seconds)
[11:48] * Miouge_ is now known as Miouge
[11:56] * Azru (~pico@9S0AAB89R.tor-irc.dnsbl.oftc.net) Quit ()
[11:56] * csharp (~w2k@static-108-45-93-76.washdc.fios.verizon.net) has joined #ceph
[12:08] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[12:14] * overclk (~overclk@117.202.110.202) Quit (Remote host closed the connection)
[12:15] * overclk (~overclk@117.202.110.202) has joined #ceph
[12:18] * Debesis (~0x@64.160.140.82.mobile.mezon.lt) has joined #ceph
[12:19] * rdas (~rdas@121.244.87.116) has joined #ceph
[12:23] * kefu is now known as kefu|afk
[12:23] * kefu|afk (~kefu@114.92.106.47) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:24] * rakesh (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:26] * csharp (~w2k@7R2AACN1F.tor-irc.dnsbl.oftc.net) Quit ()
[12:26] * KapiteinKoffie (~FNugget@lumumba.torservers.net) has joined #ceph
[12:26] * rendar (~I@host87-186-dynamic.17-79-r.retail.telecomitalia.it) has joined #ceph
[12:31] * kefu (~kefu@114.92.106.47) has joined #ceph
[12:35] * rakesh (~rakesh@121.244.87.124) has joined #ceph
[12:36] * jks (~jks@178.155.151.121) has joined #ceph
[12:39] * rakesh (~rakesh@121.244.87.124) Quit ()
[12:40] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:41] * karnan (~karnan@121.244.87.117) has joined #ceph
[12:46] * rakesh (~rakesh@121.244.87.124) has joined #ceph
[12:47] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[12:55] * fam is now known as fam_away
[12:56] * KapiteinKoffie (~FNugget@9S0AAB9BE.tor-irc.dnsbl.oftc.net) Quit ()
[12:56] * Chaos_Llama (~AotC@176.10.99.207) has joined #ceph
[12:58] * fam_away is now known as fam
[12:59] * kutija (~kutija@daikatana.services.mint.rs) has joined #ceph
[12:59] * fam is now known as fam_away
[13:00] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[13:01] * joao (~joao@249.38.136.95.rev.vodafone.pt) Quit (Quit: Leaving)
[13:01] * kefu (~kefu@114.92.106.47) Quit (Max SendQ exceeded)
[13:01] * joao (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[13:01] * ChanServ sets mode +o joao
[13:02] * kefu (~kefu@114.92.106.47) has joined #ceph
[13:02] * jspray (~jspray@summerhall-meraki1.fluency.net.uk) Quit (Ping timeout: 480 seconds)
[13:03] * zhaochao (~zhaochao@111.161.77.233) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.1.0/20150711212448])
[13:03] * overclk (~overclk@117.202.110.202) Quit (Remote host closed the connection)
[13:05] * kutija|away (~kutija@95.180.90.38) Quit (Ping timeout: 480 seconds)
[13:06] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[13:09] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[13:10] * flisky (~Thunderbi@106.39.60.34) Quit (Ping timeout: 480 seconds)
[13:11] * shohn_afk is now known as shohn
[13:15] * fam_away is now known as fam
[13:16] * rlrevell (~leer@184.52.129.221) has joined #ceph
[13:17] * kutija_ (~kutija@daikatana.services.mint.rs) has joined #ceph
[13:19] * kutija__ (~kutija@95.180.90.38) has joined #ceph
[13:21] * jcsp (~jspray@149.254.235.64) has joined #ceph
[13:21] * kutija (~kutija@daikatana.services.mint.rs) Quit (Read error: Connection reset by peer)
[13:22] * kutija (~kutija@daikatana.services.mint.rs) has joined #ceph
[13:25] * overclk (~overclk@117.202.110.202) has joined #ceph
[13:25] * kutija_ (~kutija@daikatana.services.mint.rs) Quit (Ping timeout: 480 seconds)
[13:26] * Chaos_Llama (~AotC@9S0AAB9B4.tor-irc.dnsbl.oftc.net) Quit ()
[13:26] * Sami345 (~ylmson@relay-a.tor-exit.network) has joined #ceph
[13:26] * kefu (~kefu@114.92.106.47) Quit (Max SendQ exceeded)
[13:27] * davidz1 (~davidz@2605:e000:1313:8003:3467:2a51:6731:3ef7) has joined #ceph
[13:27] * kutija__ (~kutija@95.180.90.38) Quit (Ping timeout: 480 seconds)
[13:28] * davidz (~davidz@2605:e000:1313:8003:3467:2a51:6731:3ef7) Quit (Read error: Connection reset by peer)
[13:28] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:29] * kefu (~kefu@114.92.106.47) has joined #ceph
[13:34] * oro_ (~oro@84-73-73-158.dclient.hispeed.ch) Quit (Remote host closed the connection)
[13:34] * oro (~oro@84-73-73-158.dclient.hispeed.ch) Quit (Remote host closed the connection)
[13:34] * oro (~oro@84-73-73-158.dclient.hispeed.ch) has joined #ceph
[13:35] * RzH2000 (~administr@155.48.45.31.customer.cdi.no) has joined #ceph
[13:37] <RzH2000> Hi everybody. I have too many pgs (6272): 12 incomplete and 5 down+incomplete. This blocks the mds from starting; the pgs are in the metadata and data pools.
[13:39] <RzH2000> However, the data seems to be resident on non-participating pgs. Moving it with ceph-objectstore-tool does not remove the incomplete status
[13:40] <RzH2000> Suggestions?
[13:45] * linjan (~linjan@176.195.234.139) Quit (Ping timeout: 480 seconds)
[13:47] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[13:54] * linjan (~linjan@176.195.234.139) has joined #ceph
[13:56] * Sami345 (~ylmson@5NZAAE33Y.tor-irc.dnsbl.oftc.net) Quit ()
[13:56] * Mraedis (~DJComet@tor-exit.ethanro.se) has joined #ceph
[13:56] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[13:58] * kefu (~kefu@114.92.106.47) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:04] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (Quit: Leaving...)
[14:05] <tuxcrafter> how can i list the running config of ceph
[14:05] <tuxcrafter> i want to be sure the osd pool default size went to 1
[14:05] <tuxcrafter> after changing ceph.conf and restarting one node
[14:05] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[14:05] * baotiao (~baotiao@218.30.116.10) Quit (Quit: baotiao)
[14:06] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[14:07] <jcsp> tuxcrafter: ceph daemon mon.<your mon id> config show
[14:07] <jcsp> to query an individual daemon's config
[14:08] <jcsp> to reliably pick up the new setting you will need to make sure that all your mons have it
[14:08] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:09] <RzH2000> I suggest the documentation, e.g. ceph osd pool get {pool-name} {key} from http://ceph.com/docs/master/rados/operations/pools/
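(E.g., to check both the daemon's default and what an existing pool actually uses; mon id and pool name are placeholders:)

    ceph daemon mon.node1 config show | grep osd_pool_default_size
    ceph osd pool get rbd size    # existing pools keep their own size
    ceph osd pool set rbd size 1  # the default only applies to newly created pools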
[14:09] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[14:14] <zenpac> Does anyone have a JSON sample that contains all the states of PGs as in https://marcosmamorim.files.wordpress.com/2014/07/calamari_workbench_with_pg_state_filtering.png ?
[14:15] * alkaid (~alkaid@2001:da8:d800:741:ae9e:17ff:fe3f:9be7) has joined #ceph
[14:16] <alkaid> is anyone here? need help :-(
[14:16] * jcsp (~jspray@149.254.235.64) Quit (Ping timeout: 480 seconds)
[14:18] * ganders (~root@190.2.42.21) has joined #ceph
[14:18] * kefu (~kefu@183.194.250.141) has joined #ceph
[14:21] <zenpac> Please state the nature and details of your situation, with any relevant information. If someone can help, they will respond.
[14:22] * kefu_ (~kefu@114.92.106.47) has joined #ceph
[14:26] * Mraedis (~DJComet@7R2AACN44.tor-irc.dnsbl.oftc.net) Quit ()
[14:26] * tuhnis (~Dinnerbon@enjolras.gtor.org) has joined #ceph
[14:29] <BranchPredictor> http://paste2.org/ca16O9G2
[14:29] * owasserm_ (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[14:29] <alkaid> well, I want to know how the ceph kernel client works on kernel 2.6.32, though ceph doesn't recommend such an old kernel.
[14:29] <BranchPredictor> can someone explain wtf is happening here and how to revert it?
[14:29] * owasserm_ (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit ()
[14:29] * kefu (~kefu@183.194.250.141) Quit (Ping timeout: 480 seconds)
[14:30] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[14:30] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[14:32] * jcsp (~jspray@summerhall-meraki1.fluency.net.uk) has joined #ceph
[14:32] * jcsp (~jspray@summerhall-meraki1.fluency.net.uk) Quit ()
[14:32] * jcsp (~jspray@summerhall-meraki1.fluency.net.uk) has joined #ceph
[14:33] * yanzheng (~zhyan@182.139.207.212) Quit (Quit: This computer has gone to sleep)
[14:33] * kefu_ (~kefu@114.92.106.47) Quit (Max SendQ exceeded)
[14:35] <theanalyst> jcsp: what happens when one of the mons ends up having a config different than the others?
[14:36] * kefu (~kefu@114.92.106.47) has joined #ceph
[14:36] <jcsp> theanalyst: whichever mon is the leader, their config is what's used
[14:37] <jcsp> so if the leader changed, you'd spontaneously get different config settings -- to remain sane, update them all at the same time :-)
[14:37] <jcsp> config is completely local to a daemon
[14:38] * fam is now known as fam_away
[14:39] <BranchPredictor> ok, I think I got it
[14:40] * kefu is now known as kefu|afk
[14:40] <BranchPredictor> I changed pg_num and pgp_num on a pool while it had objects in it, then removed all of those objects
[14:40] * max2222 (~max@asa01.comparegroup.eu) has joined #ceph
[14:40] <theanalyst> jcsp: thanks! yeah, we once ran into a weird situation because one of the mons had a config unset while the others had it.. (this was caused by our orchestration tool)
[14:40] * kefu|afk (~kefu@114.92.106.47) Quit ()
[14:40] <tuxcrafter> okay i still got 10 incomplete pgs
[14:41] <tuxcrafter> i want to remove them and lose the data
[14:41] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[14:41] * scuttle|afk is now known as scuttlemonkey
[14:41] * scuttlemonkey (~scuttle@nat-pool-rdu-t.redhat.com) has left #ceph
[14:41] <tuxcrafter> i removed two rbd pools but it did not make a change
[14:41] * scuttlemonkey (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[14:41] * ChanServ sets mode +o scuttlemonkey
[14:42] <tuxcrafter> also, is the "rbd" a default pool that is needed for ceph?
[14:42] <tuxcrafter> like data and metadata
[14:46] * yanzheng (~zhyan@182.139.207.212) has joined #ceph
[14:46] * spinoshi (~spinoshi@static-94-32-127-224.clienti.tiscali.it) has joined #ceph
[14:46] * spinoshi (~spinoshi@static-94-32-127-224.clienti.tiscali.it) Quit ()
[14:47] * rakesh (~rakesh@121.244.87.124) Quit (Quit: Leaving)
[14:48] <max2222> Hi! I'm new to ceph (using it for a week now). It all worked fine until today. Currently I'm unable to map an rbd. is this something that someone here could help me with please? doing `rbd ls` lists the only image we have, but doing `rbd map backup -p rbd` just sits there for over an hour. We have 2 hosts with osd's and one other host as an rbd client on which I run the map. no other hosts have the image mapped.
[14:48] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[14:49] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:49] * dneary (~dneary@AGrenoble-651-1-459-149.w82-122.abo.wanadoo.fr) has joined #ceph
[14:50] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[14:51] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[14:51] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[14:51] <zenpac> Will clusters eventually support multiple CephFS's ?
[14:52] <jcsp> zenpac: probably. I'm keen on implementing that when we have time.
[14:52] <tuxcrafter> jcsp: thx that worked
[14:53] * kefu (~kefu@183.194.250.141) has joined #ceph
[14:53] <zenpac> jcsp: are you one of the developers?
[14:54] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[14:54] <jcsp> zenpac: yep
[14:55] <zenpac> jcsp: thanks.. I'm building a representational model of Cephfs for our Zenoss Ceph zenpack, and I try to make the model as forward-looking as possible.
[14:56] * tuhnis (~Dinnerbon@5NZAAE36F.tor-irc.dnsbl.oftc.net) Quit ()
[14:56] * danielsj (~dux0r@89.105.194.85) has joined #ceph
[14:56] <tuxcrafter> http://paste.debian.net/282565/
[14:56] <zenpac> jcsp: for example: http://yuml.me/edit/f7d9ff6a represents the model for monitoring purposes. Its much simpler than reality.
[14:56] <tuxcrafter> should i rebuild the whole cluster to get an OK state again
[14:57] <tuxcrafter> and lose all data, or can i tell ceph to destroy or recreate the bad pgs and lose only those parts
[14:57] * yanzheng (~zhyan@182.139.207.212) Quit (Quit: This computer has gone to sleep)
[14:57] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[14:58] <zenpac> tuxcrafter: is your OSD 5 damaged?
[14:58] * kevinperks (~Adium@2606:a000:80ad:1300:94bc:1bfb:5248:54d4) has joined #ceph
[14:59] <tuxcrafter> zenpac: it was
[14:59] <via> radosgw-admin shows the one bucket i have to store about 400k objects taking up about 1.2 GB of data, which seems about right
[15:00] <via> but rados df says .rgw.buckets has almost 500G of data
[15:00] <tuxcrafter> zenpac: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003027.html
[15:00] <tuxcrafter> zenpac: i set the pool and min size to 1 now
[15:00] <via> a radosgw gc process doesn't decrease that much
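(For comparing the two views and nudging garbage collection along; the bucket name is a placeholder:)

    radosgw-admin bucket stats --bucket=mybucket   # per-bucket usage as rgw sees it
    rados df                                       # raw usage per pool
    radosgw-admin gc list                          # objects still awaiting collection
    radosgw-admin gc process                       # run a collection pass now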
[15:01] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:02] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:04] <jcsp> zenpac: interesting. are the dotted lines meant to be "uses" relationships? If so you would also have a relationship between MDS and OSDs/mons
[15:05] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:06] <jcsp> also you might want a self-relationship on the Pool entity to represent cache tiering
[15:06] * danieagle (~Daniel@187.34.2.170) has joined #ceph
[15:06] <jcsp> depends how complicated you want to get
[15:06] <jcsp> not sure what CephDevice is, a zenoss thing?
[15:07] * dis (~dis@109.110.66.238) Quit (Read error: Connection reset by peer)
[15:08] <jcsp> with "CephFS" you might be conflating the logical filesystem and the physical filesystem client?
[15:08] <jcsp> or I'm just unsure which you mean
[15:09] <jcsp> ceph-devel mailing list might be interested in giving opinions on this too
[15:09] <tuxcrafter> jcsp: could you have a look at my pastebin and email as well?
[15:10] <tuxcrafter> i read the docu http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
[15:10] <jcsp> tuxcrafter: sorry, I'm not the right person for debugging damaged PGs
[15:10] * kefu is now known as kefu|afk
[15:10] <tuxcrafter> but it doesnt help much
[15:10] * overclk (~overclk@117.202.110.202) Quit (Remote host closed the connection)
[15:10] <tuxcrafter> jcsp: okay thx
[15:11] * kefu|afk (~kefu@183.194.250.141) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:14] * alkaid (~alkaid@2001:da8:d800:741:ae9e:17ff:fe3f:9be7) Quit (Quit: Leaving)
[15:18] <max2222> anyone have an idea what to do when 'rbd map' hangs for ever?
[15:18] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[15:18] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[15:22] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) has joined #ceph
[15:23] * avib (~Ceph@al.secure.elitehosts.com) Quit (Ping timeout: 480 seconds)
[15:23] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[15:24] * yanzheng (~zhyan@182.139.207.212) has joined #ceph
[15:25] <zenpac> jcsp: good point. Yes, the dotted lines are relationship lines, like 'has' or 'contains'. We distinguish between containing and non-containing relationships. Solid lines are for containing rels..
[15:25] <zenpac> jcsp: I was thinking about MDS-OSD-Mon relations too.. Its a valid point
[15:26] <zenpac> And I should have those as well!
[15:26] * danielsj (~dux0r@5NZAAE378.tor-irc.dnsbl.oftc.net) Quit ()
[15:26] * Eric (~Nijikokun@anonymous6.sec.nl) has joined #ceph
[15:27] <zenpac> I wanted to put the MDS in the same group as OSD/Mon for organizational reasons, so I forgot those other rels..
[15:27] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[15:28] * kefu (~kefu@183.194.250.141) has joined #ceph
[15:29] * kefu is now known as kefu|afk
[15:30] * primechuck (~primechuc@173-17-128-216.client.mchsi.com) Quit (Remote host closed the connection)
[15:34] * DV (~veillard@2001:41d0:1:d478::1) Quit (Quit: Leaving)
[15:35] * oro_ (~oro@84-73-73-158.dclient.hispeed.ch) has joined #ceph
[15:38] * kefu|afk (~kefu@183.194.250.141) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:40] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[15:40] * capri_oner (~capri@212.218.127.222) has joined #ceph
[15:41] <tuxcrafter> i want to safely shut down a ceph cluster and i currently use
[15:41] <tuxcrafter> ceph osd set noout
[15:42] <tuxcrafter> then turn off all nodes
[15:42] <tuxcrafter> but when i watch the process i often see rebalancing and rebuilding going on
[15:42] <tuxcrafter> also when i turn the nodes back on, it starts to rebuild
[15:42] <tuxcrafter> am i doing this right
[15:42] <tuxcrafter> ceph osd set noout
[15:42] * yanzheng (~zhyan@182.139.207.212) Quit (Quit: This computer has gone to sleep)
[15:42] <tuxcrafter> http://ceph.com/docs/argonaut/init/stop-cluster/
[15:42] <tuxcrafter> this was the documentation i could find
[15:43] <RzH2000> Have you considered ceph osd set nodown?
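(A common sequence is roughly the following sketch:)

    ceph osd set noout     # don't rebalance when OSDs drop out
    ceph osd set nodown    # optionally also suppress down-flapping
    # ...stop the daemons / power off the nodes; after powering back on:
    ceph osd unset nodown
    ceph osd unset noout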
[15:45] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[15:46] <BranchPredictor> tuxcrafter: http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/
[15:46] * shaunm (~shaunm@74.215.76.114) Quit (Quit: Ex-Chat)
[15:47] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[15:47] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[15:47] * capri_on (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[15:47] * kawa2014 (~kawa@2001:67c:1560:8007::aac:c1a6) has joined #ceph
[15:48] <tuxcrafter> BranchPredictor: yes i read that page and just posted it, do you know what section may be relevant
[15:49] * kanagaraj (~kanagaraj@27.7.35.147) has joined #ceph
[15:49] <BranchPredictor> oh, didn't notice that, sorry
[15:50] <tuxcrafter> BranchPredictor: no problem, thx for trying to help
[15:50] <BranchPredictor> anyway, try ceph pg repair
[15:50] <tuxcrafter> i've got another question: i use an ssd for journalling and someone asked me to redo some benchmark tests with write cache enabled
[15:50] <BranchPredictor> or http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#unfound-objects
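(I.e., with a placeholder pg id:)

    ceph pg repair 2.1f                      # ask the primary to repair the pg
    ceph pg 2.1f mark_unfound_lost revert    # applies to unfound objects only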
[15:51] <tuxcrafter> but how can I do this without destroying ceph data
[15:51] <tuxcrafter> would be interesting to have a benchmark partition of some sort in place
[15:51] <zenpac> jcsp: The major part of our zenpack model is to figure out what is related to what, make objects for those, and then link them all together. Containment is sometimes a nuisance, but it helps with cleanup. We create "Impact" models based on the overall model..
[15:52] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:52] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[15:53] <tuxcrafter> BranchPredictor: yes i tried looking at the unfound-objects, but I have incomplete objects and mark_unfound_lost works for unfound
[15:54] <tuxcrafter> i could not find the documentation around mark_unfound_lost to see if there is an option for incomplete objects
[15:56] <Be-El> tuxcrafter: if you have incomplete pgs, you should first check whether any complete instance still exists
[15:56] * Eric (~Nijikokun@7R2AACN8L.tor-irc.dnsbl.oftc.net) Quit ()
[15:56] * Frymaster (~raindog@tortest.ip-eend.nl) has joined #ceph
[15:56] <Be-El> tuxcrafter: ceph pg <id> query lists the known instances in the peers section; each peer has its own state
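(For example, with a placeholder pg id:)

    ceph pg 2.1f query | less
    # inspect the "peer_info" entries: each peer reports its own "state" and
    # "last_update"; a peer with a complete copy is what recovery needs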
[15:59] <jcsp> zenpac: for modelling impact, you probably want to include the relations between a filesystem and its pools, so that you can infer that an unavailable or damaged PG does or doesn't affect an FS
[15:59] <jcsp> same for RBD
[15:59] * kevinperks (~Adium@2606:a000:80ad:1300:94bc:1bfb:5248:54d4) Quit (Quit: Leaving.)
[15:59] * kevinperks (~Adium@2606:a000:80ad:1300:94bc:1bfb:5248:54d4) has joined #ceph
[16:00] <jcsp> +possibly the relation between a pool and a crush rule, so that you can infer that a particular OSD being offline does or doesn't affect a pool (though that's arguably redundant if you're already tracking the relation between a PG and its pool)
[16:00] <jcsp> not sure how zenoss handles data, but you may want to think twice about including PGs as first class parts of your model, as there can be millions
[16:01] <jcsp> (mostly the operational health information makes sense at a pool level, only need to know which PGs are bad once you're working at the ceph level to fix something)
[16:02] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Ping timeout: 480 seconds)
[16:02] * shohn is now known as shohn_afk
[16:03] * kefu (~kefu@183.194.250.141) has joined #ceph
[16:04] * bene2 (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[16:05] * kutija_ (~kutija@95.180.90.38) has joined #ceph
[16:05] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:06] * dyasny (~dyasny@104.158.33.70) Quit (Ping timeout: 480 seconds)
[16:06] * kefu_ (~kefu@114.92.106.47) has joined #ceph
[16:07] * kefu (~kefu@183.194.250.141) Quit (Read error: No route to host)
[16:07] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[16:08] * kefu (~kefu@114.92.106.47) has joined #ceph
[16:09] * kefu_ (~kefu@114.92.106.47) Quit (Read error: Connection reset by peer)
[16:10] * cooldharma06 (~chatzilla@14.139.180.40) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 21.0/20130515140136])
[16:10] * kevinperks (~Adium@2606:a000:80ad:1300:94bc:1bfb:5248:54d4) Quit (Quit: Leaving.)
[16:10] * sage (~quassel@2607:f298:6050:709d:f964:59d1:2cb7:7df8) Quit (Remote host closed the connection)
[16:10] <zenpac> jcsp: So it sounds like I should only track Pool objects and forget PGs? Perhaps some Pool-CRUSH-rule relations?
[16:11] * sage (~quassel@2607:f298:6050:709d:1dca:9138:6630:167a) has joined #ceph
[16:11] * ChanServ sets mode +o sage
[16:11] * kutija (~kutija@daikatana.services.mint.rs) Quit (Ping timeout: 480 seconds)
[16:11] * kevinperks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[16:12] <zenpac> Even trying to process the PG data could be expensive when you have 10^5+ PGs...
[16:13] <zenpac> If I could get detailed health info from the Calamari API, I might be able to avoid a lot of excess data collections.
[16:13] <zenpac> I meant "detailed Pool health" info above.
[16:13] * Guest183 (~abradshaw@tmo-098-193.customers.d1-online.com) Quit (Quit: Too sexy for his shirt)
[16:14] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[16:15] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[16:16] * championofcyrodi (~championo@50-205-35-98-static.hfc.comcastbusiness.net) has joined #ceph
[16:19] <tuxcrafter> Be-El: http://paste.debian.net/282590/ both peer states are inactive
[16:19] * vata1 (~vata@207.96.182.162) has joined #ceph
[16:20] * kefu is now known as kefu|afk
[16:20] <Be-El> tuxcrafter: peer 3 contains the data objects at least
[16:21] <Be-El> tuxcrafter: or some of them
[16:21] * kefu|afk (~kefu@114.92.106.47) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:21] * kanagaraj (~kanagaraj@27.7.35.147) Quit (Quit: Leaving)
[16:23] <[arx]> hm, i have 54 OSDs running on drives that can do 180MB/s sequential reads/writes, along with ssd journals that can do 130MB/s sequential reads/writes. yet when i try to benchmark the cluster with fio or rados bench, i top out at around 10MB/s.
[16:23] <[arx]> the cluster is on a 10Gb network, which i can do 10Gbps with iperf. not sure where the bottleneck is.
[16:24] <Be-El> [arx]: which io size do you use for benchmarking?
[16:24] <[arx]> 4k, 8k, 16k, and 1M
[16:24] <Be-El> and 1M benchmarks also have only 10MB/s?
[16:25] <[arx]> yea :{
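(For reference, a typical large-block rados bench run; pool name and parameters are placeholders:)

    rados bench -p testpool 30 write -b 4194304 -t 16 --no-cleanup   # 30s, 4M objects, 16 in flight
    rados bench -p testpool 30 seq -t 16                             # read back what was written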
[16:26] * Frymaster (~raindog@5NZAAE4BD.tor-irc.dnsbl.oftc.net) Quit ()
[16:26] <Be-El> does atop/htop show any anomaly on the osd hosts?
[16:27] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[16:29] * jbautista- (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:29] <[arx]> they all seem pretty idle to me.
[16:30] <Be-El> [arx]: some weeks ago there was a thread on the mailing list describing a similar problem, but i remember neither the cause nor the solution
[16:30] <[arx]> i'll go look for it, thanks.
[16:32] * georgem (~Adium@fwnat.oicr.on.ca) Quit ()
[16:32] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:36] * kefu (~kefu@114.92.106.47) has joined #ceph
[16:40] * tupper (~tcole@173.38.117.65) has joined #ceph
[16:40] * zenpac (~zenpac3@66.55.33.66) Quit (Quit: Leaving)
[16:41] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) has joined #ceph
[16:42] * daviddcc (~dcasier@LAubervilliers-656-1-16-164.w217-128.abo.wanadoo.fr) has joined #ceph
[16:43] * branto (~branto@213.175.37.10) Quit (Ping timeout: 480 seconds)
[16:43] * shohn_afk is now known as shohn
[16:44] * moore (~moore@64.202.160.88) has joined #ceph
[16:45] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:45] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:46] * dneary (~dneary@AGrenoble-651-1-459-149.w82-122.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[16:46] * yguang11 (~yguang11@c-50-131-146-113.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[16:47] <markl> morning
[16:49] * kevinperks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[16:52] * kefu (~kefu@114.92.106.47) Quit (Max SendQ exceeded)
[16:52] * kevinperks (~Adium@2606:a000:80ad:1300:b572:3fdc:db4b:2ce0) has joined #ceph
[16:53] * kefu (~kefu@114.92.106.47) has joined #ceph
[16:54] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[16:56] * visored (~rogst@relay-a.tor-exit.network) has joined #ceph
[16:56] * overclk (~overclk@117.202.110.202) has joined #ceph
[16:59] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[17:00] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[17:01] * kefu_ (~kefu@183.194.250.141) has joined #ceph
[17:04] * dopesong_ (~dopesong@lb1.mailer.data.lt) Quit (Ping timeout: 480 seconds)
[17:04] <tuxcrafter> Be-El: any advice on what to do next?
[17:04] <Be-El> tuxcrafter: no, sorry. the last time i had an incomplete pg i had at least one full copy
[17:06] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[17:06] <tuxcrafter> Be-El: i'm okay with destroying the data
[17:06] <tuxcrafter> but i don't know how to do that so that the cluster gets into a healthy state again
[17:07] * kefu (~kefu@114.92.106.47) Quit (Ping timeout: 480 seconds)
[17:08] <Amto_res> Hello,
[17:09] <Amto_res> Is the versioning system present in version 0.94.2-1trusty (Hammer)?
[17:09] * adrian15b (~kvirc@btactic.ddns.jazztel.es) has joined #ceph
[17:09] * CheKoLyN (~saguilar@bender.parc.xerox.com) has joined #ceph
[17:10] <adrian15b> How about reducing pgp_num? Can anyone expand on http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-May/001307.html? I am about to decrease pgp_num, but I'm not sure whether the suggestion that pg_num and pgp_num be equal is important or not. Thank you.
[17:11] * daviddcc (~dcasier@LAubervilliers-656-1-16-164.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[17:11] <doppelgrau> tuxcrafter: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-September/033625.html < ceph pg force_create_pg <pgid>
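The usual last-resort sequence behind that suggestion, sketched under the assumption that the PG's data is expendable (force_create_pg recreates the PG empty):

    # list PGs stuck inactive, and the PGs named in the health report
    ceph pg dump_stuck inactive
    ceph health detail | grep incomplete
    # recreate a given PG as empty -- this abandons whatever data it held
    ceph pg force_create_pg 3.c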
[17:13] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[17:14] * branto (~branto@ip-213-220-232-132.net.upcbroadband.cz) has joined #ceph
[17:16] * jordanP (~jordan@213.215.2.194) has joined #ceph
[17:17] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (Quit: Leaving.)
[17:19] * flisky (~Thunderbi@123.151.175.167) has joined #ceph
[17:20] <adrian15b> Let me ask the question another way. Having decreased pgp_num to a lower value... will the pool perform as well as if it had been created with that lower value in the first place?
[17:20] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:21] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[17:22] * linuxkidd (~linuxkidd@166.170.29.33) has joined #ceph
[17:22] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:23] <jcsp> adrian15b: no. If you decrease pgp_num, all that happens is some data moves around. You still consume all the same resources, just less uniformly across OSDs
[17:23] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:24] <jcsp> zenpac: (from an hour ago, scrolling back) well, it's always ideal to have the maximum amount of information, but unless your tool really does anything with individual PGs, your life will be easier speaking in terms of pools
[17:25] <jcsp> I suppose the downside is that you lose the ability to notice, for example, that if 3 PGs are failing in one pool, they might all be on the same OSD
[17:26] * visored (~rogst@9S0AAB9M2.tor-irc.dnsbl.oftc.net) Quit ()
[17:26] * Pulec (~Quatrokin@r1.geoca.st) has joined #ceph
[17:28] * shohn is now known as shohn_afk
[17:29] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[17:29] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[17:30] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:31] * RMar04 (~RMar04@5.153.255.226) has left #ceph
[17:31] <adrian15b> jcsp: So if I decrease pgp_num I won't be able to decrease PGs per OSD from 341 to less than 300 (as suggested), then?
[17:32] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[17:35] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:37] * yguang11 (~yguang11@66.228.162.44) has joined #ceph
[17:37] * dneary (~dneary@AGrenoble-651-1-459-149.w82-122.abo.wanadoo.fr) has joined #ceph
[17:38] * zenpac (~zenpac3@66.55.33.66) has left #ceph
[17:38] * masteroman (~ivan@93-139-166-7.adsl.net.t-com.hr) has joined #ceph
[17:38] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[17:39] * masteroman (~ivan@93-139-166-7.adsl.net.t-com.hr) Quit ()
[17:40] <jcsp> adrian15b: you cannot decrease the number of PGs, no.
[17:40] * zenpac (~zenpac3@66.55.33.66) Quit ()
[17:40] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[17:41] <adrian15b> jcsp: Just to be clear, I'm not interested in decreasing the pool's number of PGs but each OSD's number of PGs. I can't do that then anyway, can I?
[17:42] <jcsp> the PGs have to go somewhere. The number of PGs on each OSD is the total divided by the number of OSDs.
[17:42] <jcsp> you can add OSDs.
[17:43] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[17:43] <jcsp> you can change the threshold that generates the warning about too many per OSD
[17:43] <adrian15b> jcsp: Well, I'd rather gain performance than hide the warning
[17:43] <adrian15b> jcsp: When you say "total PGs" you mean pg_num, not pgp_num, right?
[17:44] <jcsp> pg_num is the number of PGs, yes. pgp_num is not the number of anything, it is a setting that controls placement of PGs.
[17:44] <jcsp> seriously, there is not a loophole here, you cannot decrease the number of PGs in a pool.
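To make the one-way nature of this concrete, a sketch with a hypothetical pool named data:

    # raising works: pg_num first, then pgp_num to match
    ceph osd pool set data pg_num 512
    ceph osd pool set data pgp_num 512
    # lowering pg_num is rejected by the monitors; the only way down
    # is to copy the data into a new pool created with fewer PGs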
[17:45] <adrian15b> Ok, as I cannot add new OSDs (budget), I think I will re-create the pool now, while we are still at the beginning.
[17:45] * kawa2014 (~kawa@2001:67c:1560:8007::aac:c1a6) Quit (Ping timeout: 480 seconds)
[17:45] <adrian15b> jcsp: Thank you very much!
[17:45] <jcsp> np
[17:45] <adrian15b> So I guess the advice at http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-May/001307.html was not good advice after all.
[17:47] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[17:48] <georgem> adrian15b: you could create a new pool with fewer PGs, then copy the objects over, delete the old pool, and rename the new one (never tried this though)
[17:48] <SpaceDump> ppl, calamari, http 500. Any fixes for that? (seems to be some sort of known issue at least)
[17:50] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:51] * zenpac (~zenpac3@66.55.33.66) has left #ceph
[17:51] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[17:51] <adrian15b> georgem: It might make sense, yes. I would probably copy the data manually, not object by object, but yes.
[17:52] <adrian15b> georgem: I think I have plenty of space for creating a new pool. I guess I will have to be careful with free space and that's it.
[17:54] * kefu_ (~kefu@183.194.250.141) Quit (Read error: Connection reset by peer)
[17:55] * kefu (~kefu@183.194.250.141) has joined #ceph
[17:56] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[17:56] * Pulec (~Quatrokin@5NZAAE4GL.tor-irc.dnsbl.oftc.net) Quit ()
[17:56] * puvo (~jakekosbe@176.10.99.208) has joined #ceph
[17:57] * kevinperks (~Adium@2606:a000:80ad:1300:b572:3fdc:db4b:2ce0) Quit (Quit: Leaving.)
[17:59] * vbellur (~vijay@122.171.181.56) has joined #ceph
[17:59] * kevinperks (~Adium@2606:a000:80ad:1300:d98:44fb:a84a:9976) has joined #ceph
[18:02] * kefu (~kefu@183.194.250.141) Quit (Read error: Connection reset by peer)
[18:02] * kefu (~kefu@183.194.250.141) has joined #ceph
[18:04] * logan (~a@63.143.49.103) Quit (Ping timeout: 480 seconds)
[18:05] * xarses_ (~xarses@12.164.168.117) has joined #ceph
[18:07] * daviddcc (~dcasier@LAubervilliers-656-1-16-164.w217-128.abo.wanadoo.fr) has joined #ceph
[18:09] <georgem> adrian15b: this is where I read about it: http://cephnotes.ksperis.com/blog/2015/04/15/ceph-pool-migration
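The migration described at that link boils down to copy-and-rename; a rough sketch with hypothetical pool names (clients must be quiesced while copying, and the new PG count is fixed at creation):

    ceph osd pool create newpool 128
    rados cppool oldpool newpool
    ceph osd pool delete oldpool oldpool --yes-i-really-really-mean-it
    ceph osd pool rename newpool oldpool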
[18:09] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[18:09] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[18:10] <zenpac> jcsp: It sounds like I really need to focus on Pool-level health. I'm not quite sure how to get an accurate picture, though.
[18:11] * adeel_ (~adeel@fw1.ridgeway.scc-zip.net) has joined #ceph
[18:11] * Nacer (~Nacer@2001:41d0:fe82:7200:536:514e:3984:11bc) has joined #ceph
[18:12] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[18:12] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[18:12] <zenpac> Even if I just try to scan for statistics on PGs, millions of records could cause a performance hit on our servers.
[18:14] <jcsp> recently ls-by-pool, ls-by-state etc were added
[18:14] <jcsp> to help with this, but I'm not sure if anyone actually added the "PG counts by state for each pool" output format
[18:16] <jcsp> with a lot of this stuff, it's not a given that there will be an existing command that does what you want, but that doesn't mean you can't add one in mon/PGMonitor.cc
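The listing commands jcsp mentions take a pool (and optionally states) directly; for example, assuming a pool named rbd:

    # all PGs belonging to one pool
    ceph pg ls-by-pool rbd
    # only that pool's PGs in a given state
    ceph pg ls-by-pool rbd incomplete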
[18:16] * kefu (~kefu@183.194.250.141) Quit (Max SendQ exceeded)
[18:17] * adeel (~adeel@2602:ffc1:1:face:c114:570:7cfa:60a0) Quit (Ping timeout: 480 seconds)
[18:17] * kefu (~kefu@183.194.250.141) has joined #ceph
[18:17] * flisky (~Thunderbi@123.151.175.167) Quit (Ping timeout: 480 seconds)
[18:18] * An_T_oine (~Antoine@192.93.37.4) Quit (Ping timeout: 480 seconds)
[18:19] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:19] * mgolub (~Mikolaj@91.225.201.69) has joined #ceph
[18:20] * jordanP (~jordan@213.215.2.194) Quit (Remote host closed the connection)
[18:26] * puvo (~jakekosbe@5NZAAE4H9.tor-irc.dnsbl.oftc.net) Quit ()
[18:26] * Knuckx (~Inuyasha@tortest.ip-eend.nl) has joined #ceph
[18:27] <adrian15b> I'm looking at: http://ceph.com/docs/master/rados/operations/pools/ but I cannot find a way to get the type (erasure or replicated) of a pool. Is there any command for that purpose? Thank you.
[18:27] <Amto_res> adrian15b: ceph osd dump |grep replic
[18:28] <adrian15b> Amto_res: Thank you
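A slightly more explicit variant of that grep: each pool line in ceph osd dump names its type, so matching either keyword shows every pool's type at once.

    # pool lines look like: pool 3 'data' replicated size 3 ...
    ceph osd dump | grep -E "replicated|erasure"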
[18:30] <zenpac> jcsp: For MDS, since they really rely on the Mons/OSDs, perhaps I should not have a direct relation to CephCluster.
[18:30] <adrian15b> georgem: That document implies that the new pool is of erasure type, not replicated type. Just in case it helps you in the future.
[18:33] * jcsp (~jspray@summerhall-meraki1.fluency.net.uk) Quit (Ping timeout: 480 seconds)
[18:35] * bitserker1 (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[18:36] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[18:38] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) has joined #ceph
[18:38] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[18:41] * dis (~dis@109.110.66.238) has joined #ceph
[18:42] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit ()
[18:47] * nardial (~ls@dslb-088-076-092-027.088.076.pools.vodafone-ip.de) Quit (Quit: Leaving)
[18:49] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[18:49] * kefu (~kefu@183.194.250.141) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:50] * reed (~reed@2607:f298:a:607:20ec:8c35:e216:1068) has joined #ceph
[18:50] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[18:52] * kefu (~kefu@114.92.106.47) has joined #ceph
[18:52] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[18:52] * kevinperks (~Adium@2606:a000:80ad:1300:d98:44fb:a84a:9976) Quit (Quit: Leaving.)
[18:54] * kevinperks (~Adium@2606:a000:80ad:1300:3d2d:3777:f85:d2a7) has joined #ceph
[18:55] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[18:56] * Knuckx (~Inuyasha@7R2AACOHA.tor-irc.dnsbl.oftc.net) Quit ()
[18:56] * SquallSeeD31 (~anadrom@5NZAAE4KZ.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:57] * daviddcc (~dcasier@LAubervilliers-656-1-16-164.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[18:57] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit ()
[18:59] * nhm (~nhm@184-97-242-33.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[19:00] * An_T_oine (~Antoine@ARennes-655-1-174-163.w92-139.abo.wanadoo.fr) has joined #ceph
[19:02] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[19:05] * overclk (~overclk@117.202.110.202) Quit (Remote host closed the connection)
[19:05] * An__T__oine (~Antoine@ARennes-655-1-189-192.w2-13.abo.wanadoo.fr) has joined #ceph
[19:05] * overclk (~overclk@117.202.110.202) has joined #ceph
[19:06] * overclk (~overclk@117.202.110.202) Quit ()
[19:07] * oro (~oro@84-73-73-158.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:08] * oro_ (~oro@84-73-73-158.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:09] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Quit: Ex-Chat)
[19:10] * An_T_oine (~Antoine@ARennes-655-1-174-163.w92-139.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[19:10] * logan (~a@63.143.49.103) has joined #ceph
[19:10] * branto (~branto@ip-213-220-232-132.net.upcbroadband.cz) has left #ceph
[19:12] * kefu is now known as kefu|afk
[19:12] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:12] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:15] <zenpac> Can I have a pool with an RBD image in it as well as other data? Are rbd-pools exclusive to RBD?
[19:17] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[19:18] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[19:25] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:25] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:26] * SquallSeeD31 (~anadrom@5NZAAE4KZ.tor-irc.dnsbl.oftc.net) Quit ()
[19:26] * curtis864 (~dux0r@89.105.194.83) has joined #ceph
[19:26] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) has joined #ceph
[19:26] * loth (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[19:27] * brutuscat (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:28] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[19:29] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[19:33] * Concubidated (~Adium@66.87.64.214) has joined #ceph
[19:35] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[19:36] * kefu|afk (~kefu@114.92.106.47) Quit (Max SendQ exceeded)
[19:37] * kefu (~kefu@114.92.106.47) has joined #ceph
[19:39] <gleam> you can put whatever you want in any pool
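A quick way to see that in practice, with hypothetical object and image names in the default rbd pool:

    # an RBD image and a plain RADOS object coexisting in one pool
    rbd create -p rbd testimg --size 1024
    echo hello > /tmp/obj.txt
    rados -p rbd put my-raw-object /tmp/obj.txt
    rados -p rbd ls | head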
[19:39] * jschmid (~jxs@ip9234d57a.dynamic.kabel-deutschland.de) has joined #ceph
[19:39] * bitserker (~toni@188.87.126.67) has joined #ceph
[19:40] <zenpac> Can a CephFS filesystem have more than one meta_data pool? more than one data pool?
[19:41] <via> you can, but the mds's will only use the one that has been set as the main pool for the fs
[19:41] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[19:42] <zenpac> What about data-pools?
[19:42] <via> if you're asking from the perspective of cephfs and the mds's, same thing
[19:42] <via> you configure the mds to use a metadata and a data pool
[19:43] * jschmid (~jxs@ip9234d57a.dynamic.kabel-deutschland.de) Quit ()
[19:44] <zenpac> Ok.. The command to create a cephfs only allows (it seems) one metadata pool and one data pool per fs.
[19:44] <tuxcrafter> doppelgrau: thx
[19:44] <tuxcrafter> doppelgrau: 3.c 0 0 0 0 0 0 0 creating 2015-07-15 19:36:30.303414 0'0 0:0 [] -1 [] -1 0'0 0.000000 0'0 0.000000
[19:45] <tuxcrafter> i created the pgs but now they are stuck in creating without an osd
[19:50] * kefu (~kefu@114.92.106.47) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:50] <zenpac> via, I see in the Hadoop/Ceph docs that you can have multiple data pools.
[19:51] <via> i'm not sure what you're saying.
[19:52] <via> what i said still holds, you assign a data and metadata pool to the fs
[19:52] <via> you can make as many pools as you want, but only one data and one metadata pool belong to an fs
[19:52] <via> and you can call them whatever you want
[19:54] <zenpac> via: I thought you could only have one CephFS per cluster right now....
[19:55] <mongo> that is correct, one CephFS per cluster right now
[19:55] <via> right, nothing i've said has suggested otherwise
[19:55] <zenpac> Ok.. I'm seeing things clearer now.. Ty
[19:56] <via> perhaps you should become more familiar with ceph before trying to architect monitoring of it to such a fine detail?
[19:56] * curtis864 (~dux0r@5NZAAE4MI.tor-irc.dnsbl.oftc.net) Quit ()
[19:56] * Malcovent (~bildramer@9S0AAB9V1.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:57] <doppelgrau> tuxcrafter: strange, but I'm out of ideas now. I don't remember having read about such a problem. The CRUSH rules are all right and enough OSDs are up?!
[19:59] <zenpac> via: http://ceph.com/docs/master/cephfs/hadoop/ indicates multiple data pools for an fs.. Is this unique to Hadoop?
[20:00] * wenjunhuang (~wenjunhua@111.161.63.110) Quit (Remote host closed the connection)
[20:00] * wenjunhuang (~wenjunhua@111.161.63.110) has joined #ceph
[20:01] * rlrevell (~leer@vbo1.inmotionhosting.com) has left #ceph
[20:01] <via> i'd imagine the hadoop plugin is a different beast
[20:01] <zenpac> via, I just realized that those pools are for Hadoop replication, not CephFS's...
[20:02] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[20:06] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[20:12] * rlrevell (~leer@vbo1.inmotionhosting.com) has left #ceph
[20:26] * Malcovent (~bildramer@9S0AAB9V1.tor-irc.dnsbl.oftc.net) Quit ()
[20:26] * raindog (~djidis__@relay-a.tor-exit.network) has joined #ceph
[20:32] * vikhyat (~vumrao@49.248.200.167) has joined #ceph
[20:32] <mtanski> jsyk you can have multiple pools in cephfs; you can change which pool data goes into via an xattr
[20:32] * JeffroMart (~pand2@c-69-243-28-212.hsd1.md.comcast.net) has joined #ceph
[20:33] <mtanski> Also, there's an outstanding request to support more than one concurrent filesystem in Ceph. I believe the work is not hard; nobody has cared enough to do it so far.
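The xattr mechanism mtanski mentions: register an extra data pool with the filesystem, then point a directory's layout at it. A sketch assuming a mounted CephFS and a spare pool named fastpool:

    # make the pool usable by the filesystem (Hammer-era command)
    ceph mds add_data_pool fastpool
    # files created under this directory will now be written to fastpool
    setfattr -n ceph.dir.layout.pool -v fastpool /mnt/cephfs/fast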
[20:37] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[20:37] <zenpac> mtanski: Thanks. AFAIK, there is only one primary MDS for a CephFS, with backup MDS servers.
[20:38] <zenpac> That's not related to your statement, I'm just trying to verify.
[20:38] <theanalyst> I've a cluster in warn state because of a slow request
[20:38] <mtanski> yeah, that's what's there today
[20:38] <JeffroMart> Hey guys, I have a quick question. I was trying to give Ceph a try, but I ran into an issue deploying the mons, and it seems to be related to running CentOS 7.1. When I run ceph-deploy mon create-initial it gives me: [WARNIN] The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other
[20:38] <JeffroMart> actions, please try to use systemctl. Should I be using 7.0 instead, or is there a way to fix this?
[20:41] <mongo> You can ignore that warning, at least on other systems, but I use Ubuntu (for the modern kernel and btrfs support), so take that suggestion with a grain of salt. The message is just saying it is using classic init scripts rather than systemd.
[20:46] <JeffroMart> Hum, it seems to fail with a few errors tho:
[20:46] <JeffroMart> [rnd-mon-03][ERROR ] RuntimeError: command returned non-zero exit status: 2
[20:46] <JeffroMart> [ceph_deploy.mon][ERROR ] Failed to execute command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.rnd-mon-03
[20:46] <JeffroMart> [ceph_deploy][ERROR ] GenericError: Failed to create 3 monitors
[20:48] * sleinen (~Adium@194.230.159.135) has joined #ceph
[20:49] * dneary (~dneary@AGrenoble-651-1-459-149.w82-122.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[20:49] * vikhyat (~vumrao@49.248.200.167) Quit (Quit: Leaving)
[20:51] <mongo> what release are you targeting for ceph?
[20:52] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[20:53] <mongo> try adding --release hammer to the ceph-deploy command
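That is, something along these lines (hostnames hypothetical, patterned on the mon names in the error output):

    ceph-deploy install --release hammer rnd-mon-01 rnd-mon-02 rnd-mon-03
    ceph-deploy mon create-initial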
[20:54] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[20:56] * raindog (~djidis__@5NZAAE4PH.tor-irc.dnsbl.oftc.net) Quit ()
[20:56] * Concubidated (~Adium@66.87.64.214) Quit (Read error: Connection reset by peer)
[20:56] * verbalins (~Kaervan@marylou.nos-oignons.net) has joined #ceph
[20:58] <JeffroMart> ok, let me try that, but that's what I have defined in the ceph.repo file
[20:59] * jclm (~jclm@ip24-253-98-109.lv.lv.cox.net) Quit (Quit: Leaving.)
[21:00] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[21:00] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[21:01] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:02] * nhm (~nhm@172.56.6.192) has joined #ceph
[21:02] * ChanServ sets mode +o nhm
[21:02] * sleinen (~Adium@194.230.159.135) Quit (Ping timeout: 480 seconds)
[21:05] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:08] * dgurtner (~dgurtner@178.197.231.188) Quit (Ping timeout: 480 seconds)
[21:10] * linuxkidd (~linuxkidd@166.170.29.33) Quit (Quit: Leaving)
[21:11] * xarses_ is now known as xarses
[21:12] * i_m (~ivan.miro@83.149.35.245) Quit (Ping timeout: 480 seconds)
[21:13] * dgurtner (~dgurtner@178.197.225.200) has joined #ceph
[21:14] <xarses> Hi, I'm looking for help understanding how the cephx auth keys are generated as I want to pre-create the keys my cluster uses.
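Cephx secrets are just random keys in a fixed base64 format, so they can be generated ahead of time with ceph-authtool; a minimal sketch:

    # print a freshly generated secret without writing a keyring
    ceph-authtool --gen-print-key
    # or build a keyring file up front, e.g. for the mon. identity
    ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
        --gen-key -n mon. --cap mon 'allow *'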
[21:14] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Quit: Leaving)
[21:15] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:16] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[21:19] * jclm (~jclm@ip24-253-98-109.lv.lv.cox.net) has joined #ceph
[21:19] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[21:23] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[21:26] * verbalins (~Kaervan@9S0AAB9YM.tor-irc.dnsbl.oftc.net) Quit ()
[21:26] * straterra (~PierreW@edwardsnowden0.torservers.net) has joined #ceph
[21:28] <cholcombe> is there a windows driver for rbd? As far as I'm aware the answer is no but maybe that has changed
[21:29] * jclm (~jclm@ip24-253-98-109.lv.lv.cox.net) Quit (Quit: Leaving.)
[21:33] * dgurtner (~dgurtner@178.197.225.200) Quit (Ping timeout: 480 seconds)
[21:34] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:34] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[21:34] * rendar (~I@host87-186-dynamic.17-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:36] * tupper (~tcole@173.38.117.65) Quit (Ping timeout: 480 seconds)
[21:37] * rendar (~I@host87-186-dynamic.17-79-r.retail.telecomitalia.it) has joined #ceph
[21:43] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[21:48] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[21:50] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) has joined #ceph
[21:54] * georgem (~Adium@fwnat.oicr.on.ca) has left #ceph
[21:56] * straterra (~PierreW@5NZAAE4RJ.tor-irc.dnsbl.oftc.net) Quit ()
[21:56] * Izanagi (~Popz@176.10.99.205) has joined #ceph
[21:58] * RzH2000 (~administr@155.48.45.31.customer.cdi.no) Quit (Quit: Ex-Chat)
[21:59] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) has joined #ceph
[22:01] * bene2 (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[22:01] * mgolub (~Mikolaj@91.225.201.69) Quit (Quit: away)
[22:02] * snakamoto (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[22:03] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[22:06] * nhm (~nhm@172.56.6.192) Quit (Ping timeout: 480 seconds)
[22:10] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[22:10] * adrian15b (~kvirc@btactic.ddns.jazztel.es) Quit (Ping timeout: 480 seconds)
[22:11] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[22:14] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[22:17] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[22:22] * An__T__oine (~Antoine@ARennes-655-1-189-192.w2-13.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[22:23] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) has joined #ceph
[22:26] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[22:26] * Izanagi (~Popz@7R2AACOP6.tor-irc.dnsbl.oftc.net) Quit ()
[22:26] * ItsCriminalAFK (~Kayla@193.11.137.126) has joined #ceph
[22:34] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:35] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[22:37] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[22:43] * arbrandes (~arbrandes@201-29-243-179.user.veloxzone.com.br) has joined #ceph
[22:47] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) Quit (Remote host closed the connection)
[22:49] * leseb_ (~leseb@81-64-215-19.rev.numericable.fr) has joined #ceph
[22:51] * kevinperks (~Adium@2606:a000:80ad:1300:3d2d:3777:f85:d2a7) Quit (Quit: Leaving.)
[22:51] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[22:54] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[22:54] * kevinperks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[22:56] * ItsCriminalAFK (~Kayla@7R2AACOQ6.tor-irc.dnsbl.oftc.net) Quit ()
[22:56] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:58] * nhm (~nhm@184-97-242-33.mpls.qwest.net) has joined #ceph
[22:58] * ChanServ sets mode +o nhm
[22:58] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Ping timeout: 480 seconds)
[23:03] <mongo> nope, there was just an initial port of librados, but no rbd
[23:03] <mongo> (not an official port of librados, btw)
[23:03] * dneary (~dneary@AGrenoble-651-1-459-149.w82-122.abo.wanadoo.fr) has joined #ceph
[23:04] * kevinperks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[23:08] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[23:10] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Quit: Leaving.)
[23:11] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[23:11] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[23:11] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:15] * adrian15b (~kvirc@10.Red-88-16-101.dynamicIP.rima-tde.net) has joined #ceph
[23:19] * arbrandes (~arbrandes@201-29-243-179.user.veloxzone.com.br) Quit (Quit: Leaving)
[23:22] * shohn_afk (~shohn@dslb-092-078-028-056.092.078.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[23:23] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (Quit: Leaving...)
[23:25] * vata1 (~vata@207.96.182.162) Quit (Quit: Leaving.)
[23:25] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[23:26] * Aramande_ (~Aramande_@tor-exit.xshells.net) has joined #ceph
[23:27] * kevinperks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[23:31] * sjm (~sjm@pool-100-1-115-73.nwrknj.fios.verizon.net) has left #ceph
[23:39] * kevinperks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[23:40] * kevinperks (~Adium@2606:a000:80ad:1300:1d29:336e:1898:68a3) has joined #ceph
[23:43] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[23:43] * nsoffer (~nsoffer@bzq-109-65-255-114.red.bezeqint.net) has joined #ceph
[23:44] * dneary (~dneary@AGrenoble-651-1-459-149.w82-122.abo.wanadoo.fr) Quit (Quit: Exeunt dneary)
[23:47] * jdillaman (~jdillaman@166.170.32.73) has joined #ceph
[23:49] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[23:55] * bitserker (~toni@188.87.126.67) Quit (Quit: Leaving.)
[23:56] * Aramande_ (~Aramande_@5NZAAE4VM.tor-irc.dnsbl.oftc.net) Quit ()
[23:58] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Quit: Leaving.)
[23:59] * kevinperks (~Adium@2606:a000:80ad:1300:1d29:336e:1898:68a3) Quit (Quit: Leaving.)
[23:59] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[23:59] * kevinperks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[23:59] * jdillaman (~jdillaman@166.170.32.73) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.