#ceph IRC Log

Index

IRC Log for 2015-04-16

Timestamps are in GMT/BST.

[0:01] * sbfox (~Adium@72.2.49.50) has joined #ceph
[0:08] * xcezzz1 (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) has joined #ceph
[0:08] * xcezzz (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) Quit (Read error: Connection reset by peer)
[0:10] * ircolle (~Adium@2601:1:a580:1735:2dba:e8d5:cdbd:97c0) Quit (Quit: Leaving.)
[0:13] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[0:17] * sav is now known as tmh_
[0:18] * alram (~alram@38.96.12.2) Quit (Quit: leaving)
[0:19] <skullone> is anyone using ceph-deploy to deploy RGWs?
[0:20] * Drezil (~Kyso_@5NZAABPPQ.tor-irc.dnsbl.oftc.net) Quit ()
[0:20] * notmyname1 (~Inverness@tor-exit1-readme.dfri.se) has joined #ceph
[0:24] <flaf> skullone: personally I have deployed an RGW with puppet (so a manual install, via puppet).
[0:24] <skullone> the apache method, or civetweb?
[0:24] * georgem (~Adium@fwnat.oicr.on.ca) has left #ceph
[0:25] <flaf> It was for firefly and it used apache2 + fastcgi, but...
[0:26] <flaf> it seems to me that, with hammer, civetweb is a better way.
[0:26] <skullone> thats what im reading, but it doesnt deploy ;(
[0:27] <flaf> Ah maybe. But in fact, the installation is simplified because there's no need for apache.
[0:29] <flaf> Maybe I'm wrong but it just 1. installation of the packages, 2. put the ceph.conf file and 3. start radosgw.
[0:29] <flaf> s/it/it's/
[0:31] <flaf> (of course the ceph radosgw account must be created beforehand.
[0:31] <flaf> )
[0:33] <flaf> skullone: it's probably dirty but if you use ceph-deploy and then you remove apache and just adapt the ceph.conf file, it should work.
[0:34] <flaf> There are just 2 or 3 lines to change in ceph.conf to use civetweb.
[0:36] <gleam> how have people found civetweb performance vs apache in hammer? when I tried with giant I was able to fairly easily overload civetweb and start getting timeouts and errors, which never happened with apache/fastcgi.
[0:38] <flaf> gleam, no idea. If you read this thread, civetweb seems to be ok http://www.mail-archive.com/ceph-users%40lists.ceph.com/msg17264.html
[0:39] <flaf> gleam: sorry better link http://www.mail-archive.com/ceph-users%40lists.ceph.com/msg17298.html
[0:39] <gleam> that's the one i'm reading now
[0:40] <gleam> looks good, i'll give it a test soon
[0:40] <flaf> Yes, it's encouraging.
[0:43] <flaf> but if you have errors and timeouts with hammer/civetweb, I suppose the devs would be glad to have your feedback.
[0:43] <gleam> yeah, i'll see if i can reproduce it
[0:44] <gleam> it was 6 months ago i think
[0:44] <gleam> around the time giant was first released
[0:44] <flaf> In any case, it seems that the civetweb method will be the "official" method now.
[0:45] <flaf> (it's my personal feeling)
[0:50] * notmyname1 (~Inverness@98EAABAIV.tor-irc.dnsbl.oftc.net) Quit ()
[0:50] * AotC (~Zombiekil@195.169.125.226) has joined #ceph
[0:58] * p66kumar (~p66kumar@74.119.205.248) Quit (Quit: p66kumar)
[1:00] <skullone> im not sure how to manually install radosgw / civet
[1:00] <skullone> is it really just a couple lines in the config?
[1:00] <skullone> ie, radosgw is part of the general ceph install?
[1:01] * oro (~oro@66-194-8-225.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[1:04] <skullone> just spun up another fresh ceph cluster, and trying to 'ceph-deploy rgw create nodename' and it's still failing
[1:04] <skullone> first, with a path not found:
[1:04] <skullone> http://pastebin.com/R3AkA6DS
[1:05] <skullone> rather: http://pastebin.com/PPNrNdEQ
[1:05] <skullone> and if i make the path for it, i get a new error:
[1:05] <skullone> http://pastebin.com/u2gEuNA1
[1:06] <skullone> this was getting a key error earlier, but that seems to have gone away
[1:07] <flaf> here is a simple config that worked for me with firefly
[1:07] <flaf> [client.radosgw.gw1]
[1:07] <flaf> host = ceph-radosgw1
[1:07] <flaf> rgw dns name = ostore
[1:07] <flaf> rgw socket path = /var/run/ceph/ceph.radosgw.gw1.fastcgi.sock
[1:08] <flaf> keyring = /etc/ceph/ceph.client.radosgw.gw1.keyring
[1:08] <flaf> log file = /var/log/radosgw/client.radosgw.gw1.log
[1:08] <flaf>
[1:08] <flaf> you can forget "rgw dns name"
[1:08] <flaf> and...
[1:09] <flaf> http://docs.ceph.com/docs/master/radosgw/config/#add-a-gateway-configuration-to-ceph
[1:09] <flaf> so, rgw socket path becomes ""
[1:10] <flaf> and you should add -> rgw frontends = civetweb port=80
[1:11] <flaf> I use ubuntu 14.04.
[1:11] <flaf> And of course the "radosgw.gw1" ceph account must be created before in the cluster with correct rights.
[1:13] <flaf> To summarize, something like that:
[1:13] <flaf>
[1:13] <flaf> [client.radosgw.gw1]
[1:13] <flaf> host = ceph-radosgw1
[1:13] <flaf> keyring = /etc/ceph/ceph.client.radosgw.gw1.keyring
[1:13] <flaf> rgw socket path = ""
[1:13] <flaf> log file = /var/log/radosgw/client.radosgw.gw1.log
[1:13] <flaf> rgw frontends = civetweb port=80
[1:13] <flaf> rgw print continue = false
[1:13] <flaf>
[1:14] <flaf> (and the keyring must exist on the radosgw host of course)
[1:14] <flaf> In fact, radosgw is a ceph client.
[1:15] <flaf> Ok?
[1:15] <flaf> Have you created the radosgw ceph account?
[1:16] <skullone> im following this: http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
[1:16] <skullone> which just seems to jump right in to 'ceph-deploy rgw create'
[1:17] <flaf> Ah, I'm sorry, I'm totally unqualified with ceph-deploy etc. I have never used it.
[1:18] <flaf> if you run "ceph auth list", do you see the radosgw account?
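
For reference, the account-creation step flaf keeps asking about looks roughly like this (a sketch based on the config above; the caps shown are the ones commonly given in the radosgw docs, adjust as needed):

    ceph auth get-or-create client.radosgw.gw1 mon 'allow rwx' osd 'allow rwx' \
        -o /etc/ceph/ceph.client.radosgw.gw1.keyring
    ceph auth list          # the client.radosgw.gw1 entry should now appear
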
[1:20] * AotC (~Zombiekil@2WVAABM1H.tor-irc.dnsbl.oftc.net) Quit ()
[1:20] * Salamander_ (~Bobby@tor-exit.server7.tvdw.eu) has joined #ceph
[1:25] <flaf> This is why I don't like ceph-deploy: it's too "opaque" for me. This is not a criticism of ceph-deploy, it's just not suitable for me.
[1:26] <flaf> (maybe I'm wrong...)
[1:27] * oms101 (~oms101@p20030057EA002A00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:27] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[1:36] * oms101 (~oms101@p20030057EA4D1200EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:42] * huats (~quassel@stuart.objectif-libre.com) Quit (Ping timeout: 480 seconds)
[1:43] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[1:44] * boichev2 (~boichev@213.169.56.130) Quit (Read error: No route to host)
[1:45] * puffy (~puffy@64.191.206.83) Quit (Quit: Leaving.)
[1:46] * huats (~quassel@stuart.objectif-libre.com) has joined #ceph
[1:46] * alram (~alram@38.96.12.2) has joined #ceph
[1:50] * Salamander_ (~Bobby@2WVAABM2L.tor-irc.dnsbl.oftc.net) Quit ()
[1:50] * legion (~Shadow386@edwardsnowden2.torservers.net) has joined #ceph
[1:53] <dmick> flaf: there's a *lot* of logging saved in a file for you to look at to see what it did
[1:54] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[1:54] <flaf> dmick: ah ok, the log is in the current workdir?
[1:55] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[1:55] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:55] <flaf> But not using ceph-deploy is not a problem for me. I use the commands from the "manual" install section of the online docs and that works for me.
[1:56] * oro (~oro@209.249.118.54) has joined #ceph
[1:58] <dmick> ok, just saying.
[1:58] <flaf> yes, thx for the info. ;)
[1:58] * alram (~alram@38.96.12.2) Quit (Ping timeout: 480 seconds)
[2:00] <flaf> it could be useful because ceph-deploy is always kept up to date. That is not (resp. will not always be) the case with my personal scripts.
[2:05] * shang (~ShangWu@175.41.48.77) has joined #ceph
[2:07] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[2:10] * rendar (~I@host38-179-dynamic.23-79-r.retail.telecomitalia.it) Quit ()
[2:13] * oro (~oro@209.249.118.54) Quit (Ping timeout: 480 seconds)
[2:17] * vjujjuri (~chatzilla@204.14.239.54) has joined #ceph
[2:20] * legion (~Shadow386@98EAABALB.tor-irc.dnsbl.oftc.net) Quit ()
[2:20] * Gibri (~eXeler0n@marylou.nos-oignons.net) has joined #ceph
[2:24] * mivaho_ (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) has joined #ceph
[2:25] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:27] * mivaho (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) Quit (Ping timeout: 480 seconds)
[2:30] * bandrus (~brian@128.sub-70-211-79.myvzw.com) Quit (Ping timeout: 480 seconds)
[2:33] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[2:34] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[2:34] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[2:36] * daniel2_ (~daniel@12.164.168.117) Quit (Remote host closed the connection)
[2:36] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[2:38] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[2:38] * ChanServ sets mode +o elder
[2:40] * yanzheng (~zhyan@171.216.94.165) has joined #ceph
[2:43] * LeaChim (~LeaChim@host86-143-18-67.range86-143.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:46] * steveeJ (~junky@virthost3.stefanjunker.de) Quit (Quit: Leaving)
[2:50] * Gibri (~eXeler0n@1GLAABC2P.tor-irc.dnsbl.oftc.net) Quit ()
[2:53] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[2:55] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) has joined #ceph
[2:58] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[2:58] * vjujjuri (~chatzilla@204.14.239.54) Quit (Ping timeout: 480 seconds)
[3:02] * alram (~alram@38.96.12.2) has joined #ceph
[3:04] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[3:05] * JarekO (~jowsie@hC35A6AF2.cli.nitronet.pl) Quit (Read error: Connection reset by peer)
[3:05] * JarekO (~jowsie@hC35A6AF2.cli.nitronet.pl) has joined #ceph
[3:09] * steveeJ (~steveeJ@virthost3.stefanjunker.de) has joined #ceph
[3:10] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[3:11] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:12] * sherlocked (~watson@14.139.82.6) has joined #ceph
[3:20] * MatthewH12 (~slowriot@5.9.158.75) has joined #ceph
[3:25] * capri_on (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[3:32] * sbfox (~Adium@72.2.49.50) Quit (Ping timeout: 480 seconds)
[3:33] * root4 (~root@p57B2F509.dip0.t-ipconnect.de) has joined #ceph
[3:36] * kefu (~kefu@114.92.111.70) has joined #ceph
[3:40] * root3 (~root@p5DDE6F1C.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:40] * CheKoLyN (~saguilar@c-24-23-216-79.hsd1.ca.comcast.net) has joined #ceph
[3:42] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[3:42] <litwol> damn.... Creating an osd inside a linux container, inside a mounted directory created on a host zfs pool, causes a segfault in the ceph-osd command
[3:42] <litwol> saddd :(
[3:42] <litwol> i'm just experimenting .. trying to learn ceph. used LXC to create separate sandboxes.
[3:45] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[3:45] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[3:46] * alram (~alram@38.96.12.2) Quit (Ping timeout: 480 seconds)
[3:47] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) Quit (Remote host closed the connection)
[3:47] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[3:50] * MatthewH12 (~slowriot@3OZAAA6QM.tor-irc.dnsbl.oftc.net) Quit ()
[3:53] * zhaochao (~zhaochao@111.161.77.236) has joined #ceph
[4:00] * CheKoLyN (~saguilar@c-24-23-216-79.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[4:00] * zack_dolby (~textual@nfmv001018238.uqw.ppp.infoweb.ne.jp) has joined #ceph
[4:02] <skullone> still cant seem to get rgw working
[4:17] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[4:17] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit ()
[4:18] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[4:20] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[4:20] * Maza (~VampiricP@176.10.99.206) has joined #ceph
[4:21] * bkopilov (~bkopilov@bzq-109-65-10-75.red.bezeqint.net) Quit (Read error: No route to host)
[4:32] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[4:50] * Maza (~VampiricP@3OZAAA6RO.tor-irc.dnsbl.oftc.net) Quit ()
[4:50] * Grum (~VampiricP@67.ip-92-222-38.eu) has joined #ceph
[4:53] * sherlocked (~watson@14.139.82.6) Quit (Ping timeout: 480 seconds)
[5:17] * puffy (~puffy@50.185.218.255) has joined #ceph
[5:17] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[5:18] * puffy (~puffy@50.185.218.255) Quit ()
[5:19] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[5:20] * Grum (~VampiricP@5NZAABP30.tor-irc.dnsbl.oftc.net) Quit ()
[5:20] * mrapple (~click@195.169.125.226) has joined #ceph
[5:27] * Vacuum (~vovo@88.130.204.52) has joined #ceph
[5:31] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[5:34] * Vacuum_ (~vovo@i59F793DB.versanet.de) Quit (Ping timeout: 480 seconds)
[5:37] * haomaiwang (~haomaiwan@61.185.255.226) Quit (Remote host closed the connection)
[5:40] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[5:46] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[5:50] * mrapple (~click@5NZAABP5H.tor-irc.dnsbl.oftc.net) Quit ()
[5:50] * Atomizer (~Shnaw@79.98.107.90) has joined #ceph
[5:52] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:54] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:56] * B_Rake (~B_Rake@2605:a601:5b9:dd01:e45d:971e:beb8:b041) has joined #ceph
[5:58] * B_Rake (~B_Rake@2605:a601:5b9:dd01:e45d:971e:beb8:b041) Quit (Read error: Connection reset by peer)
[5:58] * reed (~reed@2602:244:b653:6830:2c27:126:1cc:4840) Quit (Quit: Ex-Chat)
[5:58] * B_Rake (~B_Rake@2605:a601:5b9:dd01:e45d:971e:beb8:b041) has joined #ceph
[6:02] * B_Rake (~B_Rake@2605:a601:5b9:dd01:e45d:971e:beb8:b041) Quit (Remote host closed the connection)
[6:03] * kefu|afk (~kefu@114.92.111.70) has joined #ceph
[6:03] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[6:05] * vjujjuri (~chatzilla@static-50-53-42-60.bvtn.or.frontiernet.net) has joined #ceph
[6:11] * oro (~oro@209.249.118.62) has joined #ceph
[6:13] * kefu|afk (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:15] * tmh_ (~e@0001baae.user.oftc.net) Quit (Quit: Caught deadly signal: SIGSLEEP)
[6:16] * vbellur (~vijay@122.166.94.20) Quit (Quit: Leaving.)
[6:16] * vbellur (~vijay@122.166.94.20) has joined #ceph
[6:20] * Atomizer (~Shnaw@2FBAABGWG.tor-irc.dnsbl.oftc.net) Quit ()
[6:20] * matx (~Revo84@exit1.ipredator.se) has joined #ceph
[6:25] * vbellur (~vijay@122.166.94.20) Quit (Ping timeout: 480 seconds)
[6:25] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[6:31] <nigwil> ceph.com is not responding
[6:33] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[6:35] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[6:37] * sherlocked (~watson@14.139.82.6) has joined #ceph
[6:46] * daniel2_ (~daniel@12.0.207.18) has joined #ceph
[6:50] * matx (~Revo84@2WVAABNBU.tor-irc.dnsbl.oftc.net) Quit ()
[6:50] * Dysgalt (~Pettis@85.25.9.11) has joined #ceph
[6:51] * sherlocked (~watson@14.139.82.6) Quit (Ping timeout: 480 seconds)
[6:53] * rdas (~rdas@110.227.45.170) has joined #ceph
[6:55] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[6:56] <nigwil> we need a little forensics cloud so that rogue(unusual) VMs can be observed and dissected in isolation with supporting stethoscope tools already there to poke about inside the patient...
[6:56] <nigwil> oops - wrong chan
[7:04] * dvanders (~dvanders@dvanders-hpi5.cern.ch) Quit (Read error: Connection reset by peer)
[7:08] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:09] * tmh_ (~e@0001baae.user.oftc.net) has joined #ceph
[7:14] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[7:18] * vjujjuri (~chatzilla@static-50-53-42-60.bvtn.or.frontiernet.net) Quit (Quit: ChatZilla 0.9.91.1 [Firefox 36.0.1/20150305021524])
[7:20] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:20] * Dysgalt (~Pettis@2WVAABNC0.tor-irc.dnsbl.oftc.net) Quit ()
[7:20] * Sophie1 (~geegeegee@5.9.158.75) has joined #ceph
[7:23] * vjujjuri (~chatzilla@static-50-53-42-60.bvtn.or.frontiernet.net) has joined #ceph
[7:33] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[7:33] * Hemanth (~Hemanth@121.244.87.117) Quit ()
[7:33] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[7:37] * sherlocked (~watson@14.139.82.6) has joined #ceph
[7:39] * rdas (~rdas@110.227.45.170) Quit (Quit: Leaving)
[7:39] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:41] * sherlocked (~watson@14.139.82.6) Quit ()
[7:50] * Sophie1 (~geegeegee@2WVAABNEI.tor-irc.dnsbl.oftc.net) Quit ()
[7:50] * Azru (~raindog@torsrvu.snydernet.net) has joined #ceph
[7:53] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[7:53] * jchien (~oftc-webi@mail.4free.com.tw) has joined #ceph
[7:56] * jchien (~oftc-webi@mail.4free.com.tw) Quit ()
[7:57] * daniel2_ (~daniel@12.0.207.18) Quit (Read error: Connection reset by peer)
[7:59] * raso (~raso@deb-multimedia.org) Quit (Quit: WeeChat 1.0.1)
[8:01] * raso (~raso@deb-multimedia.org) has joined #ceph
[8:02] * kefu (~kefu@114.92.111.70) has joined #ceph
[8:07] * daniel2_ (~daniel@12.0.207.18) has joined #ceph
[8:08] * achieva (ZISN2.9G@foresee.postech.ac.kr) has joined #ceph
[8:08] <achieva> hi
[8:09] <achieva> how can i keep the ceph-osd daemon running eternally?
[8:09] * daniel2_ (~daniel@12.0.207.18) Quit (Remote host closed the connection)
[8:09] <achieva> i want ceph-osd to be restarted if it fails due to some internal error (e.g., SIGABRT)
[8:10] <achieva> now, i just use a pid-tracking script to restart it, but are there any other methods to do it?
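
For reference, one way to get that behaviour without a hand-rolled pid-tracking script (a sketch assuming a systemd-based host, where the drop-in path is hypothetical; Ubuntu's stock upstart jobs already respawn ceph daemons):

    # /etc/systemd/system/ceph-osd@.service.d/restart.conf
    [Service]
    Restart=on-failure      # restart ceph-osd if it dies from SIGABRT etc.
    RestartSec=10s          # wait 10s between restart attempts
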
[8:15] * oro (~oro@209.249.118.62) Quit (Ping timeout: 480 seconds)
[8:18] * alram (~alram@38.96.12.2) has joined #ceph
[8:19] * alram (~alram@38.96.12.2) Quit ()
[8:20] * Azru (~raindog@98EAABAR2.tor-irc.dnsbl.oftc.net) Quit ()
[8:28] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:30] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:30] * pdrakeweb (~pdrakeweb@oh-71-50-38-193.dhcp.embarqhsd.net) has joined #ceph
[8:33] * Mika_c (~quassel@122.146.93.152) Quit (Ping timeout: 480 seconds)
[8:34] * rdas (~rdas@121.244.87.116) has joined #ceph
[8:35] * pdrakewe_ (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) Quit (Ping timeout: 480 seconds)
[8:35] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[8:36] * kefu (~kefu@114.92.111.70) has joined #ceph
[8:38] <singler_> so ceph.com still down?
[8:39] <T1w> again?
[8:39] <T1w> seems to be a bit unstable
[8:39] * daniel2_ (~daniel@12.0.207.18) has joined #ceph
[8:40] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit (Quit: It's a dud! It's a dud! It's a du...)
[8:40] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[8:41] * kefu (~kefu@114.92.111.70) has joined #ceph
[8:46] * vjujjuri (~chatzilla@static-50-53-42-60.bvtn.or.frontiernet.net) Quit (Ping timeout: 480 seconds)
[8:50] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[8:50] * Tarazed1 (~ylmson@98EAABAT8.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:54] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[9:01] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[9:03] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[9:08] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[9:09] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:09] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:10] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:10] * sankarshan (~sankarsha@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:10] * rdas (~rdas@121.244.87.116) Quit (Ping timeout: 480 seconds)
[9:10] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:10] * amote (~amote@121.244.87.116) Quit (Ping timeout: 480 seconds)
[9:10] * lalatenduM (~lalatendu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:10] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:10] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:13] * analbeard (~shw@support.memset.com) has joined #ceph
[9:15] * cok (~chk@2a02:2350:18:1010:dc57:c407:57cf:5e64) has joined #ceph
[9:15] * glzhao_ (~glzhao@203.90.249.185) has joined #ceph
[9:20] * Tarazed1 (~ylmson@98EAABAT8.tor-irc.dnsbl.oftc.net) Quit ()
[9:20] * T1w (~jens@node3.survey-it.dk) Quit (Quit: Leaving)
[9:20] * Coestar (~MKoR@tor-exit.server9.tvdw.eu) has joined #ceph
[9:22] * glzhao__ (~glzhao@203.90.249.185) Quit (Ping timeout: 480 seconds)
[9:24] * shohn (~shohn@ipservice-092-208-212-030.092.208.pools.vodafone-ip.de) has joined #ceph
[9:26] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:26] * fireD (~fireD@93.159.78.191) has joined #ceph
[9:27] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[9:27] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:28] * karnan (~karnan@121.244.87.117) has joined #ceph
[9:28] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[9:28] * vbellur (~vijay@121.244.87.117) has joined #ceph
[9:28] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[9:28] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[9:28] * rdas (~rdas@121.244.87.116) has joined #ceph
[9:28] * kanagaraj_ (~kanagaraj@121.244.87.117) has joined #ceph
[9:28] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[9:29] * oro (~oro@209.249.118.62) has joined #ceph
[9:30] * amote_ (~amote@121.244.87.116) has joined #ceph
[9:38] * shohn (~shohn@ipservice-092-208-212-030.092.208.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[9:39] * capri (~capri@212.218.127.222) has joined #ceph
[9:39] * fsimonce (~simon@host178-188-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[9:43] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[9:43] * tmh_ (~e@0001baae.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:45] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Read error: Connection reset by peer)
[9:45] * ksperis (~ksperis@46.218.42.103) has joined #ceph
[9:45] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[9:46] * tmh_ (~e@0001baae.user.oftc.net) has joined #ceph
[9:46] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[9:47] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[9:49] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[9:49] * t4nk068 (~oftc-webi@mail.4free.com.tw) has joined #ceph
[9:49] * oliver1 (~oliver@92.39.19.242.fixip.bitel.net) has joined #ceph
[9:49] * tmh_ (~e@0001baae.user.oftc.net) Quit (Read error: Connection reset by peer)
[9:49] * shohn (~shohn@ipservice-092-208-212-030.092.208.pools.vodafone-ip.de) has joined #ceph
[9:50] * Coestar (~MKoR@2WVAABNKQ.tor-irc.dnsbl.oftc.net) Quit ()
[9:50] * tmh_ (~e@0001baae.user.oftc.net) has joined #ceph
[9:50] * VampiricPadraig (~utugi____@tor-exit.server9.tvdw.eu) has joined #ceph
[9:53] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[9:56] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[9:56] * t4nk068 (~oftc-webi@mail.4free.com.tw) Quit (Remote host closed the connection)
[9:58] * mivaho_ (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) Quit (Quit: Going)
[9:58] * mivaho (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) has joined #ceph
[10:00] * dgurtner (~dgurtner@178.197.231.228) has joined #ceph
[10:01] * nhm (~nhm@65-128-146-154.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[10:04] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) has joined #ceph
[10:04] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[10:04] * nhm (~nhm@184-97-175-198.mpls.qwest.net) has joined #ceph
[10:04] * ChanServ sets mode +o nhm
[10:10] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[10:12] * tmh_ (~e@0001baae.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:12] * jordanP (~jordan@213.215.2.194) has joined #ceph
[10:13] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:15] * ajazdzewski (~ajazdzews@lpz-66.sprd.net) has joined #ceph
[10:15] * ajazdzewski (~ajazdzews@lpz-66.sprd.net) Quit ()
[10:15] * ajazdzewski (~ajazdzews@lpz-66.sprd.net) has joined #ceph
[10:20] * VampiricPadraig (~utugi____@2WVAABNMO.tor-irc.dnsbl.oftc.net) Quit ()
[10:20] * MJXII (~Diablodoc@162.247.72.216) has joined #ceph
[10:23] <flaf> Now, I realize I often visit ceph.com ;)
[10:25] <Amto_res> Hello, I have a problem on my RGW (0.87.1-1 trusty): when rsyslog rotates the logs, "client.radosgw.gateway.log" goes stale and remains empty.. Only a restart of the RGW revives log writes.. Do you have any idea?
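
For what it's worth, ceph daemons reopen their log file on SIGHUP, so a logrotate postrotate hook along these lines (a sketch, not the stock script) is the usual fix for logs that go stale after rotation:

    /var/log/radosgw/*.log {
        daily
        rotate 7
        compress
        missingok
        postrotate
            killall -q -1 radosgw || true    # SIGHUP makes radosgw reopen its log
        endscript
    }
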
[10:25] * bitserker (~toni@88.87.194.130) has joined #ceph
[10:27] * overclk (~overclk@121.244.87.117) has joined #ceph
[10:29] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[10:32] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[10:33] * rendar (~I@host135-119-dynamic.57-82-r.retail.telecomitalia.it) has joined #ceph
[10:35] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[10:38] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[10:40] * erice (~eric@c-50-134-164-169.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[10:40] * erice (~eric@c-50-134-164-169.hsd1.co.comcast.net) has joined #ceph
[10:41] * sage (~quassel@cpe-76-95-230-100.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:44] * WWW201504 (~oftc-webi@61.135.169.73) has joined #ceph
[10:46] * shohn (~shohn@ipservice-092-208-212-030.092.208.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[10:49] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[10:50] * MJXII (~Diablodoc@2WVAABNOH.tor-irc.dnsbl.oftc.net) Quit ()
[10:50] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:52] * florz (nobody@2001:1a50:503c::2) Quit (Read error: No route to host)
[10:54] * florz (nobody@2001:1a50:503c::2) has joined #ceph
[10:56] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:02] * sage (~quassel@cpe-76-95-230-100.socal.res.rr.com) has joined #ceph
[11:02] * ChanServ sets mode +o sage
[11:07] * zack_dolby (~textual@nfmv001018238.uqw.ppp.infoweb.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:07] <T1w> hmpf.. ceph.com is still down
[11:09] * RayTracer (~RayTracer@153.19.7.39) has joined #ceph
[11:11] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[11:13] * Nats_ (~natscogs@114.31.195.238) Quit (Read error: Connection reset by peer)
[11:13] * Nats_ (~natscogs@114.31.195.238) has joined #ceph
[11:19] * sage (~quassel@cpe-76-95-230-100.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:20] <Bosse> T1w: if you need the ceph repos, you could use eu.ceph.com
[11:20] * GuntherDW1 (~Harryhy@95.130.15.96) has joined #ceph
[11:20] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[11:21] <T1w> Bosse: thanks, but it's not /that/ important - I was just trying to use the PG calc.. :)
[11:27] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[11:27] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[11:35] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[11:36] * bitserker (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[11:40] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:41] * overclk (~overclk@121.244.87.117) has joined #ceph
[11:41] * bitserker (~toni@88.87.194.130) has joined #ceph
[11:43] * hoo (~hoo@firewall.netconomy.net) Quit (Remote host closed the connection)
[11:48] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[11:50] * GuntherDW1 (~Harryhy@98EAABAZ4.tor-irc.dnsbl.oftc.net) Quit ()
[11:50] * Defaultti1 (~datagutt@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[11:51] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:55] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) has joined #ceph
[11:57] <flaf> Hi, I would like to decrease the number of PGs in the metadata pool of my cephfs (Hammer). I know it's impossible to do directly. But currently the pool is small. If I create a new pool, copy the metadata into the new pool (rados cppool...), remove the old metadata pool and rename the new pool to "metadata", could it succeed? (without destroying my cephfs)
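
The sequence flaf describes would be roughly the following (a sketch, untested; note that CephFS references its pools by ID rather than by name, so the renamed pool would get a new ID and the MDS map would likely still point at the old one):

    rados cppool metadata metadata.new
    ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
    ceph osd pool rename metadata.new metadata
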
[11:58] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[12:00] * overclk (~overclk@121.244.87.124) has joined #ceph
[12:02] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[12:03] * kefu (~kefu@114.92.111.70) has joined #ceph
[12:06] * evilrob0_ (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[12:08] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[12:09] * rendar (~I@host135-119-dynamic.57-82-r.retail.telecomitalia.it) Quit (Read error: Connection reset by peer)
[12:11] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[12:11] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[12:12] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[12:15] * shang (~ShangWu@175.41.48.77) has joined #ceph
[12:19] * rendar (~I@host135-119-dynamic.57-82-r.retail.telecomitalia.it) has joined #ceph
[12:20] * Defaultti1 (~datagutt@5NZAABQQJ.tor-irc.dnsbl.oftc.net) Quit ()
[12:23] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[12:23] * hoo (~hoo@firewall.netconomy.net) has joined #ceph
[12:25] * jakekosberg (~kalleeen@178-175-128-50.ip.as43289.net) has joined #ceph
[12:26] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[12:36] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[12:37] * kefu (~kefu@114.92.111.70) has joined #ceph
[12:37] <T1w> does anyone know what outstanding problems remain with CephFS in the recent release of Hammer?
[12:38] <T1w> .. that might be a showstopper for production-use?
[12:38] * phoenix42 (~phoenix42@14.139.219.162) has joined #ceph
[12:46] <fghaas> T1w: you'll have to specify what you're trying to do with it
[12:48] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[12:51] * cok (~chk@2a02:2350:18:1010:dc57:c407:57cf:5e64) Quit (Quit: Leaving.)
[12:55] * jakekosberg (~kalleeen@425AAAMBB.tor-irc.dnsbl.oftc.net) Quit ()
[12:58] * qstion (~qstion@37.157.144.44) has joined #ceph
[13:00] <flaf> fghaas: I have the same question as T1w. In my case, it's for the data directories of moodle web servers (~30 moodles, and 2 moodles share the same cephfs directory, so 15 cephfs directories with ~500GB in each directory).
[13:01] <fghaas> do you need to use snapshots? do you need POSIX ACLs? do you have thousands of files in a single directory?
[13:02] <flaf> for the mount options, noacl,noatime is OK. snapshots? I thought they didn't exist with cephfs.
[13:03] <flaf> In a single directory I expect a max of ~500 files. (not 100% sure)
[13:07] <T1w> fghaas: just use it as a regular filesystem for multiple clients that need read access to the same data while a few other clients might be creating new data in new files
[13:07] <T1w> no acls
[13:07] <T1w> yes I might have 10k files in one directory
[13:08] <T1w> and yes, snapshots might be used at some future point for possible rollback purposes
[13:08] <joelm> ceph.com down... again
[13:08] <fghaas> T1w: well, if you can live with the idea that there is no fsck or repair, and that in case your fs breaks you have to bring everything back from a backup
[13:08] <T1w> joelm: alas for several hours now..
[13:08] <fghaas> and 10k files in one directory is pretty much poison for cephfs
[13:08] <T1w> fghaas: backup != replication
[13:08] <T1w> for me anyway.. :)
[13:09] <fghaas> T1w: dude, I don't need a lecture on that :)
[13:09] <T1w> ;)
[13:09] <T1w> I'm just saying that I will - for the time beeing - be taking regular backups.. :)
[13:09] <T1w> to other storage
[13:10] <fghaas> but yeah, go read up on how the ceph mds caches metadata for all files in a directory, and then make up your mind whether having 10k files in one directory is a grand idea for cephfs
[13:10] <T1w> the thing with 10k files in a single directory might be an issue
[13:10] <T1w> yes, thanks for that pointer
[13:10] * hellertime (~Adium@72.246.0.14) has joined #ceph
[13:10] <T1w> well.. I assume that there might be some overhead if a mds disappears and a new one takes over
[13:11] * phoenix42 (~phoenix42@14.139.219.162) Quit (Ping timeout: 480 seconds)
[13:11] * fireD (~fireD@93.159.78.191) Quit (Ping timeout: 480 seconds)
[13:11] <T1w> the only time a dirlisting would be made is when backups are run
[13:12] <T1w> regular access will 99.9% of the time be done via absolute path requests
[13:13] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[13:17] * zhaochao (~zhaochao@111.161.77.236) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 31.6.0/20150331233809])
[13:18] <Tetard> l~g
[13:19] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[13:19] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[13:27] <T1w> is the cephfs prone to breakage or is it something that /should/ only occur if the ceph cluster experiences something catastrophic?
[13:30] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:31] * fireD (~fireD@178.160.23.116) has joined #ceph
[13:34] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[13:34] * WWW201504 (~oftc-webi@61.135.169.73) Quit (Quit: Page closed)
[13:36] <joelm> hmm, started to get a bunch of locking in cephfs - followed by 'failing to respond to capability release'
[13:38] * dopesong (~oftc-webi@lb0.mailer.data.lt) has joined #ceph
[13:38] * fireD_ (~fireD@178.160.116.203) has joined #ceph
[13:38] * dopesong (~oftc-webi@lb0.mailer.data.lt) Quit ()
[13:40] * fireD (~fireD@178.160.23.116) Quit (Ping timeout: 480 seconds)
[13:40] * dopesong (~dopesong@lb0.mailer.data.lt) has joined #ceph
[13:41] <dopesong> Hey guys. Why is the site down? :)
[13:42] <qstion> yes it is.
[13:45] <qstion> couple of days ago there was a question in this chat
[13:45] <qstion> is redhat actively playing any role in ceph after acquiring it?
[13:45] <joelm> not sure why the docs/blog/packages are all on one box tbh - seems schoolboy
[13:46] <joelm> really doesn't give a great impression in my book
[13:46] <joelm> but not sure of the underlying causes
[13:47] <qstion> i believe thats legacy infrastructure and no one really had time/resources to make it better
[13:48] * overclk (~overclk@121.244.87.124) Quit (Ping timeout: 480 seconds)
[13:49] <T1w> a mirror would be nice
[13:49] <qstion> T1w: just my thoughts
[13:49] <T1w> http://www.slideshare.net/Inktank_Ceph/cdl-ceph-fs-1x is an interesting read even though it's a bit old by now
[13:55] * Zombiekiller (~Keiya@ns2.wat2doo.com) has joined #ceph
[13:57] * overclk (~overclk@121.244.87.117) has joined #ceph
[13:59] <stalob> when trying to add an OSD i hit an issue: once i started the OSD, on the monitor it looks fine as i got no error message... but when checking the status... osd: 0 up, 0 in
[13:59] <stalob> what could i have done wrong
[13:59] <stalob> ?
[14:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[14:04] * ganders (~root@190.2.42.21) has joined #ceph
[14:04] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[14:07] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[14:08] * analbeard (~shw@support.memset.com) Quit (Remote host closed the connection)
[14:08] * stralob (~oftc-webi@br167-098.ifremer.fr) has joined #ceph
[14:08] * analbeard (~shw@support.memset.com) has joined #ceph
[14:08] * stralob (~oftc-webi@br167-098.ifremer.fr) Quit ()
[14:10] * yanzheng (~zhyan@171.216.94.165) Quit (Quit: ??????)
[14:19] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[14:22] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[14:22] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[14:23] <joelm> T1w: there is for packages, eu.ceph.com - but not (afaik) for docs
[14:25] * Zombiekiller (~Keiya@2WVAABNYS.tor-irc.dnsbl.oftc.net) Quit ()
[14:25] * Kakeru (~spate@e4-10.rana.at) has joined #ceph
[14:25] * kanagaraj_ (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:26] * glzhao_ (~glzhao@203.90.249.185) Quit (Read error: Connection reset by peer)
[14:27] * glzhao_ (~glzhao@203.90.249.185) has joined #ceph
[14:30] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[14:31] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:31] <T1w> fghaas: would this be a problematic fs to be backed by ceph?
[14:31] <T1w> data-file-store/version/6xxxx/699xx/69992/<files>
[14:32] <T1w> data-file-store/version/6xxxx/699xx/69993/<files>
[14:32] <T1w> etc etc etc
[14:32] <T1w> where 6xxxx holds at most 100 directories like 699xx
[14:32] <T1w> and 699xx holds at most 100 other directories
[14:33] <T1w> the number of actual files in each 3rd level directory is low, but might peak to a few thousand
[14:34] <fghaas> "a few thousand" in one directory clearly ain't goo
[14:34] <fghaas> good
[14:35] <T1w> mkay
[14:37] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[14:38] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[14:38] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:39] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[14:39] <T1w> dang.. 2470 3rd-level directories with more than 1000 files
[14:40] <T1w> highest has 9936 files in it
[14:40] <T1w> seems like I should wait for support for multiple active mds
[14:44] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:48] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) Quit (Read error: Connection reset by peer)
[14:51] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[14:52] * RayTracer (~RayTracer@153.19.7.39) Quit (Quit: Leaving...)
[14:54] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[14:54] * danieagle (~Daniel@177.138.223.106) Quit (Quit: Thanks for everything! :-) inte+ :-))
[14:55] * Kakeru (~spate@5NZAABQWA.tor-irc.dnsbl.oftc.net) Quit ()
[14:55] * Helleshin (~Throlkim@enjolras.gtor.org) has joined #ceph
[14:59] * phoenix42 (~phoenix42@122.252.249.67) Quit (Ping timeout: 480 seconds)
[15:00] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Remote host closed the connection)
[15:00] * mastamind (~mm@193.171.234.167) has joined #ceph
[15:01] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[15:02] * mastamind (~mm@193.171.234.167) Quit ()
[15:03] * YmrDtnJu (~mm@193.171.234.167) has joined #ceph
[15:04] * YmrDtnJu (~mm@193.171.234.167) Quit ()
[15:04] * YmrDtnJu (~YmrDtnJu@193.171.234.167) has joined #ceph
[15:07] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[15:08] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[15:12] * tupper (~tcole@cpe-65-190-150-124.nc.res.rr.com) has joined #ceph
[15:12] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:13] <georgem> ceph.com down again???
[15:13] * frednass (~fred@dn-infra-12.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[15:14] * frednass (~fred@dn-infra-12.lionnois.univ-lorraine.fr) has joined #ceph
[15:15] * dupont-y (~dupont-y@familledupont.org) has joined #ceph
[15:15] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Remote host closed the connection)
[15:16] <zviratko> georgem: s/again/still mostly/
[15:17] * linuxkidd (~linuxkidd@vpngac.ccur.com) has joined #ceph
[15:18] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[15:19] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:20] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[15:23] * tupper (~tcole@cpe-65-190-150-124.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:25] * Helleshin (~Throlkim@2WVAABN1E.tor-irc.dnsbl.oftc.net) Quit ()
[15:25] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) has joined #ceph
[15:25] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) Quit ()
[15:28] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[15:29] * thomnico (~thomnico@AToulouse-654-1-278-245.w90-5.abo.wanadoo.fr) has joined #ceph
[15:29] * fireD (~fireD@93-142-231-194.adsl.net.t-com.hr) has joined #ceph
[15:30] * danieagle (~Daniel@177.138.223.106) has joined #ceph
[15:30] <stalob> everything is down for me, ceph.com and the OSD of my node
[15:31] * fireD_ (~fireD@178.160.116.203) Quit (Ping timeout: 480 seconds)
[15:31] <kaisan> meh. I see I imported it in the wrong format.
[15:31] <georgem> stalob: is ceph.com running on your OSD? :)
[15:31] <kaisan> wce
[15:32] * tupper (~tcole@rtp-isp-nat-pool1-1.cisco.com) has joined #ceph
[15:34] * linuxkidd_ (~linuxkidd@mobile-166-173-251-043.mycingular.net) has joined #ceph
[15:35] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:36] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:39] * The1w (~jens@node3.survey-it.dk) has joined #ceph
[15:39] * dupont-y (~dupont-y@familledupont.org) Quit (Read error: No route to host)
[15:40] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:41] * linuxkidd (~linuxkidd@vpngac.ccur.com) Quit (Ping timeout: 480 seconds)
[15:44] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[15:45] * vbellur (~vijay@122.172.194.57) has joined #ceph
[15:45] <Tetard> ceph.com works over v6
[15:46] * lucas1 (~Thunderbi@218.76.52.64) Quit ()
[15:50] <The1w> interesting..
[15:50] <The1w> makes you wonder what's wrong
[15:56] <Tetard> actually, no.
[15:57] <Tetard> they both answer to TCP, but that's it - backend hung / wedged
[15:58] * amote_ (~amote@121.244.87.116) Quit (Quit: Leaving)
[16:01] * The1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:03] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:04] * bene2 (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[16:06] * dgurtner_ (~dgurtner@178.197.231.105) has joined #ceph
[16:08] * dgurtner (~dgurtner@178.197.231.228) Quit (Ping timeout: 480 seconds)
[16:09] * bkopilov (~bkopilov@bzq-109-65-10-75.red.bezeqint.net) has joined #ceph
[16:11] * haomaiwang (~haomaiwan@123.138.40.137) has joined #ceph
[16:11] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[16:11] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:15] * haomaiwang (~haomaiwan@123.138.40.137) Quit (Remote host closed the connection)
[16:15] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[16:17] <jcsp1> T1w: even with a single MDS, you may find a benefit from setting "mds bal frag = true" in your configuration
[16:18] <jcsp1> and setting "mds bal split size" to something less than the number of files you have in one directory (by default it's 10000)
[16:18] <jcsp1> it would actually be really interesting to know if that makes much of a difference with your workload
[16:18] <jcsp1> and/or breaks horribly :-)
[16:19] <jcsp1> (the setting will make the MDS fragment large directories into multiple objects so that it doesn't have to hold them all in cache at once)
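
For reference, the settings jcsp1 describes go in the [mds] section of ceph.conf, roughly like this (the split size shown is just an illustrative value below the 10000 default):

    [mds]
    mds bal frag = true
    mds bal split size = 2000
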
[16:21] <joelm> what about nested dirs? How are they handled?
[16:21] * joelm has lots in ubuntu/debian mirrors
[16:22] <jcsp1> not sure what you mean by "nested dirs"? all directories have a parent
[16:25] * storage (~Malcovent@jaures.gtor.org) has joined #ceph
[16:27] <joelm> multiple levels of dirs, rather than one dir with lots of files
[16:27] <joelm> not sure how that maps in ceph, whether it even makes a difference
[16:28] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:28] * dgurtner (~dgurtner@178.197.231.76) has joined #ceph
[16:29] * sherlocked (~watson@14.139.82.6) has joined #ceph
[16:29] * sherlocked (~watson@14.139.82.6) Quit ()
[16:29] <jcsp1> it doesn't really make a difference, each directory is handled separately. Obviously if your individual directories are smaller then Ceph doesn't have to fragment them
[16:30] * dgurtner_ (~dgurtner@178.197.231.105) Quit (Ping timeout: 480 seconds)
[16:31] <jcsp1> though if the files have a natural grouping/locality of access, then putting related files in subdirectories will get you more efficient caching
[16:32] <jcsp1> as when ceph fragments a directory it's doing it just by the dentry name
[16:33] <joelm> right, cool, makes sense
[16:33] <joelm> just checking :)
[16:36] * PerlStalker (~PerlStalk@162.220.127.20) has joined #ceph
[16:38] * vjujjuri (~chatzilla@static-50-53-42-60.bvtn.or.frontiernet.net) has joined #ceph
[16:40] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[16:42] <flaf> Is there a problem with installing radosgw directly on an OSD server?
[16:43] <joelm> no, can't see why there would be
[16:44] <flaf> I don't know, it seems to me I read something about that somewhere, but I'm not sure.
[16:48] * oro (~oro@209.249.118.62) Quit (Ping timeout: 480 seconds)
[16:49] * vjujjuri_ (~chatzilla@204.14.239.105) has joined #ceph
[16:49] * tupper (~tcole@rtp-isp-nat-pool1-1.cisco.com) Quit (Ping timeout: 480 seconds)
[16:52] <fxmulder> ceph.com doesn't seem to be responding on port 80
[16:55] * storage (~Malcovent@2WVAABN5Z.tor-irc.dnsbl.oftc.net) Quit ()
[16:55] * vjujjuri (~chatzilla@static-50-53-42-60.bvtn.or.frontiernet.net) Quit (Ping timeout: 480 seconds)
[16:55] * xanax` (~cyphase@hessel2.torservers.net) has joined #ceph
[16:58] * vbellur (~vijay@122.172.194.57) Quit (Remote host closed the connection)
[16:59] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[17:01] <skullone> irc.
[17:03] * dopesong (~dopesong@lb0.mailer.data.lt) Quit (Read error: Connection reset by peer)
[17:03] * sage (~quassel@2605:e000:854d:de00:230:48ff:fed3:6786) has joined #ceph
[17:03] * ChanServ sets mode +o sage
[17:05] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[17:05] * vbellur (~vijay@122.172.194.57) has joined #ceph
[17:05] * analbeard (~shw@support.memset.com) has left #ceph
[17:09] * tupper_ (~tcole@rtp-isp-nat-pool1-2.cisco.com) has joined #ceph
[17:11] <fxmulder> ddos on ceph.com? wtf would do that
[17:12] <fxmulder> unless it shares space with someone shady
[17:13] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:15] * phoenix42_ (~phoenix42@122.252.249.67) has joined #ceph
[17:16] <joelm> maybe an op should set /topic on the outage
[17:16] <joelm> stop people joining and asking the same questions :)
[17:18] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[17:19] * phoenix42 (~phoenix42@122.252.249.67) Quit (Ping timeout: 480 seconds)
[17:21] <joelm> flaf: afair there were issues with using ceph rbd on osd hosts, but not sure that's still an issue. Perhaps?
[17:22] <joelm> I know originally it was due to kernel-based rbd hosting, meaning you couldn't do 'all in one' style hosts (storage and hypervisor), but with librbd that's not an issue
[17:23] <joelm> this was a couple of years back mind, stuff moves quick
[17:25] * xanax` (~cyphase@2WVAABN7L.tor-irc.dnsbl.oftc.net) Quit ()
[17:25] <flaf> Ah maybe, I thought about that and mixed it up in my head. :)
[17:29] * ircolle (~Adium@2601:1:a580:1735:b530:d8dd:b9e2:8fa2) has joined #ceph
[17:31] * dopesong (~dopesong@lb0.mailer.data.lt) has joined #ceph
[17:31] <skullone> fxmulder: ohh, its a ddos?
[17:33] * reed (~reed@2602:244:b653:6830:2c27:126:1cc:4840) has joined #ceph
[17:33] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[17:35] * derjohn_mob (~aj@88.128.80.133) has joined #ceph
[17:36] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:39] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[17:40] * longguang_home (~chatzilla@60.28.18.192) has joined #ceph
[17:40] * angdraug (~angdraug@c-50-174-102-105.hsd1.ca.comcast.net) has joined #ceph
[17:41] * p66kumar (~p66kumar@74.119.205.248) has joined #ceph
[17:43] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[17:48] * vjujjuri_ (~chatzilla@204.14.239.105) Quit (Ping timeout: 480 seconds)
[17:49] <fxmulder> skullone: that's what I read on the mailing list
[17:50] * tk12 (~tk12@68.140.239.132) has joined #ceph
[17:51] <fxmulder> ah sounds like the hosting box has just met its load limit with expected traffic
[17:52] <achieva> how can i check the number of PGs on each osd?
[17:55] * Azru (~poller@0.tor.exit.babylon.network) has joined #ceph
[17:56] * bandrus (~brian@128.sub-70-211-79.myvzw.com) has joined #ceph
[18:02] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[18:02] * ircolle (~Adium@2601:1:a580:1735:b530:d8dd:b9e2:8fa2) Quit (Quit: Leaving.)
[18:05] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) has joined #ceph
[18:07] * bitserker (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[18:08] * p66kumar (~p66kumar@74.119.205.248) Quit (Quit: p66kumar)
[18:12] * vjujjuri (~chatzilla@204.14.239.105) has joined #ceph
[18:13] <vjujjuri> Hello all. I sent an email with basic ceph questions to ceph-devel; can someone answer them? sage?
[18:14] <vjujjuri> or we can talk here. Basically I am trying to understand how placement and read/write work in the error path.
[18:15] * sankarsh_ (~sankarsha@171.76.52.227) has joined #ceph
[18:16] * bitserker (~toni@85.238.9.126) has joined #ceph
[18:17] <vjujjuri> Here are top two questions:
[18:17] <vjujjuri> 1. Machines going up and down is fairly common in a data center.
[18:17] <vjujjuri> How often does the cluster map change?
[18:17] <vjujjuri> Every machine bounce causes an update/distribution of cluster map?
[18:17] <vjujjuri> and affect the CRUSH? Does it make the cluster network too chatty?
[18:17] <vjujjuri> 2. Ceph mainly depends on the primary OSD in a given PG.
[18:17] <vjujjuri> What happens in the read/write path if that OSD is down at that moment?
[18:17] <vjujjuri> There can be cases where the OSD is down but the cluster map is not up to date.
[18:17] <vjujjuri> When the write/read fails, does the client retry after refreshing the cluster map?
[18:17] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:19] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[18:20] * sankarsh_ (~sankarsha@171.76.52.227) Quit (Quit: Leaving...)
[18:20] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (Quit: Leaving.)
[18:21] * ircolle (~Adium@2601:1:a580:1735:d02c:4dd6:bb0f:2b20) has joined #ceph
[18:21] * lovejoy (~lovejoy@213.83.69.6) has joined #ceph
[18:23] * phoenix42_ (~phoenix42@122.252.249.67) Quit (Ping timeout: 480 seconds)
[18:23] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[18:24] <joelm> vjujjuri: there are a number of maps, osdmap, pgmap, mdsmap, monmap, CRUSHmap etc
[18:24] <joelm> they tick at different intervals
[18:25] <joelm> and depend on different factors.. monmap only increases when you change mon
[18:25] * Azru (~poller@425AAAMEL.tor-irc.dnsbl.oftc.net) Quit ()
[18:25] <joelm> pgmap will tick over regardless
[18:25] * LorenXo (~CobraKhan@5NZAABQ9W.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:25] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[18:25] <joelm> when dealing with a cluster, you need to look at the #ceph osd set commands
[18:25] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[18:26] <joelm> i.e. ceph osd set noout - will make the cluster not eject a disk from the map
[18:26] <joelm> so you can do maintenance on a given node without it rebalancing
[18:26] * oliver1 (~oliver@92.39.19.242.fixip.bitel.net) Quit (Quit: Leaving.)
[18:26] <joelm> stuff like that..
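
For reference, the maintenance pattern joelm describes (a sketch):

    ceph osd set noout      # stop the cluster marking down OSDs "out" (no rebalance)
    # ... reboot / service the node ...
    ceph osd unset noout    # restore normal behaviour
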
[18:26] <vjujjuri> joelm: I see. I am looking more at spikes of response times in the error path.
[18:27] <joelm> well you're writing to replicas at the same time
[18:27] <joelm> depending on the number of replicas set
[18:27] <vjujjuri> looking at 3
[18:27] <joelm> I'd point you to the site where it tells you all, but it's dead atm :(
[18:27] <vjujjuri> minimum 3 replicas.
[18:28] <joelm> shows how data is written and what happens at a low level
[18:28] * ircolle (~Adium@2601:1:a580:1735:d02c:4dd6:bb0f:2b20) Quit (Quit: Leaving.)
[18:28] <vjujjuri> joelm: that will be awesome.
[18:28] <vjujjuri> So one of the error path scenarios I am looking at is whether the dependency on the primary OSD could cause any latency issues.
[18:29] <vjujjuri> because it is a single point at a given instant of the cluster map.
[18:29] * ircolle (~Adium@2601:1:a580:1735:d02c:4dd6:bb0f:2b20) has joined #ceph
[18:29] * daniel2_ (~daniel@12.0.207.18) Quit (Remote host closed the connection)
[18:29] <joelm> no, there is more abstraction
[18:29] <joelm> https://webcache.googleusercontent.com/search?q=cache:prv-LHmD_wgJ:ceph.com/docs/master/architecture/+&cd=1&hl=en&ct=clnk&gl=uk
[18:29] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:29] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[18:31] <vjujjuri> ok; will go through the link. Thanks a lot. Somehow images are not loading.
[18:31] <joelm> that's google cache
[18:31] <joelm> it never does afaik
[18:31] <joelm> hopefully the main site will be fixed soon
[18:31] <vjujjuri> ok. :)
[18:31] <joelm> annoying as there are pictures as well :)
[18:31] <joelm> which shows OSD write stages
[18:31] * daniel2_ (~daniel@12.0.207.18) has joined #ceph
[18:31] <vjujjuri> i see. that will be helpful.
[18:32] <vjujjuri> joelm does ceph support quorum writes? 2 out of 3 etc
[18:32] <joelm> no idea, sorry
[18:32] * joelm justa user
[18:32] * p66kumar (~p66kumar@74.119.205.248) has joined #ceph
[18:32] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[18:33] <joelm> afaik it's just to the primary and then that'll ship to 2nd and 3rd
[18:33] <joelm> but could be wrong
[18:33] * derjohn_mob (~aj@88.128.80.133) Quit (Ping timeout: 480 seconds)
[18:33] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[18:36] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[18:37] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:37] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) Quit (Quit: I'm going home!)
[18:37] * thomnico (~thomnico@AToulouse-654-1-278-245.w90-5.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[18:37] * MACscr (~Adium@2601:d:c800:de3:100c:9116:fdbc:1f5d) has joined #ceph
[18:38] <m0zes> vjujjuri: you can set min_size to whatever you want on a pool. then the write won't return for the client until min_size is met.
[18:39] <vjujjuri> m0zes: so can I put minsize=2 and maxsize=3? i.e write returns after 2 writes but eventually it gets replicated to 3 copies?
[18:40] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[18:40] <m0zes> min_size vs size, but yet.
[18:40] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[18:40] * kawa2014 (~kawa@89.184.114.246) Quit ()
[18:40] <m0zes> s/t/s/
[18:41] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Ping timeout: 480 seconds)
[18:41] <skullone> ohh, i didnt know it could do that
[18:43] <m0zes> I would *not* recommend setting min_size to 1 ;)
[18:43] <m0zes> did that briefly on a cache tier, paused the cluster when I rebooted a server.
[18:44] <m0zes> well, it paused when the server crashed.
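
For reference, the two knobs m0zes mentions are per-pool settings (a sketch; "rbd" is just an example pool name):

    ceph osd pool set rbd size 3        # number of replicas to keep
    ceph osd pool set rbd min_size 2    # minimum replicas required to serve I/O
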
[18:44] <vjujjuri> I saw an old presentation on youtube, maybe from sage?, saying ceph won't support a 2-out-of-3 kind of model. But I am not sure if that's still a true statement... with the latest changes.
[18:45] * dgurtner (~dgurtner@178.197.231.76) Quit (Ping timeout: 480 seconds)
[18:45] <championofcyrodi> is there a way to mount a swift bucket via radosgw? like from an ubuntu client?
[18:46] <championofcyrodi> I'd like to be able to use something like fuse or another solution to achieve this. I've heard Centos 7 has a ceph client?
[18:46] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[18:46] <championofcyrodi> But not sure how that works w/ radosgw
[18:47] * jordanP (~jordan@213.215.2.194) Quit (Quit: Leaving)
[18:47] <championofcyrodi> http://ceph.com/docks/master/radosgw/ seems to be timing out
[18:47] <championofcyrodi> a lot of the ceph docs actually.
[18:48] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[18:50] <sage> vjujjuri: correct, no plan to support acks from partial writes.
[18:50] <sage> we could make an additional "semi-ack" that tells the client that the write 'is probably safe', but users need to be careful relying on it.
[18:51] <magicrobotmonkey> i have 1 unclean object, is it possible to see what it is?
[18:51] <magicrobotmonkey> er rather, 1 degraded object
[18:54] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:55] * LorenXo (~CobraKhan@5NZAABQ9W.tor-irc.dnsbl.oftc.net) Quit ()
[18:55] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) has joined #ceph
[18:55] * Spikey (~Kaervan@98EAABBGY.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:55] * ajazdzewski (~ajazdzews@lpz-66.sprd.net) Quit (Quit: Konversation terminated!)
[18:56] * kefu (~kefu@114.92.111.70) has joined #ceph
[18:57] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[19:02] * longguang_home (~chatzilla@60.28.18.192) Quit (Quit: ChatZilla 0.9.91.1 [Firefox 37.0.1/20150402191859])
[19:04] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[19:04] <skullone> is there another site that mirrors the ceph documentation? eu.ceph.com doesnt seem to have it =/
[19:04] <magicrobotmonkey> i just found this
[19:04] <magicrobotmonkey> http://dachary.org/loic/ceph-doc/
[19:04] <m0zes> https://github.com/ceph/ceph/tree/master/doc
[19:05] <skullone> awesome, thanks :)
[19:07] * sbfox (~Adium@72.2.49.50) has joined #ceph
[19:07] <championofcyrodi> well i got the swift-python-client to work, but getting: cott@cott:~/s3-curl$ swift upload --object-name Object PUT failed: http://192.168.3.122:8080/swift/v1/test/Solaris.vdi 400 Bad Request EntityTooLarge
[19:07] <championofcyrodi> ahh i see
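
(ed: the 400 EntityTooLarge means a single PUT exceeded the gateway's per-object size limit; a hedged fix with python-swiftclient is a segmented upload — the container name "test" and the 1 GB segment size here are just examples:)

    # upload in 1 GB segments so no single PUT hits the size cap
    swift upload -S 1073741824 --object-name Solaris.vdi test Solaris.vdi
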
[19:10] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:10] <vjujjuri> sage, can you expand on the "users need to be careful" part? I think swift supports it. The guarantee from the filesystem is: the third replica will be made asap, and users should be aware that 2 failures could cause data loss as opposed to 3.
[19:11] <vjujjuri> sage, question about the dependency on the primary OSD: if the primary OSD is down during a read/write, does the client need to repopulate the maps needed for CRUSH and retry the IO?
[19:11] <sage> there is no such thing as a guarantee when you don't wait for the last ack, as the first 2 (or whatever) may fail. in fact, in general, for any 2-node failure on a loaded cluster, there will likely be several writes that were acked but then lost
[19:12] <sage> so anybody using that "feature" should not write an application that actually relies on completed writes being durable, or else it will break in subtle ways
[19:21] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[19:22] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[19:25] * Spikey (~Kaervan@98EAABBGY.tor-irc.dnsbl.oftc.net) Quit ()
[19:25] * Vale (~Epi@5.79.68.161) has joined #ceph
[19:26] * hellertime (~Adium@72.246.0.14) Quit (Quit: Leaving.)
[19:26] * phoenix42 (~phoenix42@122.252.249.67) Quit (Remote host closed the connection)
[19:29] * daniel2_ (~daniel@12.0.207.18) Quit (Ping timeout: 480 seconds)
[19:30] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:31] * angdraug (~angdraug@c-50-174-102-105.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[19:32] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[19:32] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[19:34] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[19:34] * bkopilov (~bkopilov@bzq-109-65-10-75.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[19:37] * phoenix42 (~phoenix42@122.252.249.67) Quit (Remote host closed the connection)
[19:37] * reed (~reed@2602:244:b653:6830:2c27:126:1cc:4840) Quit (Quit: Ex-Chat)
[19:37] * hellertime (~Adium@72.246.0.14) has joined #ceph
[19:40] * bkopilov (~bkopilov@bzq-79-180-180-6.red.bezeqint.net) has joined #ceph
[19:42] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:44] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[19:44] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[19:45] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[19:46] * lovejoy_ (~lovejoy@57519dc8.skybroadband.com) has joined #ceph
[19:46] * reed (~reed@c-50-136-136-46.hsd1.ca.comcast.net) has joined #ceph
[19:47] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[19:48] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[19:48] * bkopilov (~bkopilov@bzq-79-180-180-6.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[19:49] * bkopilov (~bkopilov@bzq-109-66-143-37.red.bezeqint.net) has joined #ceph
[19:49] * lovejoy_ (~lovejoy@57519dc8.skybroadband.com) Quit ()
[19:51] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[19:52] * lovejoy (~lovejoy@213.83.69.6) Quit (Ping timeout: 480 seconds)
[19:54] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Read error: No route to host)
[19:54] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[19:55] * Vale (~Epi@5NZAABRDM.tor-irc.dnsbl.oftc.net) Quit ()
[19:55] * Rehevkor (~SurfMaths@5NZAABRFJ.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:56] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[19:57] * bkopilov (~bkopilov@bzq-109-66-143-37.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[19:57] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[20:02] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[20:03] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[20:03] * bitserker (~toni@85.238.9.126) Quit (Ping timeout: 480 seconds)
[20:06] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[20:06] * phoenix42 (~phoenix42@122.252.249.67) has joined #ceph
[20:07] * phoenix42 (~phoenix42@122.252.249.67) Quit (Remote host closed the connection)
[20:07] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[20:07] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[20:21] * lalatenduM (~lalatendu@122.171.65.15) has joined #ceph
[20:22] * dopesong (~dopesong@lb0.mailer.data.lt) Quit (Quit: Leaving...)
[20:23] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[20:23] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[20:23] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[20:25] * Rehevkor (~SurfMaths@5NZAABRFJ.tor-irc.dnsbl.oftc.net) Quit ()
[20:25] * utugi______ (~visored@5.61.34.63) has joined #ceph
[20:27] * puffy (~puffy@50.185.218.255) has joined #ceph
[20:28] * circ-user-sBFJA (~circuser-@198.60.31.75) has joined #ceph
[20:29] * daniel2_ (~daniel@12.164.168.117) Quit (Remote host closed the connection)
[20:30] * circ-user-sBFJA (~circuser-@198.60.31.75) Quit (Remote host closed the connection)
[20:30] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[20:32] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Quit: Leaving...)
[20:35] * Hnaf (~Hnaf@198.60.31.75) has joined #ceph
[20:36] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[20:39] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) Quit (Remote host closed the connection)
[20:42] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[20:42] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) has joined #ceph
[20:43] * Emi21 (~Emi21@madrid-s01-i01.cg-dialup.net) has joined #ceph
[20:45] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[20:45] * Emi21 (~Emi21@madrid-s01-i01.cg-dialup.net) Quit (Read error: Connection reset by peer)
[20:49] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[20:50] * linuxkidd_ (~linuxkidd@mobile-166-173-251-043.mycingular.net) Quit (Quit: Leaving)
[20:51] <devicenull> so, what can I do if I have PGs stuck incomplete... "down_osds_we_would_probe" shows they're waiting for an OSD that no longer exists
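
(ed: no answer followed in-channel; a hedged sketch of the usual last resort when peering waits on an OSD that will never return — NN is a placeholder id, and this writes off any data that only lived on that OSD:)

    # tell the cluster osd.NN is permanently gone so peering stops probing it
    ceph osd lost NN --yes-i-really-mean-it
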
[20:53] * sherlocked (~watson@14.139.82.6) has joined #ceph
[20:54] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[20:55] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[20:55] * utugi______ (~visored@5NZAABRHD.tor-irc.dnsbl.oftc.net) Quit ()
[20:55] * Moriarty (~spidu_@0.tor.exit.babylon.network) has joined #ceph
[21:03] * p66kumar (~p66kumar@74.119.205.248) Quit (Quit: p66kumar)
[21:03] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[21:07] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[21:15] * lalatenduM (~lalatendu@122.171.65.15) Quit (Quit: Leaving)
[21:25] * Moriarty (~spidu_@5NZAABRIM.tor-irc.dnsbl.oftc.net) Quit ()
[21:25] * Bobby (~luckz@tor-proxy-readme.cloudexit.eu) has joined #ceph
[21:25] * sherlocked (~watson@14.139.82.6) Quit (Quit: Leaving)
[21:32] * hellertime1 (~Adium@72.246.0.14) has joined #ceph
[21:32] * hellertime (~Adium@72.246.0.14) Quit (Read error: Connection reset by peer)
[21:34] * reed (~reed@c-50-136-136-46.hsd1.ca.comcast.net) Quit (Quit: Ex-Chat)
[21:38] * derjohn_mob (~aj@tmo-112-62.customers.d1-online.com) has joined #ceph
[21:38] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[21:39] * rendar (~I@host135-119-dynamic.57-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:41] * rendar (~I@host135-119-dynamic.57-82-r.retail.telecomitalia.it) has joined #ceph
[21:53] * bene2 (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[21:53] * bkopilov (~bkopilov@bzq-79-183-107-179.red.bezeqint.net) has joined #ceph
[21:54] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Ping timeout: 480 seconds)
[21:55] * Bobby (~luckz@5NZAABRJ8.tor-irc.dnsbl.oftc.net) Quit ()
[21:55] * Swompie` (~rcfighter@176.10.99.205) has joined #ceph
[21:58] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[21:58] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[22:05] * hellertime1 (~Adium@72.246.0.14) Quit (Quit: Leaving.)
[22:06] * hellertime (~Adium@72.246.0.14) has joined #ceph
[22:06] * tmh_ (~e@loophole.cc) has joined #ceph
[22:09] * thomnico (~thomnico@92.175.70.202) has joined #ceph
[22:11] * linuxkidd (~linuxkidd@mobile-166-173-251-043.mycingular.net) has joined #ceph
[22:12] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) Quit (Remote host closed the connection)
[22:13] * linuxkidd (~linuxkidd@mobile-166-173-251-043.mycingular.net) Quit ()
[22:13] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) has joined #ceph
[22:13] * linuxkidd (~linuxkidd@mobile-166-173-251-043.mycingular.net) has joined #ceph
[22:13] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[22:14] * puffy (~puffy@50.185.218.255) has joined #ceph
[22:15] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[22:15] * bobrik (~bobrik@83.243.64.45) has joined #ceph
[22:16] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[22:17] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[22:18] * georgem (~Adium@184.151.190.73) has joined #ceph
[22:24] * reed (~reed@2602:244:b653:6830:bc97:e4a5:9d73:7630) has joined #ceph
[22:25] * thomnico (~thomnico@92.175.70.202) Quit (Quit: Ex-Chat)
[22:25] * Swompie` (~rcfighter@5NZAABRLX.tor-irc.dnsbl.oftc.net) Quit ()
[22:25] * Enikma (~Behedwin@tor.nullbyte.me) has joined #ceph
[22:25] * vpol (~vpol@000131a0.user.oftc.net) has joined #ceph
[22:26] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:27] * tk12_ (~tk12@68.140.239.132) has joined #ceph
[22:27] <litwol> Hello
[22:27] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[22:28] <litwol> I am setting up ceph for the first time after watching many hours of talks and intros about it
[22:28] <litwol> i am testing it in a virtual machine
[22:28] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[22:28] <litwol> underlying osd storage is a single virtual disk formatted as a zfs pool
[22:29] <litwol> i have 4 monitors running without a problem.
[22:29] <litwol> i am however running into an issue with creation of the first OSD
[22:29] <litwol> my expectation is to see ceph utilize zfs native support for snapshots
[22:29] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[22:30] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:30] <bstillwell> You want an odd number of monitors to avoid split brain.
[22:30] <litwol> however having created a new (first) osd, i see that there's no snapshotting going on
[22:30] <litwol> bstillwell: yes. 4 mons are up. 3 are in a quorum.
[22:30] <bstillwell> I don't know if zfs snapshots are supported, someone else will have to chime in.
[22:31] <bstillwell> afaik most people are using xfs
[22:31] <litwol> that doesn't help me much
[22:32] <gregsfortytwo> you won't get split brain with an even number of monitors, it just doesn't add resiliency beyond the previous odd number
[22:32] <gregsfortytwo> there are many discussions on the list about this if you want more info :)
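
(ed: the arithmetic behind gregsfortytwo's point: quorum needs a strict majority, floor(N/2)+1, so 3 mons tolerate 1 failure, 4 mons still tolerate only 1, and 5 tolerate 2 — an even count adds a failure point without adding tolerance.)
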
[22:33] <litwol> I am following this tutorial https://wiki.gentoo.org/wiki/Ceph
[22:33] * tk12 (~tk12@68.140.239.132) Quit (Ping timeout: 480 seconds)
[22:34] * tmh_ (~e@loophole.cc) Quit (Killed (NickServ (Too many failed password attempts.)))
[22:35] * tmh_ (~e@loophole.cc) has joined #ceph
[22:35] * adeel (~adeel@fw1.ridgeway.scc-zip.net) Quit (Quit: Leaving...)
[22:36] <litwol> Does ceph manage the underlying filesystem automatically? or is there some kind of configuration that i can use to control, or at least hint to ceph, what to do?
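
(ed: no direct answer followed; for the ZFS snapshot question specifically, a hedged sketch — FileStore carried a ZFS snapshot option at the time, but it required a ceph build with ZFS support, so treat this as an assumption to verify rather than a confirmed recipe:)

    [osd]
    # FileStore option enabling ZFS snapshot integration (default false)
    filestore zfs snap = true
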
[22:36] <bstillwell> gregsfortytwo: thanks, I'll take a look
[22:36] * linuxkidd (~linuxkidd@mobile-166-173-251-043.mycingular.net) Quit (Quit: Leaving)
[22:44] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:45] * hellertime (~Adium@72.246.0.14) Quit (Quit: Leaving.)
[22:47] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[22:49] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[22:49] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[22:54] * tmh_ (~e@loophole.cc) Quit (Ping timeout: 480 seconds)
[22:55] * Enikma (~Behedwin@2WVAABONL.tor-irc.dnsbl.oftc.net) Quit ()
[22:57] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:00] * fsimonce (~simon@host178-188-dynamic.26-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[23:01] * fireD (~fireD@93-142-231-194.adsl.net.t-com.hr) Quit (Quit: leaving)
[23:01] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has left #ceph
[23:04] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[23:05] * georgem (~Adium@184.151.190.73) Quit (Quit: Leaving.)
[23:12] * p66kumar (~p66kumar@74.119.205.248) has joined #ceph
[23:13] * sbfox (~Adium@72.2.49.50) has joined #ceph
[23:15] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[23:16] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[23:17] * vpol (~vpol@000131a0.user.oftc.net) Quit (Quit: vpol)
[23:24] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:25] * Nephyrin (~Pirate@5NZAABRQW.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:28] <litwol> Is there a way to reset "HEALTH_WARN clock skew detected" without restarting the monitor?
[23:28] <litwol> i've enabled ntp
[23:29] <litwol> but health warning stayed
[23:29] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[23:29] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit ()
[23:31] * nljmo_ (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (Quit: Textual IRC Client: www.textualapp.com)
[23:33] <jcsp1> it generally takes some time to go away as the clocks drift back into sync
[23:36] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[23:36] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[23:39] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[23:42] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:43] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) has joined #ceph
[23:46] <litwol> jcsp1: Is there a specific timer that can be specified in config?
[23:46] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[23:47] <litwol> jcsp1: better yet, is there a "ceph [xx]" command that triggers a cluster-wide health check which would remove the health_warn?
[23:47] <litwol> even better if there's a docs page which described this that i can read
[23:47] <litwol> :)
[23:48] <florz> litwol: are the clocks actually in sync?
[23:48] <lurbs> litwol: I've seen that too. Not sure if the WARN state eventually clears - I just restarted the monitors.
[23:48] <litwol> florz: they were put in sync /after/ health_warn was initiated.
[23:51] <florz> litwol: well, jcsp1's point was that enabling NTP does not necessarily sync the clock immediately
[23:51] <litwol> i understand. but in my case it did :)
[23:51] <litwol> i've made sure of it
[23:51] <litwol> gentoo splits ntp into a syncing daemon and a command that syncs time "now!"
[23:51] <litwol> i've run both.
[23:52] <litwol> and then checked time to be in sync
[23:52] <florz> in that case, no clue, but restart shouldn't hurt ;-)
[23:52] <litwol> salt ceph-node-\* cmd.run 'date'
[23:52] <litwol> that shows identical time
[23:53] <jcsp1> …to within the nearest whole second
[23:54] <jcsp1> ceph's default tolerance is 0.05s
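
(ed: the tolerance jcsp1 quotes maps to a monitor option; a sketch of the relevant ceph.conf knobs with their defaults — loosening them hides skew rather than fixing it:)

    [mon]
    # max clock drift between monitors before HEALTH_WARN (default 0.05s)
    mon clock drift allowed = 0.05
    # exponential backoff applied to repeated skew warnings (default 5)
    mon clock drift warn backoff = 5
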
[23:54] <florz> (though actually I'd guess that the health warning shouldn't really hurt either?)
[23:55] * Nephyrin (~Pirate@5NZAABRQW.tor-irc.dnsbl.oftc.net) Quit ()
[23:55] * Wizeon (~Jones@tor.het.net) has joined #ceph
[23:58] <gregsfortytwo> I think there was a bug at one point where this state didn't get cleared properly after being fixed; what version are you running and have you checked subsequent release notes for anything about clock skew detection?
[23:59] <litwol> i did not.
[23:59] <litwol> i was only interested in whether there's a "ceph XX" command that forces a cluster health check, or clears the previous warning/status.

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.