#ceph IRC Log

IRC Log for 2014-09-02

Timestamps are in GMT/BST.

[0:04] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:10] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[0:18] * AfC (~andrew@customer-hotspot.esshotell.se) has joined #ceph
[0:25] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[0:25] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:31] * AfC (~andrew@customer-hotspot.esshotell.se) Quit (Ping timeout: 480 seconds)
[0:31] * monsterzz (~monsterzz@94.19.146.224) Quit (Read error: Connection reset by peer)
[0:32] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[0:32] * rendar (~I@host39-6-dynamic.7-79-r.retail.telecomitalia.it) Quit ()
[0:35] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[0:40] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[0:44] * Pedras (~Adium@50.185.218.255) has joined #ceph
[0:49] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[0:51] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[1:01] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[1:05] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[1:10] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[1:13] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[1:18] * oms101 (~oms101@p20030057EA597000C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:18] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[1:20] * tinklebear (~tinklebea@66.55.144.246) Quit (Quit: Nettalk6 - www.ntalk.de)
[1:22] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[1:23] * scuttle|afk is now known as scuttlemonkey
[1:26] * oms101 (~oms101@p20030057EA405500C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:27] * sleinen1 (~Adium@2001:620:0:68::104) Quit (Ping timeout: 480 seconds)
[1:30] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[1:37] * carrot (~oftc-webi@103.6.103.83) has joined #ceph
[1:38] <carrot> Hello, is there webmaster of ceph.com ?
[1:40] * carrot slaps sage around a bit with a large fishbot
[1:41] * carrot (~oftc-webi@103.6.103.83) Quit (Remote host closed the connection)
[1:43] * Pedras (~Adium@50.185.218.255) has joined #ceph
[1:54] <iggy> it's probably somebody else
[1:54] <iggy> the person I was going to suggest isn't online right now
[1:55] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[2:04] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:05] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[2:06] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:07] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[2:12] * car (~oftc-webi@103.6.103.83) has joined #ceph
[2:13] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[2:13] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[2:16] * adamcrume (~quassel@2601:9:6680:47:9436:783b:dc33:41a0) Quit (Remote host closed the connection)
[2:17] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[2:18] * car (~oftc-webi@103.6.103.83) Quit (Remote host closed the connection)
[2:19] * carrot (carrot@103.6.103.83) has joined #ceph
[2:21] <carrot> Hello, who is the web admin of ceph.com ?
[2:33] * lofejndif (~lsqavnbok@37PAABKRI.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[2:39] <carrot> hmm
[2:39] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[2:42] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[2:44] * alekseyp (~masta@190.7.213.210) Quit (Quit: Leaving...)
[2:45] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[2:46] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[2:48] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[2:51] <carrot> I can't reach ceph.com with 403 forbidden, I think web admin block it by IP address.
[2:53] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Quit: Leaving)
[2:56] * linuxkidd (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) Quit (Quit: Leaving)
[2:56] <jcsp> carrot: works for me. http://www.downforeveryoneorjustme.com/ceph.com
[2:58] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[2:58] <carrot> jcsp / I know it, I think ceph's web admin block all Korea's IP.
[2:58] <carrot> If I using proxy server ( china, us, japan .. etc ) then I can reach ceph.com
[2:59] <carrot> I changed many ISPs in Korea, can't reach it.
[3:06] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[3:06] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[3:09] <iggy> probably trying to cut down on wiki spam
[3:10] <carrot> iggy/ Hmm.. how to solve this ?
[3:11] <iggy> you said you already got around it by using a proxy server... seems like a valid way to solve it
[3:12] <blahnana> yup
[3:14] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[3:14] <carrot> But I need to install ceph on my server, run ceph-deploy & download to https://ceph.com/... 403 forbidden
[3:29] <blahnana> how are you trying to download it on your server? can you not use a proxy server there?
[3:36] * KevinPerks (~Adium@2606:a000:80a1:1b00:f168:fadf:43:a31b) Quit (Quit: Leaving.)
[3:37] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[3:40] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[3:40] * mfa298 (~mfa298@gateway.yapd.net) Quit (Ping timeout: 480 seconds)
[3:42] * zhaochao (~zhaochao@111.204.252.1) has joined #ceph
[3:42] * mtl2 (~Adium@c-98-245-49-17.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[3:43] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[3:46] * wangqty (~qiang@111.161.17.105) has joined #ceph
[3:48] <carrot> blahnana / the proxy server doesn't support https. It has to connect directly.
[3:48] * zack_dolby (~textual@e0109-114-22-14-33.uqwimax.jp) has joined #ceph
[3:49] * mfa298 (~mfa298@gateway.yapd.net) has joined #ceph
[3:50] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[3:52] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[3:52] * bipinkunal (~bkunal@1.23.195.149) has joined #ceph
[4:01] * haomaiwang (~haomaiwan@124.248.205.4) has joined #ceph
[4:04] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[4:07] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[4:09] * lucas1 (~Thunderbi@222.247.57.50) Quit (Remote host closed the connection)
[4:15] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[4:21] * apolloJess (~Thunderbi@202.60.8.252) has joined #ceph
[4:22] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[4:27] * jtaguinerd (~jtaguiner@203.215.116.66) has joined #ceph
[4:36] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[4:42] * bipinkunal (~bkunal@1.23.195.149) Quit (Ping timeout: 480 seconds)
[4:42] * haomaiwang (~haomaiwan@124.248.205.4) Quit (Read error: Connection reset by peer)
[4:48] * haomaiwang (~haomaiwan@124.248.205.4) has joined #ceph
[5:08] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[5:09] * monsterz_ (~monsterzz@94.19.146.224) has joined #ceph
[5:09] * monsterzz (~monsterzz@94.19.146.224) Quit (Read error: Connection reset by peer)
[5:15] * haomaiwang (~haomaiwan@124.248.205.4) Quit (Read error: Connection reset by peer)
[5:15] * haomaiwang (~haomaiwan@124.248.205.4) has joined #ceph
[5:17] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[5:17] * monsterz_ (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[5:23] * haomaiwang (~haomaiwan@124.248.205.4) Quit (Ping timeout: 480 seconds)
[5:27] * Vacuum (~vovo@i59F791D3.versanet.de) has joined #ceph
[5:28] * haomaiwang (~haomaiwan@124.248.205.4) has joined #ceph
[5:31] * yanzheng1 (~zhyan@171.221.139.239) has joined #ceph
[5:32] * longguang (~chatzilla@123.126.33.253) Quit (Read error: Connection reset by peer)
[5:34] * Vacuum_ (~vovo@88.130.214.161) Quit (Ping timeout: 480 seconds)
[5:34] * yanzheng (~zhyan@171.221.143.132) Quit (Ping timeout: 480 seconds)
[5:43] * bipinkunal (~bkunal@121.244.87.115) has joined #ceph
[5:43] * Sysadmin88 (~IceChat77@176.250.164.108) has joined #ceph
[5:56] * haomaiwang (~haomaiwan@124.248.205.4) Quit (Ping timeout: 480 seconds)
[6:01] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Remote host closed the connection)
[6:03] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[6:06] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[6:20] * wangqty (~qiang@111.161.17.105) Quit (Quit: Leaving.)
[6:21] * yanzheng1 (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[6:22] * Lingo (~Lingo@192.200.151.164) has joined #ceph
[6:22] <Lingo> hello
[6:23] <Sysadmin88> hello
[6:24] <Lingo> I'm a ceph newer, I'm learning ceph source code.
[6:24] <Lingo> but I found it's difficult to learn.
[6:24] <Sysadmin88> what are you trying to do?
[6:25] <Lingo> I want to learn more about rados gateway
[6:25] <Sysadmin88> you want to learn about it or use it?
[6:26] <Lingo> my company wants to use ceph to build a cloud storage server.
[6:26] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[6:27] <Sysadmin88> have you considered how much storage you want, and what hardware you can put in place?
[6:32] <Lingo> I have not clear idea for the storage and hardware now, maybe I'll learn more about that.Now I just want to learn and understand the sourcecode.
[6:33] <Lingo> I look up the docs on the ceph official website, I didn't found any docs about the code, like the flow chart of call relationship etc.
[6:33] <Lingo> I have learn openstack swift before, it's easier to ceph.
[6:33] <Sysadmin88> the documentation should tell you how to get it working
[6:33] <Sysadmin88> not usually a map of the source...
[6:34] <Sysadmin88> have you tried ceph and got it working on some test VMs/machines?
[6:38] <Lingo> Not yet, but we have VMs running ceph, include gateway and rados.
[6:38] <Sysadmin88> hopefully OSDs...
[6:38] <Sysadmin88> and monitors
[6:40] <Lingo> Ceph is write with C++, do you know what IDE use when developing ceph?
[6:40] <Sysadmin88> no, don't think i need to... i am not trying to rewrite it
[6:43] <Lingo> yes, we don't rewrite ceph at the begining, but we think maybe we will add custom function with it.
[6:44] <Sysadmin88> you said you dont even know how much storage your looking to put in. or the hardware, but your interested in customizing it? i think you would need to know how to use it and the hardware/storage you have before you could even approach custom stuff
[6:47] <Lingo> yes,you're right, I'll learn how to use it first
[6:47] <Sysadmin88> what sort of thing are you looking to do with ceph?
[6:50] <Lingo> my company have many systems, the image data of these systems save on the NAS and db before, we want to move the data to ceph.
[6:55] * Lingo (~Lingo@192.200.151.164) Quit (Quit: Lingo - http://www.lingoirc.com)
[6:57] * vbellur (~vijay@121.244.87.117) has joined #ceph
[6:58] * wangqty (~qiang@125.33.118.141) has joined #ceph
[7:04] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[7:05] * rdas (~rdas@121.244.87.115) has joined #ceph
[7:10] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[7:21] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[7:21] * wangqty (~qiang@125.33.118.141) Quit (Remote host closed the connection)
[7:23] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[7:24] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[7:24] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[7:26] * jtaguinerd1 (~jtaguiner@203.215.120.254) has joined #ceph
[7:28] * jtaguinerd2 (~jtaguiner@203.215.116.66) has joined #ceph
[7:28] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[7:29] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[7:32] * jtaguinerd (~jtaguiner@203.215.116.66) Quit (Ping timeout: 480 seconds)
[7:33] * michalefty (~micha@p20030071CF057100F1A43B84573DFEAA.dip0.t-ipconnect.de) has joined #ceph
[7:33] * zhaozhiming (~zhaozhimi@118.122.91.223) has joined #ceph
[7:34] * jtaguinerd1 (~jtaguiner@203.215.120.254) Quit (Ping timeout: 480 seconds)
[7:35] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:38] * zhaozhiming_ (~zhaozhimi@192.200.151.177) has joined #ceph
[7:38] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[7:38] * zhaozhiming_ (~zhaozhimi@192.200.151.177) Quit (Remote host closed the connection)
[7:38] * zhaozhiming_ (~zhaozhimi@192.200.151.177) has joined #ceph
[7:43] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[7:45] * jtaguinerd (~jtaguiner@112.205.19.199) has joined #ceph
[7:45] * zhaozhiming (~zhaozhimi@118.122.91.223) Quit (Ping timeout: 480 seconds)
[7:48] * shang (~ShangWu@111-83-21-88.EMOME-IP.hinet.net) has joined #ceph
[7:48] * zhaozhiming__ (~zhaozhimi@118.122.91.223) has joined #ceph
[7:50] * jtaguinerd2 (~jtaguiner@203.215.116.66) Quit (Ping timeout: 480 seconds)
[7:50] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Read error: Connection reset by peer)
[7:54] * Nacer (~Nacer@2001:41d0:fe82:7200:a991:d695:7028:bb6f) has joined #ceph
[7:54] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[7:55] * zhaozhiming_ (~zhaozhimi@192.200.151.177) Quit (Ping timeout: 480 seconds)
[7:58] * zhaozhiming__ (~zhaozhimi@118.122.91.223) Quit (Remote host closed the connection)
[7:58] * zhaozhiming (~zhaozhimi@192.200.151.208) has joined #ceph
[7:58] * zhaozhiming (~zhaozhimi@192.200.151.208) Quit (Remote host closed the connection)
[7:58] * zhaozhiming (~zhaozhimi@192.200.151.208) has joined #ceph
[8:02] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[8:03] * shang (~ShangWu@111-83-21-88.EMOME-IP.hinet.net) Quit (Quit: Ex-Chat)
[8:04] * shang (~ShangWu@111-83-21-88.EMOME-IP.hinet.net) has joined #ceph
[8:12] * Nats_ (~natscogs@114.31.195.238) has joined #ceph
[8:12] * Nats (~natscogs@114.31.195.238) Quit (Read error: Connection reset by peer)
[8:12] * shang (~ShangWu@111-83-21-88.EMOME-IP.hinet.net) Quit (Read error: Connection reset by peer)
[8:16] * Nacer (~Nacer@2001:41d0:fe82:7200:a991:d695:7028:bb6f) Quit (Remote host closed the connection)
[8:16] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[8:17] * Nacer (~Nacer@2001:41d0:fe82:7200:a991:d695:7028:bb6f) has joined #ceph
[8:22] * Concubidated (~Adium@66-87-142-242.pools.spcsdns.net) has joined #ceph
[8:23] * Nats (~natscogs@114.31.195.238) has joined #ceph
[8:23] * Nats_ (~natscogs@114.31.195.238) Quit (Read error: Connection reset by peer)
[8:24] * zhaozhiming (~zhaozhimi@192.200.151.208) Quit (Quit: Computer has gone to sleep.)
[8:25] * Nacer (~Nacer@2001:41d0:fe82:7200:a991:d695:7028:bb6f) Quit (Ping timeout: 480 seconds)
[8:25] * zhaozhiming (~zhaozhimi@192.200.151.208) has joined #ceph
[8:26] * shang (~ShangWu@111-83-21-88.EMOME-IP.hinet.net) has joined #ceph
[8:29] * branto (~borix@ip-213-220-214-245.net.upcbroadband.cz) has joined #ceph
[8:32] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[8:32] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[8:32] * zhaozhiming_ (~zhaozhimi@192.200.151.214) has joined #ceph
[8:33] * zhaozhiming (~zhaozhimi@192.200.151.208) Quit (Ping timeout: 480 seconds)
[8:39] * shang (~ShangWu@111-83-21-88.EMOME-IP.hinet.net) Quit (Quit: Ex-Chat)
[8:43] * zhaozhiming_ (~zhaozhimi@192.200.151.214) Quit (Ping timeout: 480 seconds)
[8:48] * rendar (~I@host187-180-dynamic.32-79-r.retail.telecomitalia.it) has joined #ceph
[8:51] * Concubidated (~Adium@66-87-142-242.pools.spcsdns.net) Quit (Quit: Leaving.)
[8:56] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:58] * garphy`aw is now known as garphy
[9:08] * tab (~oftc-webi@194.249.247.164) Quit (Remote host closed the connection)
[9:10] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:13] * Scar3cr0w (~Scar3cr0w@173-13-173-53-sfba.hfc.comcastbusiness.net) Quit (Quit: Later)
[9:16] * AfC (~andrew@93.94.208.154) has joined #ceph
[9:16] * simulx2 (~simulx@vpn.expressionanalysis.com) Quit (Quit: Nettalk6 - www.ntalk.de)
[9:17] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[9:21] * peedu (~peedu@185.46.20.35) has joined #ceph
[9:22] <peedu> hi, is there a way to config ceph osd-s to use disk labels? Yesterday changing one disk server got kernel panic, and after reboot disk names eg sda,sdb had changed and all osd went down.
[9:22] <kraken> http://i.imgur.com/WS4S2.gif
[9:23] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[9:23] <mgarcesMZ> hi guys
[9:23] <mgarcesMZ> I was wondering if anyone from Inktank hangs around?
[9:24] <mgarcesMZ> my company is looking for Ceph training
[9:25] * vbellur (~vijay@209.132.188.8) has joined #ceph
[9:25] <Kioob`Taff> peedu : for that problem I use disk UUID (and part-uuid for journals). Did you try that ?
[9:25] <fghaas> mgarcesMZ: we're an Inktank/Red Hat partner for Ceph training, if that helps. Where are you located?
[9:26] <mgarcesMZ> fghaas: Mozambique
[9:26] <mgarcesMZ> :)
[9:26] <mgarcesMZ> you?
[9:26] <kraken> you are awesome (alfredodeza on 07/25/2014 04:01PM)
[9:26] <fghaas> I'm in Europe, but my company does Ceph training world wide. Are you looking for on-site or on-line classes?
[9:27] <mgarcesMZ> It would be more cost effective for us to do virtual trainning
[9:28] <mgarcesMZ> I saw CEPH100 and CEPH110 courses in Inktank site
[9:28] <mgarcesMZ> look very interesting
[9:28] <peedu> Kioob`Taff will look into that ty
[9:29] <Kioob`Taff> for example in fstab : UUID=cf46cc7b-6723-467c-bde0-01141d83d7ed /var/lib/ceph/osd/ceph-23 xfs rw,noatime,nodev,nosuid,noquota,attr2,inode64,logbsize=256k,allocsize=4m 0 2
[9:29] <mgarcesMZ> fghaas: where in europe?
[9:29] <Kioob`Taff> and in ceph.conf : [osd.36]
[9:29] <Kioob`Taff> host = stor6
[9:29] <Kioob`Taff> osd journal = /dev/disk/by-partuuid/3c51d0a2-c1e1-4651-8dd3-424b53527b29
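
(For reference, a minimal sketch of how to look up the identifiers used in fstab and ceph.conf lines like the ones Kioob`Taff pastes above; the device name is a placeholder.)

    blkid /dev/sdb1                   # prints the filesystem UUID to use in the fstab entry
    ls -l /dev/disk/by-partuuid/      # lists partition UUIDs to use for "osd journal"
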
[9:30] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[9:30] <fghaas> me? Vienna. I'm actually teaching those exact two courses in .cz week after next :)
[9:30] <fghaas> mgarcesMZ: how many would-be attendees?
[9:30] <mgarcesMZ> 2
[9:30] <mgarcesMZ> perhaps more, but 2 for sure
[9:31] <mgarcesMZ> the thing is, I am preparing a big project around ceph
[9:31] <mgarcesMZ> and they want to rush this to production
[9:31] <mgarcesMZ> I just want to go trough training, so I can take care of learning stuff I might not cover by reading the docs
[9:32] <fghaas> mgarcesMZ: you could write a quick email to training@hastexo.com and I'll see what we can do. I am presently not aware of an online training option that Inktank offers, but I'll be happy to discuss this with my contacts at Inktank - it's quite possible that we can help you out there.
[9:32] <mgarcesMZ> fghaas: thank you
[9:32] <mgarcesMZ> I already sent the request on Inktank site
[9:33] <mgarcesMZ> to get more info on those 2 courses
[9:33] <fghaas> sure, but drop us a quick line anyway -- sometimes we can help speed up the process a bit. :)
[9:33] <mgarcesMZ> thanks :)
[9:34] <mgarcesMZ> I have so many stuff I still need to learn
[9:34] <mgarcesMZ> getting training is the best way, because I get to stay focused on that
[9:38] * analbeard (~shw@support.memset.com) has joined #ceph
[9:40] * florent (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) has joined #ceph
[9:44] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[9:44] * boichev2 (~boichev@213.169.56.130) has joined #ceph
[9:45] * haomaiwang (~haomaiwan@182.48.117.114) has joined #ceph
[9:45] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:47] * zack_dolby (~textual@e0109-114-22-14-33.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[9:47] <mgarcesMZ> fghaas: email sent :)
[9:48] <fghaas> mgarcesMZ: reply sent :)
[9:48] <mgarcesMZ> that was fast! :)
[9:48] * boichev (~boichev@213.169.56.130) Quit (Ping timeout: 480 seconds)
[9:49] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Remote host closed the connection)
[9:51] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[9:52] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[9:55] <fghaas> mgarcesMZ: we aim to please :)
[9:56] <mgarcesMZ> eheheh
[9:57] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[9:57] <mgarcesMZ> fghaas: so refreshing to see a non html email
[9:57] <fghaas> mgarcesMZ: HTML email is evil, I avoid it like the plague wherever I can
[9:58] <mgarcesMZ> I wish I could
[9:58] <mgarcesMZ> not my decision to make
[9:58] <fghaas> I could also send you a PGP/MIME encrypted one for your geeking pleasure :)
[9:59] <mgarcesMZ> no no, thats to much emotion for the morning :D
[10:00] * zack_dolby (~textual@ai126184055119.15.access-internet.ne.jp) has joined #ceph
[10:03] * fdmanana (~fdmanana@bl5-77-181.dsl.telepac.pt) has joined #ceph
[10:04] * lucas1 (~Thunderbi@222.247.57.50) Quit (Quit: lucas1)
[10:04] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[10:04] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[10:05] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[10:06] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[10:08] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:08] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) Quit (Quit: ZNC - http://znc.in)
[10:09] * saurabh (~saurabh@209.132.188.8) has joined #ceph
[10:11] * michalefty (~micha@p20030071CF057100F1A43B84573DFEAA.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[10:16] * vbellur (~vijay@121.244.87.117) has joined #ceph
[10:19] <ghartz> leseb, why in your post you disable the update on start ? http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
[10:20] <ghartz> old stopped OSD (old crushmap) may erase the new crushmap ?
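
(The setting that post toggles is, presumably, the one that stops an OSD from re-registering itself under the default CRUSH location on startup, so a hand-built SSD/SATA split is not undone by a restart; a minimal ceph.conf sketch:)

    [osd]
        # keep restarted OSDs where the custom CRUSH map placed them
        osd crush update on start = false
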
[10:21] * Sysadmin88 (~IceChat77@176.250.164.108) Quit (Quit: I used to think I was indecisive, but now I'm not too sure.)
[10:21] * AfC (~andrew@93.94.208.154) Quit (Ping timeout: 480 seconds)
[10:22] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[10:22] * michalefty (~micha@p20030071CF082B00F1A43B84573DFEAA.dip0.t-ipconnect.de) has joined #ceph
[10:28] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) has joined #ceph
[10:30] * AfC (~andrew@93.94.208.154) has joined #ceph
[10:33] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Remote host closed the connection)
[10:36] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[10:37] <mgarcesMZ> which caching mechanism using SSD do you guys recommend?
[10:37] * saurabh (~saurabh@209.132.188.8) Quit (Ping timeout: 480 seconds)
[10:39] * boichev (~boichev@213.169.56.130) has joined #ceph
[10:41] <chowmeined> mgarcesMZ, SSD journals make a big difference
[10:41] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[10:42] <chowmeined> and they're the easiest to get started with
[10:42] <mgarcesMZ> but this is OS level, or Ceph level?
[10:42] <chowmeined> i didnt have much luck with SSD tier, but I don't think i had enough hardware
[10:43] <chowmeined> placing the OSD journal on SSD happens at the Ceph level
[10:43] * blackmen (~Ajit@121.244.87.115) has joined #ceph
[10:43] * boichev3 (~boichev@213.169.56.130) has joined #ceph
[10:43] <chowmeined> theres also SSD tiering at the Ceph level
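
(A rough sketch of the cache-tiering commands chowmeined is alluding to, as they exist in firefly; pool names are placeholders and the hit_set/flush parameters need real tuning:)

    ceph osd tier add cold-pool hot-pool             # attach the SSD pool as a cache tier
    ceph osd tier cache-mode hot-pool writeback      # cache writes as well as reads
    ceph osd tier set-overlay cold-pool hot-pool     # route client I/O through the cache
    ceph osd pool set hot-pool hit_set_type bloom    # the tiering agent needs a hit set
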
[10:43] * boichev2 (~boichev@213.169.56.130) Quit (Ping timeout: 480 seconds)
[10:45] <mgarcesMZ> I was looking into something like dm-cache
[10:45] <chowmeined> there is also that, but ive heard mixed things when it comes to using it with Ceph
[10:46] <ghartz> mgarcesMZ, I test "all" cache type for ceph
[10:46] * zack_dolby (~textual@ai126184055119.15.access-internet.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[10:46] <chowmeined> iirc, you dont want to use dm-cache 'below' Ceph, at the OSD layer
[10:46] <ghartz> horrible performance/behavior
[10:46] <chowmeined> but you could try using dm-cache on top of rbd
[10:46] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[10:46] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) has joined #ceph
[10:47] * boichev4 (~boichev@213.169.56.130) has joined #ceph
[10:47] <ghartz> there is fs cache which is native with latest kernel
[10:47] * boichev (~boichev@213.169.56.130) Quit (Read error: Operation timed out)
[10:47] <mgarcesMZ> im not using rbd
[10:48] <mgarcesMZ> object storage only
[10:48] <chowmeined> and do you want to cache reads or writes or both?
[10:49] <wonko_be> hi all
[10:49] <wonko_be> is there anything in ceph to provide secure deletion of data? something that will wipe/overwrite the blocks on disk with zero's or random data?
[10:50] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[10:51] * boichev3 (~boichev@213.169.56.130) Quit (Ping timeout: 480 seconds)
[10:51] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[10:52] * DV_ (~veillard@veillard.com) Quit (Ping timeout: 480 seconds)
[10:55] <mgarcesMZ> wonko_be: dd ? :)
[10:55] <mgarcesMZ> chowmeined: both
[10:55] <mgarcesMZ> but most important are writes
[10:56] * boichev5 (~boichev@213.169.56.130) has joined #ceph
[10:59] * nljmo_ (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[10:59] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[10:59] * boichev4 (~boichev@213.169.56.130) Quit (Ping timeout: 480 seconds)
[10:59] <wonko_be> mgarcesMZ: it should be provided through the normal attributes when using CephFS (+s xattr i think), or through some setting in ceph config...
[11:00] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:55] <mgarcesMZ> wonko_be: I was referring to using the command "dd", like "dd if=/dev/zero of=/volume"
[11:01] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) Quit (Quit: Leaving)
[11:01] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[11:02] <wonko_be> yeah, but that isn't a solution
[11:02] <wonko_be> I can't force my clients to implement this in their applications
[11:02] <mgarcesMZ> thats why I did the :)
[11:02] <mgarcesMZ> sorry
[11:03] <wonko_be> no problem... actually, dd was the first thing I suggested to the client :)
[11:03] <wonko_be> but that turned out to be a no-go
[11:03] * boichev6 (~boichev@213.169.56.130) has joined #ceph
[11:04] * darkling (~hrm@00012bd0.user.oftc.net) has joined #ceph
[11:06] * boichev5 (~boichev@213.169.56.130) Quit (Ping timeout: 480 seconds)
[11:13] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[11:14] * boichev7 (~boichev@213.169.56.130) has joined #ceph
[11:15] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[11:15] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[11:16] * thomnico (~thomnico@2a01:e35:8b41:120:2135:9410:bc07:ad4a) Quit (Quit: Ex-Chat)
[11:16] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[11:17] * zhaozhiming (~zhaozhimi@192.200.151.168) has joined #ceph
[11:18] * zhaozhiming (~zhaozhimi@192.200.151.168) Quit (Remote host closed the connection)
[11:18] * zhaozhiming (~zhaozhimi@192.200.151.168) has joined #ceph
[11:18] * boichev6 (~boichev@213.169.56.130) Quit (Ping timeout: 480 seconds)
[11:19] * boichev8 (~boichev@213.169.56.130) has joined #ceph
[11:20] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) has joined #ceph
[11:21] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[11:22] * boichev7 (~boichev@213.169.56.130) Quit (Ping timeout: 480 seconds)
[11:25] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:30] <loicd> leseb: what document would you recommend to better understand how to choose the number PG ? A more verbose version of http://ceph.com/docs/master/rados/operations/placement-groups/
[11:32] * loicd reading http://blog.bit-isle.jp/bird/category/ceph
[11:35] <leseb> loicd: hum I believe it's a good start yes
[11:36] <leseb> loicd: however choosing the right amount of PGs remains a bit obscure, there are no accurate rules
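
(The placement-groups page linked above does give a rule of thumb, roughly (OSDs * 100) / replica count, rounded up to a power of two. A worked example, assuming 20 OSDs and 3-way replication: 20 * 100 / 3 is about 667, so pg_num = 1024.)
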
[11:40] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[11:44] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[11:44] * AfC (~andrew@93.94.208.154) Quit (Ping timeout: 480 seconds)
[11:47] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[11:49] <mgarcesMZ> loicd: the link is all in Japanese... I can't understand anything... :)
[11:50] <loicd> :-)
[11:53] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[11:53] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[11:54] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:54] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) has joined #ceph
[11:54] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[11:55] * zhaozhiming_ (~zhaozhimi@118.122.91.223) has joined #ceph
[11:56] * zhaozhiming (~zhaozhimi@192.200.151.168) Quit (Ping timeout: 480 seconds)
[11:56] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) Quit ()
[11:56] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[11:58] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) has joined #ceph
[11:59] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[12:01] * zhaozhiming_ (~zhaozhimi@118.122.91.223) Quit (Remote host closed the connection)
[12:01] * zhaozhiming (~zhaozhimi@192.200.151.170) has joined #ceph
[12:02] * zhaozhiming (~zhaozhimi@192.200.151.170) Quit (Remote host closed the connection)
[12:02] * zhaozhiming (~zhaozhimi@192.200.151.170) has joined #ceph
[12:02] * zhaozhiming (~zhaozhimi@192.200.151.170) Quit ()
[12:02] * analbeard1 (~shw@support.memset.com) has joined #ceph
[12:06] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[12:10] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[12:10] <mgarcesMZ> I wish there was a good book on ceph
[12:10] <mgarcesMZ> Im trying to transform the documentation in epub/mobi format
[12:10] <mgarcesMZ> but its all sparsed
[12:11] * lucas1 (~Thunderbi@222.240.148.154) Quit (Quit: lucas1)
[12:16] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Quit: Quitte)
[12:22] <mgarcesMZ> uhoh: health HEALTH_WARN pool .rgw.buckets has too few pgs
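
(One way to act on that warning, sketched with an illustrative number; pg_num can only be increased, and pgp_num should be raised to match afterwards:)

    ceph osd pool get .rgw.buckets pg_num        # current value
    ceph osd pool set .rgw.buckets pg_num 128    # pick a target using the placement-group rule of thumb
    ceph osd pool set .rgw.buckets pgp_num 128   # then bump pgp_num so the new PGs actually rebalance
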
[12:23] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has left #ceph
[12:25] * analbeard1 (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[12:34] * florent (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) Quit (Ping timeout: 480 seconds)
[12:35] * analbeard (~shw@support.memset.com) has joined #ceph
[12:37] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[12:45] * zhaochao (~zhaochao@111.204.252.1) has left #ceph
[12:46] * linjan (~linjan@176.195.6.203) has joined #ceph
[12:50] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[12:50] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[12:55] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[12:56] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:57] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[13:11] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[13:12] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[13:13] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[13:13] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[13:14] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[13:14] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[13:15] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[13:15] * RameshN (~rnachimu@121.244.87.117) Quit (Remote host closed the connection)
[13:15] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[13:15] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[13:16] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[13:16] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[13:17] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[13:17] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[13:18] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[13:18] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[13:23] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[13:23] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[13:23] * florent (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) has joined #ceph
[13:24] * lucas1 (~Thunderbi@218.76.25.66) Quit ()
[13:24] * i_m (~ivan.miro@deibp9eh1--blueice3n2.emea.ibm.com) has joined #ceph
[13:26] * peedu_ (~peedu@170.91.235.80.dyn.estpak.ee) has joined #ceph
[13:26] * tab (~oftc-webi@194.249.247.164) has joined #ceph
[13:28] <tab> Question. Is it possible for objects, stored through librados interface, to be accessed through RadosGW? As I read, for now, there is no chance to read objects through radosgw interface if they were written using RBD interface, but it is like a feature in progress.
[13:29] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[13:31] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[13:32] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[13:32] * peedu (~peedu@185.46.20.35) Quit (Read error: No route to host)
[13:32] * florent_ (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) has joined #ceph
[13:33] * peedu (~peedu@185.46.20.35) has joined #ceph
[13:34] * analbeard (~shw@support.memset.com) has joined #ceph
[13:35] * florent_ (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) Quit ()
[13:35] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[13:37] * true (~antrue@2a02:6b8:0:401:697c:6695:cdd2:4d89) has joined #ceph
[13:37] <true> hi
[13:37] <true> is anybody here?
[13:37] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[13:39] <mgarcesMZ> hi true
[13:39] * peedu_ (~peedu@170.91.235.80.dyn.estpak.ee) Quit (Ping timeout: 480 seconds)
[13:39] <true> i have some troubles with ceph, can you help me?
[13:39] <mgarcesMZ> im very new to ceph, but I will try
[13:39] <mgarcesMZ> :)
[13:40] <true> haha
[13:40] * bipinkunal (~bkunal@121.244.87.115) Quit (Read error: Operation timed out)
[13:42] <true> ok, i have a little cluster, on 0.80.4 it works almost good
[13:43] <true> hmmm... there is 4 servers in cluster, 20 osds, 3 mons, 3 mds and about 200 clients mounting storage by kernel module
[13:44] <true> and sometimes on read or listing files from storage it hangs on fstat syscall
[13:46] * apolloJess (~Thunderbi@202.60.8.252) Quit (Quit: apolloJess)
[13:47] <absynth_> "mounting storage by kernel module", do you mean rbd.ko or cephfs?
[13:47] <true> cephfs
[13:47] <absynth_> uh
[13:47] <absynth_> not production ready
[13:47] <true> but how i can use it as remote storage?
[13:50] * fdmanana (~fdmanana@bl5-77-181.dsl.telepac.pt) Quit (Quit: Leaving)
[13:51] <true> i mean shared storage
[13:52] * florent (~florent@2a04:2500:0:103:35fe:649a:cc6a:baa0) Quit (Quit: Leaving)
[13:52] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:53] <dignus> you can, but not in production :)
[13:53] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[13:53] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:54] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[13:55] * infernix has a real headscratcher on his hands
[13:55] <infernix> rdb bench seq does 1500MB/s
[13:55] <infernix> a dd on a rbd mapped device does 20mb/sec
[13:56] <infernix> a parallel dd does around 600
[13:56] <true> dignus, and it will hung every time when i'll read something from it?)
[13:57] <infernix> does anyone know where to set osd_bench_large_size_max_throughput? in each osds config?
[13:58] <true> in [osd] section, i think if it global for all osds
[14:03] * boichev8 is now known as boichev
[14:03] * dneary (~dneary@96.237.180.105) has joined #ceph
[14:03] <infernix> well something is very wrong with ceph tell osd.bench
[14:04] <infernix> bench {<int>} {<int>} : OSD benchmark: write <count> <size>-byte objects, (default 1G size 4MB). Results in log.
[14:04] <infernix> ceph tell osd.2 bench 1024000 4194304
[14:04] <infernix> this completes in under one second
[14:04] <infernix> this on 0.80.5
[14:04] * KevinPerks (~Adium@2606:a000:80a1:1b00:987f:ae30:93c5:9e78) has joined #ceph
[14:04] <infernix> i'm looking at iostat for osd 2 and it writes one single object
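
(One plausible reading of the quick return, assuming the first argument is the total number of bytes to write rather than an object count: 1024000 bytes is less than a single 4 MiB object, so the bench has almost nothing to do. Spelling the defaults out instead:)

    ceph tell osd.2 bench 1073741824 4194304    # 1 GiB total, written in 4 MiB chunks
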
[14:05] * diegows (~diegows@190.190.5.238) has joined #ceph
[14:07] <boichev> I'm trying to integrate ceph into openstack but the documentation on "http://ceph.com/docs/next/rbd/rbd-openstack/" does not say where to run "ceph auth get-or-create client.cinder mon ..." commands. Can someone help me because running them on the ceph mon node seems to return me a key but only in stdout ... and then "ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring"
[14:07] <boichev> fails because there is no ceph.client.glance.keyring in /etc/ceph from the previous command .... should I just copy the output of ceph auth and put it into this file ?
[14:07] * dignus (~jkooijman@t-x.dignus.nl) Quit (Ping timeout: 480 seconds)
[14:11] <infernix> boichev: that should do it
[14:11] <boichev> I got it, my bad :)
[14:14] <boichev> infernix client.cinder should be on the cinder-api node or on the cinder-volume node ?
[14:14] <freire> Anyone here using Erasure Coded pool? Is too bad?
[14:14] * AfC (~andrew@93.94.208.154) has joined #ceph
[14:15] <infernix> boichev: not sure on openstack specifics, but anywhere where that pool needs to be accessed
[14:15] <boichev> infernix so probably on the volume node
[14:16] <infernix> boichev: note that a key is also needed to create volumes
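
(A sketch of what the rbd-openstack doc intends: run get-or-create anywhere client.admin works, e.g. a mon node, and pipe the keyring it prints on stdout to the host that needs it. Hostnames, pool names and caps below are illustrative, not authoritative:)

    ceph auth get-or-create client.glance mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' \
        | ssh glance-host sudo tee /etc/ceph/ceph.client.glance.keyring

    ceph auth get-or-create client.cinder mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' \
        | ssh cinder-volume-host sudo tee /etc/ceph/ceph.client.cinder.keyring
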
[14:18] * zack_dolby (~textual@e0109-114-22-3-142.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[14:21] <infernix> Timing buffered disk reads: 26 MB in 3.17 seconds = 8.20 MB/sec
[14:21] <ghartz> where i can find information about the key/value backend ? there is a big lack in the documentation
[14:21] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[14:21] * jdillaman (~jdillaman@pool-108-18-232-208.washdc.fios.verizon.net) has joined #ceph
[14:21] <infernix> i wonder if that may have something to do with my performance problem >.<
[14:21] <infernix> crappy WD drives
[14:22] <ghartz> https://wiki.ceph.com/Planning/Blueprints/Firefly/osd%3A_new_key%2F%2Fvalue_backend no so relevant
[14:23] * AfC (~andrew@93.94.208.154) Quit (Quit: Leaving.)
[14:23] * dneary (~dneary@96.237.180.105) Quit (Ping timeout: 480 seconds)
[14:27] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[14:39] * bipinkunal (~bkunal@1.22.77.159) has joined #ceph
[14:40] * RameshN (~rnachimu@101.222.252.217) has joined #ceph
[14:48] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[14:50] * RameshN (~rnachimu@101.222.252.217) Quit (Ping timeout: 480 seconds)
[14:52] * RameshN (~rnachimu@101.222.252.217) has joined #ceph
[14:52] * fdmanana (~fdmanana@bl5-77-181.dsl.telepac.pt) has joined #ceph
[14:54] * Sysadmin88 (~IceChat77@176.250.164.108) has joined #ceph
[14:54] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[14:55] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[14:57] * b0e (~aledermue@213.95.25.82) has joined #ceph
[14:57] * kanagaraj (~kanagaraj@27.7.17.15) has joined #ceph
[15:02] * markbby (~Adium@168.94.245.3) has joined #ceph
[15:02] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[15:02] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has left #ceph
[15:08] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:08] <runfromnowhere> sage: Hit that MDS fail condition again and took a dump of the cache. It's about 17MB. When you're around, am I looking for anything in particular?
[15:10] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Ping timeout: 480 seconds)
[15:10] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has left #ceph
[15:11] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[15:12] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:17] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:23] * vbellur (~vijay@209.132.188.8) has joined #ceph
[15:27] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[15:34] * amatus (amatus@leon.g-cipher.net) has left #ceph
[15:41] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[15:43] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:1c4f:3f6f:a441:b7f0) has joined #ceph
[15:43] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[15:45] <mgarcesMZ> hi again
[15:46] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[15:47] * KevinPerks (~Adium@2606:a000:80a1:1b00:987f:ae30:93c5:9e78) Quit (Ping timeout: 480 seconds)
[15:49] * thomnico (~thomnico@2a01:e35:8b41:120:10d0:d311:1f43:e5fe) has joined #ceph
[15:52] * ircolle (~Adium@2601:1:a580:145a:89c2:e59d:c643:3a31) has joined #ceph
[15:53] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[15:54] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[15:55] * michalefty (~micha@p20030071CF082B00F1A43B84573DFEAA.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[15:56] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:59] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[15:59] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[16:00] * simulx (~simulx@vpn.expressionanalysis.com) has joined #ceph
[16:03] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[16:04] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[16:06] * jobewan (~jobewan@snapp.centurylink.net) Quit (Remote host closed the connection)
[16:06] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[16:06] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[16:08] * angdraug (~angdraug@23-25-151-233-static.hfc.comcastbusiness.net) has joined #ceph
[16:11] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) has joined #ceph
[16:12] * peedu (~peedu@185.46.20.35) Quit (Ping timeout: 480 seconds)
[16:14] * aegeaner (~Aegeaner@60.247.94.14) has joined #ceph
[16:15] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Read error: Operation timed out)
[16:17] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:17] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[16:18] * aegeaner (~Aegeaner@60.247.94.14) has left #ceph
[16:18] * kanagaraj (~kanagaraj@27.7.17.15) Quit (Ping timeout: 480 seconds)
[16:18] * kanagaraj (~kanagaraj@27.7.17.15) has joined #ceph
[16:20] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[16:20] * ChanServ sets mode +v andreask
[16:20] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Quit: mgarcesMZ)
[16:21] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[16:23] * RameshN (~rnachimu@101.222.252.217) Quit (Remote host closed the connection)
[16:27] * vbellur (~vijay@209.132.188.8) Quit (Quit: Leaving.)
[16:30] * bkopilov (~bkopilov@nat-pool-tlv-u.redhat.com) has joined #ceph
[16:33] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[16:35] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Read error: Connection reset by peer)
[16:36] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[16:42] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[16:42] * longguang_home (~chatzilla@111.202.0.58) has joined #ceph
[16:43] * RameshN (~rnachimu@101.222.252.217) has joined #ceph
[16:44] * \ask (~ask@oz.develooper.com) Quit (Ping timeout: 480 seconds)
[16:45] * \ask (~ask@oz.develooper.com) has joined #ceph
[16:48] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:51] * kanagaraj (~kanagaraj@27.7.17.15) Quit (Ping timeout: 480 seconds)
[16:52] <longguang_home> which file is ceph command's source code?
[16:53] * dmsimard_away is now known as dmsimard
[16:55] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[16:56] * JCL (~JCL@2601:9:5980:39b:4825:9364:c857:1693) has joined #ceph
[17:01] <jcsp> longguang_home: ceph.in is the python entry point, the python code learns available commands from the mon at runtime, the commands are defined in mon/MonCommandsh
[17:01] <jcsp> * mon/MonCommands.h
[17:02] * bkopilov (~bkopilov@nat-pool-tlv-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:02] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:03] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[17:03] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[17:09] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[17:09] * masta (~masta@190.7.213.210) has joined #ceph
[17:10] * tinklebear (~tinklebea@66.55.134.218) has joined #ceph
[17:11] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:12] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:17] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[17:19] <longguang_home> jcsp:thanks, please help to confirm my confusion. i know that osd's leveldb stores history of osdmap and pg-log. what is in pg-log?
[17:21] <jcsp> I'm not the best person for OSD internals
[17:21] * darkling (~hrm@00012bd0.user.oftc.net) has left #ceph
[17:21] * darkling (~hrm@00012bd0.user.oftc.net) has joined #ceph
[17:22] <longguang_home> ok. which part are you familiar with? i am digging into the code. have many questions. haha
[17:22] * brother| (foobaz@vps1.hacking.dk) has joined #ceph
[17:22] * absynth__ (~absynth@irc.absynth.de) has joined #ceph
[17:22] * gregmark (~Adium@68.87.42.115) has joined #ceph
[17:24] * wangqty (~qiang@111.204.252.0) has joined #ceph
[17:24] * absynth_ (~absynth@irc.absynth.de) Quit (Ping timeout: 480 seconds)
[17:24] * brother (foobaz@vps1.hacking.dk) Quit (Ping timeout: 480 seconds)
[17:24] * oms101 (~oms101@p20030057EA405500C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[17:27] * oms101 (~oms101@p20030057EA405500C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[17:27] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[17:27] * branto (~borix@ip-213-220-214-245.net.upcbroadband.cz) has left #ceph
[17:34] * kanagaraj (~kanagaraj@27.7.17.15) has joined #ceph
[17:37] * bandrus (~Adium@216.57.72.205) has joined #ceph
[17:37] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:38] <mgarcesMZ> does someone know how do I set the radosgw token TTL?
[17:39] <loicd> Is there a workaround for http://tracker.ceph.com/issues/6109 ?
[17:40] * wangqty (~qiang@111.204.252.0) Quit (Read error: Connection timed out)
[17:41] <mgarcesMZ> found it: rgw_swift_token_expiration
[17:42] * wangqty (~qiang@111.204.252.0) has joined #ceph
[17:43] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Quit: Leaving.)
[17:43] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[17:44] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:45] * Gill (~gillabada@static-72-80-16-227.nycmny.fios.verizon.net) has joined #ceph
[17:45] <Gill> Hey guys. I am completely new to CEPH haven't even attempted an install yet. Just have a quick question though. Can a CEPH cluster span data centers?
[17:46] <absynth__> nope
[17:46] <kraken> http://i.imgur.com/foEHo.gif
[17:46] <absynth__> not reliably
[17:47] <absynth__> it's part of the theoretical concept, but right now (inktankers, correct me if i'm wrong) this would require lan-quality interconnect
[17:48] <Gill> absynth__: thanks... so how do people use CEPH if they have servers in multiple data centers? what if one datacenter dies?
[17:48] <absynth__> then ceph is not part of the solution
[17:48] <Gill> oh... no way to have 2 ceph clusters and rsync between?
[17:49] <absynth__> there's something like that in new versions, but (afair) severely limited, i think to cephFS or something
[17:49] <absynth__> https://wiki.ceph.com/FAQs/Can_Ceph_Support_Multiple_Data_Centers%3F
[17:49] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:49] * wangqty (~qiang@111.204.252.0) Quit (Quit: Leaving.)
[17:50] <mgarcesMZ> how do I change a default value in ceph.conf.. the default for rgw_swift_token_expiration is 86400, I want it to be 60
[17:50] <mgarcesMZ> I have put this in ceph.conf, below the radosgw config: "rgw_swift_token_expiration=60"
[17:50] <mgarcesMZ> restarted everything, but still now go
[17:50] <mgarcesMZ> *no go
[17:50] <mgarcesMZ> the default value is still there
[17:52] <Gill> thanks absynth__ so I either don't use ceph or I can try a cronjob of a snapshot and restore
[17:53] <absynth__> Gill: if you find a decent, reliable, scalable concept for that, drop me a line
[17:53] <absynth__> it's hard to do.
[17:53] <mgarcesMZ> oh
[17:53] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[17:53] <mgarcesMZ> I was testing using the osd *.asok
[17:53] <Gill> absynth__: yea I haven't found one... I was thinking NFS with rsync will have to do
[17:53] <Gill> is there a way to rsync the data from ceph?
[17:53] <absynth__> not really
[17:54] <absynth__> you would enter an endless loop
[17:54] <steveeJ> mgarcesMZ: try replacing '_' by ' '
[17:54] <mgarcesMZ> steveeJ: I did
[17:54] <steveeJ> mgarcesMZ: no success ?
[17:54] <Gill> oh :( I was thinking then I could just have backup NFS and a ceph cluster as primary
[17:54] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[17:54] <mgarcesMZ> when i do "ceph --admin-daemon ceph-client.radosgw.gw.asok config show" it shows the property changed
[17:54] <mgarcesMZ> but if I do that in the osd asok
[17:54] <mgarcesMZ> it shows me the default value
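
(What mgarcesMZ is seeing is expected: rgw_swift_token_expiration is a gateway option, so it belongs in the radosgw client section and only the radosgw admin socket reflects the change; the OSD sockets keep reporting their own default. A sketch, with the section name taken from the asok path above:)

    [client.radosgw.gw]
        rgw swift token expiration = 60

    # verify against the gateway daemon, not an OSD:
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gw.asok config show | grep token_expiration
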
[17:54] <steveeJ> Gill: what I wanted to test for a while now is using xdelta3 to create diffs on the rbd images
[17:55] * jtaguinerd (~jtaguiner@112.205.19.199) Quit (Quit: Leaving.)
[17:55] <steveeJ> Gill: it would require some programming to have a clean solution but nothing too complicated
[17:56] <Gill> so you???d check the diffs then rsync only the rbd images that changed?
[17:57] <seapasul1i> is it possible to copy data between two different ceph clusters directly?
[17:57] <steveeJ> Gill: no, xdelta3 produces a diff between any two files you pass it. you could diff two snapshots of the same rbd image
[17:58] <Gill> steveeJ: I'm sorry I don't follow... what would be the benefit?
[17:59] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[17:59] <steveeJ> Gill: a generic solution to synchronize rbd images and only transfer changed bits
[17:59] <Gill> oh!
[18:00] <Gill> is it possible to sync rbd images now?
[18:00] <steveeJ> that would only work for rbd though. for cephfs you would have to find something else
[18:00] <steveeJ> Gill: with the method i just described it should work, i haven't tested it myself yet
[18:01] * sleinen1 (~Adium@2001:620:0:68::100) has joined #ceph
[18:01] <steveeJ> Gill: try playing around with xdelta3 and some lvm volume and snapshots. it is basically the same
[18:01] <steveeJ> Gill: you can also use a VM image that lays around somewhere
[18:01] <Gill> cool ill give it a try
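
(A rough, untested sketch of the xdelta3 idea steveeJ describes, wrapped in rbd export/import; pool, image and snapshot names are placeholders:)

    rbd export pool/img@snap1 img-snap1.raw
    rbd export pool/img@snap2 img-snap2.raw
    xdelta3 -e -s img-snap1.raw img-snap2.raw snap1-to-snap2.vcdiff   # encode only the changed bits
    # ship the small vcdiff; with img-snap1.raw already on the remote side, reconstruct and import:
    xdelta3 -d -s img-snap1.raw snap1-to-snap2.vcdiff img-snap2.raw
    rbd import img-snap2.raw pool/img
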
[18:02] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[18:02] * longguang_home (~chatzilla@111.202.0.58) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 29.0.1/20140506152807])
[18:02] * mathias_ (~mathias@pd95b4613.dip0.t-ipconnect.de) has joined #ceph
[18:04] <mathias_> reading http://www.admin-magazine.com/HPC/Articles/Ceph-Maintenance/%28language%29/eng-US I found this: "Administrators should understand that not all OSDs in the cluster need to be full for cluster to be unable to perform its functions." Why is that? I dont understand the why.
[18:05] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:06] <danieljh> mathias_: suppose I put n objects to the same pg; once the osd anaging this pg s full you're out of luck.
[18:07] <danieljh> *managing
[18:07] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:09] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[18:09] <danieljh> mathias_: To counter this you depend on the hash function for the object's name to generate a "good distribution" of objects to pgs, see http://ceph.com/dev-notes/whats-new-in-the-land-of-osd/ for context. Or search the ceph-dev mailinglist for "mapping all objects to a single PG" for a way to do this.
[18:10] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:11] * i_m (~ivan.miro@deibp9eh1--blueice3n2.emea.ibm.com) Quit (Quit: Leaving.)
[18:11] <danieljh> mathias_: and here's a python script to quickly check the distribution's quality by doing a chi-squared hypothesis test, see: https://gist.github.com/daniel-j-h/d7d87dfe5de3c5bbfd0f
[18:13] <mathias_> hmm looks like it didnt fully understand the whole object-to-pg-to-osd mapping thing ... sigh
[18:14] * _nitti (~nitti@162.222.47.218) has joined #ceph
[18:15] <mathias_> ok, the number of PGs is some static value - right?
[18:15] * angdraug (~angdraug@23-25-151-233-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[18:16] <danieljh> you can modify it, just read a bit through the documentation, e.g. here: http://ceph.com/docs/master/rados/operations/placement-groups/
[18:18] * joshd (~jdurgin@2602:306:c5db:310:8105:4585:9d92:c2) has joined #ceph
[18:18] * vbellur (~vijay@122.167.132.224) has joined #ceph
[18:19] * angdraug (~angdraug@23-25-151-233-static.hfc.comcastbusiness.net) has joined #ceph
[18:19] <danieljh> mathias_: I have to go now but as I said, just read the docs for the beginning; it takes a while to understand how all works together.
[18:19] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: leaving)
[18:21] * darkling (~hrm@00012bd0.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:22] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz...)
[18:23] <mathias_> danieljh: thx I will :)
[18:24] <mgarcesMZ> what is the max objects you recommend in a container?
[18:26] * yanzheng (~zhyan@171.221.139.239) Quit (Quit: This computer has gone to sleep)
[18:28] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[18:28] * kanagaraj (~kanagaraj@27.7.17.15) Quit (Quit: Leaving)
[18:33] * sjustwork (~sam@2607:f298:a:607:f118:4266:5b77:8449) has joined #ceph
[18:36] * masta (~masta@190.7.213.210) Quit (Quit: Linkinus - http://linkinus.com)
[18:41] <mathias_> I read in the doc that "A Placement Group (PG) aggregates a series of objects into a group, and maps the group to a series of OSDs". I figure a PG belongs to a pool so the number of OSDs a PG maps to depends on the number of replicas defined on the pool level - is that correct?
[18:42] <steveeJ> mathias_: right, there's also a formula in the docs for calculating number of PGs that includes the replica size as a variable
[18:43] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Quit: mgarcesMZ)
[18:46] * nwat (~textual@eduroam-238-17.ucsc.edu) has joined #ceph
[18:49] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:49] <mathias_> right - ok I guess I got most of that now
[18:51] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) has joined #ceph
[18:52] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[18:54] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[18:55] * bipinkunal (~bkunal@1.22.77.159) Quit (Quit: Leaving)
[18:55] * Gill (~gillabada@static-72-80-16-227.nycmny.fios.verizon.net) Quit (Quit: Gill)
[18:57] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[18:58] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Quit: leaving)
[18:59] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[19:01] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:02] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit ()
[19:02] * true (~antrue@2a02:6b8:0:401:697c:6695:cdd2:4d89) Quit (Read error: Connection timed out)
[19:03] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:03] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit ()
[19:04] * true (~antrue@2a02:6b8:0:401:697c:6695:cdd2:4d89) has joined #ceph
[19:04] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:05] * blackmen (~Ajit@121.244.87.115) Quit (Quit: Leaving)
[19:06] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[19:07] * TiCPU (~jeromepou@190-130.cgocable.ca) has joined #ceph
[19:07] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[19:08] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[19:10] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[19:11] * angdraug (~angdraug@23-25-151-233-static.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[19:11] * tloveridge (~Adium@67.21.63.133) has joined #ceph
[19:12] * angdraug (~angdraug@23-25-151-233-static.hfc.comcastbusiness.net) has joined #ceph
[19:13] * angdraug (~angdraug@23-25-151-233-static.hfc.comcastbusiness.net) Quit ()
[19:13] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Operation timed out)
[19:14] * diegows (~diegows@190.190.5.238) has joined #ceph
[19:14] * tinklebear (~tinklebea@66.55.134.218) Quit (Quit: Nettalk6 - www.ntalk.de)
[19:15] * alram (~alram@38.122.20.226) has joined #ceph
[19:19] * debian112 (~bcolbert@c-24-99-94-44.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[19:19] <mathias_> is setting a proper weight value in a custom crush map the right way to deal with differently sized OSD devices (like 3 with 1TB disks, later extended by 2 with 2TB each)?
[19:21] <mathias_> or is that something not recommended at all? I feel like performance will suffer from a bunch of larger HDDs mixed with smaller and therefore faster / less occupied ones
[19:22] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[19:22] * RameshN (~rnachimu@101.222.252.217) Quit (Ping timeout: 480 seconds)
[19:24] <iggy> mathias_: yes
[19:27] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:27] <mathias_> iggy: yes, performance will suffer, or yes, that's the correct way to deal with them?
[19:27] * mathias_ (~mathias@pd95b4613.dip0.t-ipconnect.de) Quit (Quit: leaving)
[19:28] * mathias (~mathias@pd95b4613.dip0.t-ipconnect.de) has joined #ceph
[19:29] <iggy> correct way
[19:29] <mathias> ok thx
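In practice that usually means giving each OSD a CRUSH weight roughly proportional to its capacity (by convention, its size in TB). A hedged sketch of what the relevant part of a decompiled CRUSH map might look like (host and OSD names are made up); on a running cluster the same effect can be had with something like 'ceph osd crush reweight osd.3 2.0':

    host node1 {
            id -2
            alg straw
            hash 0                    # rjenkins1
            item osd.0 weight 1.000   # 1 TB disk
            item osd.1 weight 1.000   # 1 TB disk
            item osd.3 weight 2.000   # 2 TB disk added later
    }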
[19:32] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:33] * adamcrume (~quassel@2601:9:6680:47:f41c:255b:d68a:90cb) has joined #ceph
[19:35] * RameshN (~rnachimu@101.222.241.104) has joined #ceph
[19:35] * cephalobot (~ceph@ds3553.dreamservers.com) has joined #ceph
[19:37] * cephalobot (~ceph@ds3553.dreamservers.com) Quit ()
[19:37] * cephalobot (~ceph@ds3553.dreamservers.com) has joined #ceph
[19:41] * rwheeler (~rwheeler@173.48.207.57) has joined #ceph
[19:41] <scuttlemonkey> .
[19:42] * johntwilkins (~john@2601:9:4580:289:6cb0:7b06:c8f2:3df9) has joined #ceph
[19:42] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Read error: Operation timed out)
[19:43] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[19:45] <mathias> is it true that reads and writes always go through the same OSD? I was hoping a client would distribute reads and writes across all OSDs that contain the requested data!?
[19:46] * RameshN (~rnachimu@101.222.241.104) Quit (Ping timeout: 480 seconds)
[19:49] * bkopilov (~bkopilov@213.57.16.164) has joined #ceph
[19:50] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:55] * blackmen (~Ajit@42.104.14.44) has joined #ceph
[20:03] * squisher (~david@2601:0:580:8be:811e:a7b3:1bf0:1dd7) has joined #ceph
[20:04] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[20:06] * Meths (~meths@2.27.106.237) Quit (Ping timeout: 480 seconds)
[20:07] <carmstrong> does anyone know if I can tell ceph-deploy *not* to use upstart? I'd like it to start ceph-related services manually when doing an install/new
[20:07] <carmstrong> I'm containerizing Ceph and the use of upstart is blocking me
[20:07] <alfredodeza> I don't think you can carmstrong
[20:08] <carmstrong> :/
[20:08] <carmstrong> ok
[20:08] <carmstrong> not sure how to proceed then.
[20:08] <alfredodeza> but that sounds like it could be a usable feature
[20:08] <alfredodeza> would you mind creating a ceph-deploy ticket and assign it to me?
[20:08] <alfredodeza> carmstrong: you will need an account: http://tracker.ceph.com/projects/devops/issues/new
[20:08] <carmstrong> sure thing, thanks alfredodeza
[20:10] <alfredodeza> it is one of those things where you think 'I will not make it an option, because who wouldn't want this?'
[20:10] * Meths (~meths@2.27.107.56) has joined #ceph
[20:11] <carmstrong> yeah, understandable. docker is a new paradigm for all this stuff
[20:13] * JayJ (~jayj@157.130.21.226) has joined #ceph
[20:14] <JayJ> Hello all, Could someone tell me where to download ceph-extras/debian/ packages for Ubuntu Trusty? Repository is obviously missing trusty packages and I see there is a bug open with no updates here: http://tracker.ceph.com/issues/8303
[20:16] * joshd1 (~jdurgin@2607:f298:a:607:b195:4f8f:73aa:fcb7) has joined #ceph
[20:18] * mathias (~mathias@pd95b4613.dip0.t-ipconnect.de) Quit (Quit: leaving)
[20:20] * Nacer (~Nacer@2001:41d0:fe82:7200:d579:4a0:4100:eb6d) has joined #ceph
[20:20] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Read error: Operation timed out)
[20:24] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[20:25] * bjornar_ (~bjornar@ti0099a430-0158.bb.online.no) has joined #ceph
[20:25] <carmstrong> alfredodeza: created http://tracker.ceph.com/issues/9313 - let me know if you need more info in the ticket
[20:26] <alfredodeza> !norris carmstrong
[20:26] <kraken> When carmstrong looks at himself at a mirror, there is no reflection. There can only be one carmstrong.
[20:26] <alfredodeza> thank you sir, ticket looks good
[20:27] <carmstrong> awesome :)
[20:29] <carmstrong> alfredodeza: as a workaround, does it make sense for me to follow the "old-school" instructions for setting up the cluster, configuring monitors, and OSDs? i.e., since I can't use ceph-deploy
[20:31] <squisher> carmstrong, why not? and if I remember correctly ceph-deploy is "just" a shell script, but the instructions do work
[20:31] <carmstrong> squisher: I'm in docker containers, and I don't have the upstart daemon
[20:32] <alfredodeza> carmstrong: you could manually edit it
[20:32] <alfredodeza> or fork it and install your fork
[20:32] <alfredodeza> you would probably need to just return from ceph_deploy/hosts/debian/create.py:create()
[20:33] <alfredodeza> for OSDs you would need to *not* activate
[20:33] <carmstrong> the thing is, I need it to start the daemons. is that logic in there at all? I thought it assumed that upstart starts them. I could certainly patch that shell logic if it's possible
[20:33] <alfredodeza> because activate passes the init type to ceph-disk on remote nodes
[20:33] <carmstrong> I can read python somewhat decently
[20:34] <alfredodeza> carmstrong: ceph-deploy not only executes stuff for you, it will *always* tell you what it is calling
[20:34] <alfredodeza> so that you can have an idea of what to do when you want to use something else
[20:35] * blackmen (~Ajit@42.104.14.44) Quit (Quit: Leaving)
[20:36] <carmstrong> hmmn
[20:36] <JayJ> Folks, I want to build a local Ubuntu repo for Ceph Firefly. Could anyone point me to some instructions? I know it's more of an Ubuntu question, but I cannot find certain repositories.
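One common approach for that is mirroring the upstream apt repo locally, e.g. with apt-mirror. A sketch only: the base path, the local hostname, and the debian-firefly repo line are assumptions about the repo layout of the time:

    # /etc/apt/mirror.list
    set base_path /var/spool/apt-mirror
    deb http://ceph.com/debian-firefly/ trusty main
    clean http://ceph.com/debian-firefly/

    # run apt-mirror, serve the result over HTTP, then point clients at:
    #   deb http://mirror.example.local/ceph/debian-firefly/ trusty main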
[20:38] <carmstrong> alfredodeza: it looks like I can override distro.init and then add my own logic for distro.init == 'manual' or something
[20:38] <carmstrong> but that's only in the mon directory - does similar logic apply to osds?
[20:38] <alfredodeza> yes
[20:38] * Nacer (~Nacer@2001:41d0:fe82:7200:d579:4a0:4100:eb6d) Quit (Remote host closed the connection)
[20:38] <alfredodeza> but only when activating
[20:45] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[20:45] * rendar (~I@host187-180-dynamic.32-79-r.retail.telecomitalia.it) Quit (Read error: Operation timed out)
[20:46] * Nacer (~Nacer@2001:41d0:fe82:7200:7a31:c1ff:febd:ec2e) has joined #ceph
[20:48] <carmstrong> ok. looks like it's calling ceph-disk with --mark-init upstart. and in the ceph-disk source, init systems is an array, but if the value isn't upstart or sysvinit, start_daemon will throw an error
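A possible workaround inside a container, once the monitor and OSD data directories have been prepared, is to start the daemons directly in the foreground instead of through an init system. This is only a sketch: the IDs and paths are made up, and it is not something ceph-deploy does for you:

    # run the daemons in the foreground, no upstart/sysvinit involved
    ceph-mon -i mon1 -c /etc/ceph/ceph.conf -f
    ceph-osd -i 0 -c /etc/ceph/ceph.conf -f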
[20:49] * BManojlovic (~steki@95.180.4.243) Quit (Quit: Ja odoh a vi sta 'ocete...)
[20:49] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[20:49] * marrusl (~mark@cpe-24-193-20-3.nyc.res.rr.com) has joined #ceph
[20:51] * rendar (~I@host187-180-dynamic.32-79-r.retail.telecomitalia.it) has joined #ceph
[20:52] * Gill (~Gill@static-72-80-16-227.nycmny.fios.verizon.net) has joined #ceph
[21:01] * thomnico (~thomnico@2a01:e35:8b41:120:10d0:d311:1f43:e5fe) Quit (Remote host closed the connection)
[21:01] * linjan (~linjan@176.195.6.203) Quit (Ping timeout: 480 seconds)
[21:02] * bandrus (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[21:02] * bandrus (~Adium@216.57.72.205) has joined #ceph
[21:05] * linjan (~linjan@176.195.196.165) has joined #ceph
[21:06] * _nitti_ (~nitti@162.222.47.218) has joined #ceph
[21:08] * marrusl (~mark@cpe-24-193-20-3.nyc.res.rr.com) Quit (Remote host closed the connection)
[21:11] * linuxkidd (~linuxkidd@cpe-066-057-017-151.nc.res.rr.com) has joined #ceph
[21:11] * vbellur (~vijay@122.167.132.224) Quit (Ping timeout: 480 seconds)
[21:12] * _nitti (~nitti@162.222.47.218) Quit (Read error: Operation timed out)
[21:13] * marrusl (~mark@cpe-24-193-20-3.nyc.res.rr.com) has joined #ceph
[21:25] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[21:30] * neurodrone (~neurodron@static-108-29-37-206.nycmny.fios.verizon.net) has joined #ceph
[21:33] * alram_ (~alram@38.122.20.226) has joined #ceph
[21:38] * BManojlovic (~steki@95.180.4.243) Quit (Ping timeout: 480 seconds)
[21:39] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[21:39] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[21:52] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[21:58] * tloveridge (~Adium@67.21.63.133) Quit (Ping timeout: 480 seconds)
[22:01] * BManojlovic (~steki@212.200.65.141) has joined #ceph
[22:01] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[22:03] * codice_ is now known as codice
[22:06] * johntwilkins (~john@2601:9:4580:289:6cb0:7b06:c8f2:3df9) Quit (Ping timeout: 480 seconds)
[22:09] * johntwilkins (~john@2601:9:4580:289:6cb0:7b06:c8f2:3df9) has joined #ceph
[22:14] * joef1 (~Adium@2601:9:280:f2e:d858:a165:c3e6:da81) has joined #ceph
[22:17] * tloveridge (~Adium@67.21.63.134) has joined #ceph
[22:18] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[22:26] * Nats (~natscogs@114.31.195.238) Quit (Read error: Connection reset by peer)
[22:27] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: leaving)
[22:29] * ganders is now known as summit
[22:29] * summit (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[22:29] * sz0 (~sz0@94.55.197.185) has joined #ceph
[22:30] * summit (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[22:32] <summit> i've the following scheme for journals:
[22:33] <summit> osd1server has 9 osd daemons and 1 fusion-io card for journals
[22:33] <summit> osd2server has 9 osd daemons and 1 fusion-io card for journals
[22:33] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[22:33] <summit> osd3server has 9 osd daemons and 3 ssd disks (3 osd : 1 ssd journal)
[22:33] <summit> osd4server has 9 osd daemons and 3 ssd disks (3 osd : 1 ssd journal)
[22:34] <summit> now, I need a way, if a power outage occurs at the datacenter, to reconstruct the entire cluster again with the data in the pools
[22:36] <summit> assuming that the battery of the fusion-io cards runs out (72 hrs)
[22:37] <summit> if I had a replication factor of 3
[22:38] <summit> so at least one osdXserver has the data on disk, is it possible to get it back with the data?
[22:40] * sreddy (~oftc-webi@32.97.110.56) has joined #ceph
[22:40] * yanzheng (~zhyan@171.221.139.239) has joined #ceph
[22:41] * tloveridge (~Adium@67.21.63.134) Quit (Quit: Leaving.)
[22:42] * tloveridge (~Adium@67.21.63.133) has joined #ceph
[22:43] <sreddy> Trying to deploy ceph behind a firewall using ceph-deploy. I've set up a local yum repository, gpgcheck is set to 0. But, while trying to "ceph-deploy install mon1 mon2 mon3 osd1 osd2 osd3", ceph-deploy keeps trying to download "release.asc" from ceph.com and it fails
[22:44] <alfredodeza> sreddy: you need to point ceph-deploy to your repo
[22:44] <alfredodeza> ceph-deploy install --help
[22:44] <alfredodeza> has some examples
[22:44] <alfredodeza> with --repo-url and --gpg-url
[22:44] <sreddy> I tried that too.. but there is no release.asc file there
[22:45] <alfredodeza> where is "there"? your yum repo?
[22:45] <gregmark> alfredodeza: you said "it [ceph-deploy] will *always* tell you what it is calling". Really? Not with 0.61.8-1raring
[22:45] <sreddy> Failed to execute command: rpm --import /opt/repo/rpm-firefly/rhel6
[22:45] <gregmark> Is this new, or are these options I can use with the various ceph-deploy calls?
[22:45] <gregmark> Is this new? Lord have mercy.
[22:45] <alfredodeza> gregmark: that is new but for a while, what version are you using?
[22:45] <sreddy> I've downloaded all the packages from noarch and x86_64
[22:46] <alfredodeza> sreddy: yeah the release.asc file lives somewhere else
[22:46] <sreddy> how do I get the release.asc file
[22:46] <alfredodeza> let me get you the url
[22:46] <gregmark> alfredodeza: 0.61.8-1raring
[22:46] <alfredodeza> one sec
[22:46] <sreddy> it's on the git server..
[22:46] <alfredodeza> gregmark: no, what version of ceph-deploy
[22:46] <alfredodeza> sreddy: yes
[22:46] <sreddy> tried pulling it
[22:46] <sreddy> I got a ~36k file
[22:46] <alfredodeza> what
[22:46] <alfredodeza> no way
[22:46] <alfredodeza> no
[22:47] <alfredodeza> sreddy: wget https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[22:49] * summit (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[22:50] <sreddy> yes, still getting the 36k file
[22:50] <sreddy> index.html?p=ceph.git
[22:50] <sreddy> -rw-------. 1 root root 36773 Sep 2 20:49 index.html?p=ceph.git
[22:50] <alfredodeza> try with quotes
[22:50] <alfredodeza> sreddy: ^ ^
[22:50] <dmick> yeah, that url has maybe just a few shell metachars
[22:52] * garphy is now known as garphy`aw
[22:53] <sreddy> sorry, where do I use the quotes?
[22:53] <squisher> wget 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
[22:53] <alfredodeza> sreddy: http://fpaste.org/130512/91208140/
[22:54] <alfredodeza> or what squisher said
[22:54] <sreddy> ah.. back ticks
[22:54] <squisher> no, single quotes
[22:54] <alfredodeza> not back ticks
[22:54] <alfredodeza> right
[22:54] * sreddy (~oftc-webi@32.97.110.56) Quit (Quit: Page closed)
[22:54] <alfredodeza> single or double quotes
[22:55] * sreddy (~oftc-webi@32.97.110.56) has joined #ceph
[22:55] <sreddy> yes, single quotes
[22:57] <sreddy> -rw-------. 1 root root 1752 Sep 2 20:53 index.html?p=ceph.git;a=blob_plain;f=keys%2Frelease.asc
[23:02] * jtaguinerd (~jtaguiner@203.215.116.66) has joined #ceph
[23:05] <dmick> -O would let you set the name, but you can just mv that
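Putting those two suggestions together, something along these lines (illustrative) downloads the key under a sane name and imports it:

    wget -O release.asc 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
    rpm --import release.asc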
[23:07] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[23:07] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit ()
[23:11] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[23:22] * JC (~JC@2601:9:5980:39b:e42b:a974:1096:6efb) has joined #ceph
[23:22] * kfei (~root@114-27-89-2.dynamic.hinet.net) Quit (Read error: Connection reset by peer)
[23:27] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[23:29] * Nats (~natscogs@114.31.195.238) has joined #ceph
[23:30] * johntwilkins (~john@2601:9:4580:289:6cb0:7b06:c8f2:3df9) Quit (Quit: Leaving)
[23:32] * neurodrone (~neurodron@static-108-29-37-206.nycmny.fios.verizon.net) Quit (Quit: neurodrone)
[23:34] * tloveridge (~Adium@67.21.63.133) has left #ceph
[23:36] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[23:37] * JC (~JC@2601:9:5980:39b:e42b:a974:1096:6efb) Quit (Quit: Leaving.)
[23:39] * kfei (~root@61-227-13-158.dynamic.hinet.net) has joined #ceph
[23:41] <steveeJ> what do i need on a fresh machine to access rbd images with rbd-fuse and cephx enabled? rbd-fuse does not take user or key/keyfile as an argument
[23:44] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[23:44] * ChanServ sets mode +v andreask
[23:47] <dmick> steveeJ: been a while, but you should be able to set them in CEPH_ARGS when starting the fuse server, or arrange for them in the ceph.conf file with -c
[23:48] * bjornar_ (~bjornar@ti0099a430-0158.bb.online.no) Quit (Ping timeout: 480 seconds)
[23:49] <steveeJ> dmick: what would (2) look like?
[23:49] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[23:50] <dmick> I think it'll run as client.admin, so just give it access to the ceph.conf that refers to the keyrings that are active for client.admin (i.e., probably, just -c /etc/ceph/ceph.conf)
[23:50] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[23:51] * xarses (~andreww@12.164.168.117) has joined #ceph
[23:51] <steveeJ> I'm not referring to a keyring in the ceph.conf, but what I've done is copy the ceph.{conf,client.admin.keyring} to the new machine, and it works
[23:52] * JayJ (~jayj@157.130.21.226) has joined #ceph
[23:53] <steveeJ> this is not very secure though
[23:54] * Nacer (~Nacer@2001:41d0:fe82:7200:7a31:c1ff:febd:ec2e) Quit (Remote host closed the connection)
[23:56] * sleinen1 (~Adium@2001:620:0:68::100) Quit (Read error: Connection reset by peer)
[23:58] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[23:58] <steveeJ> dmick: thanks, I've specified the keyfile in the conf now. do you know if it's possible to simply specify another user's key instead of admin's?
[23:58] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[23:58] * ChanServ sets mode +v andreask
[23:58] <dmick> you should be able to do so with CEPH_ARGS
[23:58] <dmick> the -n and -i options are common to all the ceph tools
[23:59] <steveeJ> sounds good! thank you
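A sketch of what that could look like (the user name, keyring path and mountpoint are made up; CEPH_ARGS is picked up by rbd-fuse the same way as by the other ceph tools):

    # pass a non-admin user and its keyring via CEPH_ARGS
    export CEPH_ARGS="--id rbduser --keyring /etc/ceph/ceph.client.rbduser.keyring"
    rbd-fuse -c /etc/ceph/ceph.conf /mnt/rbd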

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.