#ceph IRC Log


IRC Log for 2015-06-03

Timestamps are in GMT/BST.

[0:03] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[0:04] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:06] * ircolle (~ircolle@c-71-229-136-109.hsd1.co.comcast.net) has joined #ceph
[0:11] * ircolle (~ircolle@c-71-229-136-109.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[0:13] * linuxkidd (~linuxkidd@63.79.89.21) Quit (Quit: Leaving)
[0:24] * valeech (~valeech@pool-72-86-37-215.clppva.fios.verizon.net) has joined #ceph
[0:24] * markl (~mark@knm.org) Quit (Read error: Connection reset by peer)
[0:24] * markl (~mark@knm.org) has joined #ceph
[0:26] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:27] * datagutt (~spate@5NZAAC54R.tor-irc.dnsbl.oftc.net) Quit ()
[0:27] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[0:31] * visbits (~textual@8.29.138.28) Quit (Ping timeout: 480 seconds)
[0:33] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:34] * LeaChim (~LeaChim@host86-163-124-72.range86-163.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[0:42] <aarontc> Can anyone help troubleshoot pgs stuck in "down+peering" state?
[0:43] * alram (~alram@192.41.52.12) Quit (Quit: leaving)
[0:45] * rlrevell (~leer@184.52.129.221) has joined #ceph
[0:53] * diegows (~diegows@190.190.5.238) has joined #ceph
[0:56] * ismell_ (~ismell@host-24-56-189-172.beyondbb.com) has joined #ceph
[1:01] * ZombieTree (~anadrom@95.128.43.164) has joined #ceph
[1:03] * ismell (~ismell@host-24-52-35-110.beyondbb.com) Quit (Ping timeout: 480 seconds)
[1:08] * bandrus (~brian@175.sub-70-211-66.myvzw.com) Quit (Ping timeout: 480 seconds)
[1:08] * imjustmatthew (~imjustmat@pool-74-110-227-240.rcmdva.fios.verizon.net) has joined #ceph
[1:12] * oms101_ (~oms101@p20030057EA288400C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:13] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[1:17] * itsjpr (~imjpr@138.26.125.8) Quit (Ping timeout: 480 seconds)
[1:18] * imjpr (~imjpr@thing2.it.uab.edu) Quit (Ping timeout: 480 seconds)
[1:19] * bandrus (~brian@106.sub-70-211-65.myvzw.com) has joined #ceph
[1:20] * oms101_ (~oms101@p20030057EA3C8900C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:21] * rlrevell (~leer@184.52.129.221) has joined #ceph
[1:23] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:25] * arbrandes (~arbrandes@191.7.148.91) Quit (Quit: Leaving)
[1:31] * ZombieTree (~anadrom@7R2AABG8I.tor-irc.dnsbl.oftc.net) Quit ()
[1:32] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[1:35] * ircolle1 (~Adium@2601:1:a580:1735:c50c:9433:f79b:a4ab) Quit (Quit: Leaving.)
[1:36] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[1:36] * jclm (~jclm@118.130.198.164) has joined #ceph
[1:37] * CScrace (cb638001@107.161.19.53) has joined #ceph
[1:37] * rlrevell (~leer@184.52.129.221) has joined #ceph
[1:39] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[1:40] <CScrace> My monitors won't reach quorum if I have jumbo frames on. Anyone had issues with this before?
[1:40] <gregsfortytwo> if you search the archives for ceph-users things like that come up periodically
[1:40] <gregsfortytwo> it usually means the switch doesn't support them or something
[1:41] <gregsfortytwo> and that is the full extent of my knowledge on them ;)
[1:41] <CScrace> Hmm, thanks.
[1:43] <rkeene> Are jumbo frames working ?
[1:43] <rkeene> ping -s <bigValue> each host from the other
[1:44] * nsoffer (~nsoffer@bzq-79-177-255-248.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[1:47] <CScrace> ping -s 10000 hangs
[1:47] <rkeene> That's too big.
[1:47] <rkeene> What's your MTU ?
[1:48] <rkeene> Jumbo frames usually refers to an MTU of around 9000
[1:48] <CScrace> 8192
[1:48] <rkeene> That's a weird MTU
[1:48] <CScrace> ping -s 8000 is fine
[1:48] <CScrace> something to do with our switch I think?
[1:48] <CScrace> wouldn't support 9000
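
A quick way to verify jumbo frames end to end is to ping with the don't-fragment bit set, sized to the MTU minus the 28 bytes of IP and ICMP headers (Linux ping; the peer hostname is a placeholder):

    # with an 8192-byte MTU the largest unfragmentable ICMP payload is 8192 - 28 = 8164
    ping -M do -s 8164 -c 3 other-mon-host   # should succeed on every mon-to-mon path
    ping -M do -s 8165 -c 3 other-mon-host   # should fail with "message too long"

If the larger ping hangs silently instead of erroring, something in the path is dropping frames above its own MTU, which matches the quorum symptoms above.
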
[1:54] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[1:54] * alram (~alram@64.134.221.151) has joined #ceph
[1:54] * shakamunyi (~shakamuny@204.238.46.103) Quit (Remote host closed the connection)
[1:54] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[1:56] * diegows (~diegows@190.190.5.238) Quit (Quit: Leaving)
[1:56] * rlrevell (~leer@184.52.129.221) has joined #ceph
[2:01] * zc00gii (~Maza@198.23.202.71) has joined #ceph
[2:02] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[2:02] * oblu (~o@62.109.134.112) has joined #ceph
[2:07] * mgolub (~Mikolaj@91.225.200.88) Quit (Remote host closed the connection)
[2:07] * xarses (~andreww@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:12] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[2:12] * shakamunyi (~shakamuny@204.238.46.103) has joined #ceph
[2:14] <jidar> are you doing any sort of vpn here?
[2:14] <jidar> not sure I've seen <9000 byte Jumbo frame configs
[2:15] <jidar> or are these guys behind NAT?
[2:20] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[2:20] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[2:23] * oro (~oro@84-72-20-79.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:28] <CScrace> theres no vpn or NAT but there are bonded interfaces and a bunch of vlans
[2:30] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[2:30] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[2:31] * zc00gii (~Maza@9S0AAAIOQ.tor-irc.dnsbl.oftc.net) Quit ()
[2:31] * CoMa (~WedTM@89.105.194.80) has joined #ceph
[2:35] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[2:36] * Rickus (~Rickus@office.protected.ca) Quit (Read error: Connection reset by peer)
[2:43] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:43] * johanni (~johanni@173.226.103.101) has joined #ceph
[2:44] * johanni_ (~johanni@173.226.103.101) has joined #ceph
[2:56] * fam is now known as fam_away
[2:58] * fam_away is now known as fam
[2:58] * mildan (~textual@207.236.250.131) has joined #ceph
[3:01] * CoMa (~WedTM@8Q4AAA7WU.tor-irc.dnsbl.oftc.net) Quit ()
[3:02] * bkopilov (~bkopilov@bzq-79-178-52-86.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[3:02] * rlrevell (~leer@184.52.129.221) has joined #ceph
[3:05] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[3:08] * bkopilov (~bkopilov@bzq-79-176-57-219.red.bezeqint.net) has joined #ceph
[3:08] <aarontc> I use 7500 MTU on my OSD network without problems
[3:09] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[3:13] * mildan (~textual@207.236.250.131) Quit (Ping timeout: 480 seconds)
[3:16] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[3:21] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:27] * shakamunyi (~shakamuny@204.238.46.103) Quit (Ping timeout: 480 seconds)
[3:31] * Dysgalt (~AotC@89.105.194.79) has joined #ceph
[3:31] * fxmulder (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) Quit ()
[3:33] * shohn1 (~shohn@dslb-094-222-211-105.094.222.pools.vodafone-ip.de) has joined #ceph
[3:34] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[3:36] * fxmulder (~fxmulder@cpe-24-55-6-128.austin.res.rr.com) has joined #ceph
[3:38] * shohn (~shohn@dslb-188-102-031-115.188.102.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[3:43] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[3:45] * georgem (~Adium@24.140.226.3) has joined #ceph
[3:51] * shang (~ShangWu@175.41.48.77) has joined #ceph
[3:55] * rlrevell (~leer@184.52.129.221) has joined #ceph
[3:58] * midnightrunner (~midnightr@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[3:59] * yguang11 (~yguang11@12.31.82.125) Quit (Remote host closed the connection)
[4:00] * MrHeavy_ (~MrHeavy@143.48.117.45) has joined #ceph
[4:00] * MrHeavy (~MrHeavy@pool-108-54-190-117.nycmny.fios.verizon.net) Quit (Read error: Connection reset by peer)
[4:01] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:01] * Dysgalt (~AotC@7R2AABHAZ.tor-irc.dnsbl.oftc.net) Quit ()
[4:01] * xanax` (~Sami345@marylou.nos-oignons.net) has joined #ceph
[4:05] * midnight_ (~midnightr@216.113.160.71) Quit (Ping timeout: 480 seconds)
[4:12] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[4:17] * MrHeavy__ (~MrHeavy@pool-108-54-190-117.nycmny.fios.verizon.net) has joined #ceph
[4:19] * zhaochao (~zhaochao@125.39.8.226) has joined #ceph
[4:21] * alram (~alram@64.134.221.151) Quit (Quit: leaving)
[4:24] * CScrace (cb638001@107.161.19.53) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[4:25] * MrHeavy_ (~MrHeavy@143.48.117.45) Quit (Ping timeout: 480 seconds)
[4:26] * joao (~joao@249.38.136.95.rev.vodafone.pt) Quit (Ping timeout: 480 seconds)
[4:31] * xanax` (~Sami345@9S0AAAIUZ.tor-irc.dnsbl.oftc.net) Quit ()
[4:34] * kefu (~kefu@114.92.116.93) has joined #ceph
[4:35] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[4:38] * CScrace (cb638001@107.161.19.109) has joined #ceph
[4:39] * CScrace (cb638001@107.161.19.109) Quit ()
[4:40] * johanni (~johanni@173.226.103.101) Quit (Remote host closed the connection)
[4:41] * johanni (~johanni@173.226.103.101) has joined #ceph
[4:42] * joao (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[4:42] * ChanServ sets mode +o joao
[4:42] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[4:44] * Mika_c (~Mk@122.146.93.152) has joined #ceph
[4:46] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[4:46] * johanni_ (~johanni@173.226.103.101) Quit (Ping timeout: 480 seconds)
[4:49] * johanni (~johanni@173.226.103.101) Quit (Ping timeout: 480 seconds)
[4:53] <snerd> any clue as to why I might have one radosgw user who can login fine but another that gets 403'd?
[5:01] * delcake (~AGaW@nx-01.tor-exit.network) has joined #ceph
[5:13] * zack_dolby (~textual@p2208095-ipngn17401marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[5:14] * zack_dolby (~textual@p2208095-ipngn17401marunouchi.tokyo.ocn.ne.jp) Quit ()
[5:15] * ketor (~ketor@182.48.117.114) has joined #ceph
[5:16] * johanni_ (~johanni@24.4.41.97) has joined #ceph
[5:16] * johanni (~johanni@24.4.41.97) has joined #ceph
[5:19] * calvinx (~calvin@101.100.172.246) has joined #ceph
[5:19] * overclk (~overclk@121.244.87.117) has joined #ceph
[5:26] * Vacuum__ (~Vacuum@i59F79744.versanet.de) has joined #ceph
[5:27] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[5:27] * evl (~chatzilla@139.216.138.39) has joined #ceph
[5:28] <snerd> aha got it, backslashes in secret key
[5:28] <snerd> argh
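
The backslashes are typically JSON escaping of "/" in the secret as printed by radosgw-admin, and pasting the escaped form into a client breaks request signing. A sketch of inspecting and regenerating the secret (user id hypothetical):

    # a secret shown as "abc\/def" really contains "abc/def"
    radosgw-admin user info --uid=testuser
    # generate a fresh secret for the user
    radosgw-admin key create --uid=testuser --key-type=s3 --gen-secret
    # or pin a known value explicitly
    radosgw-admin key create --uid=testuser --key-type=s3 --secret='chosensecret'
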
[5:30] * erice (~eric@c-73-14-155-49.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[5:30] * yguang11_ (~yguang11@nat-dip15.fw.corp.yahoo.com) has joined #ceph
[5:30] * erice (~eric@50.245.231.209) has joined #ceph
[5:31] * delcake (~AGaW@7R2AABHCD.tor-irc.dnsbl.oftc.net) Quit ()
[5:31] * OODavo1 (~Spikey@176.10.99.209) has joined #ceph
[5:32] * kefu is now known as kefu|afk
[5:33] * Vacuum_ (~Vacuum@88.130.211.44) Quit (Ping timeout: 480 seconds)
[5:33] * squ (~Thunderbi@46.109.36.167) has joined #ceph
[5:36] * georgem (~Adium@24.140.226.3) Quit (Quit: Leaving.)
[5:37] * yguang11 (~yguang11@12.31.82.125) Quit (Ping timeout: 480 seconds)
[5:38] * johanni__ (~johanni@24.4.41.97) has joined #ceph
[5:40] * kefu|afk is now known as kefu
[5:45] * johanni_ (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[5:46] * bandrus1 (~brian@19.sub-70-211-77.myvzw.com) has joined #ceph
[5:48] * bandrus (~brian@106.sub-70-211-65.myvzw.com) Quit (Ping timeout: 480 seconds)
[5:48] * lcurtis_ (~lcurtis@ool-18bfec0b.dyn.optonline.net) has joined #ceph
[5:49] * squ (~Thunderbi@46.109.36.167) Quit (Quit: squ)
[5:49] * squ (~Thunderbi@46.109.36.167) has joined #ceph
[5:54] * johanni_ (~johanni@24.4.41.97) has joined #ceph
[6:00] * johanni__ (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[6:01] * OODavo1 (~Spikey@5NZAAC6LN.tor-irc.dnsbl.oftc.net) Quit ()
[6:01] * Sketchfile (~nartholli@89.105.194.75) has joined #ceph
[6:04] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[6:10] * deepsa (~Deependra@00013525.user.oftc.net) has joined #ceph
[6:11] * linjan (~linjan@176.195.249.215) Quit (Ping timeout: 480 seconds)
[6:16] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:23] * vbellur (~vijay@122.171.123.165) has joined #ceph
[6:23] * johanni__ (~johanni@24.4.41.97) has joined #ceph
[6:24] * erice_ (~eric@c-73-14-155-49.hsd1.co.comcast.net) has joined #ceph
[6:25] * erice (~eric@50.245.231.209) Quit (Ping timeout: 480 seconds)
[6:28] * linjan (~linjan@80.179.241.26) has joined #ceph
[6:28] * valeech (~valeech@pool-72-86-37-215.clppva.fios.verizon.net) Quit (Quit: valeech)
[6:30] * johanni_ (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[6:30] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:31] * Sketchfile (~nartholli@5NZAAC6MZ.tor-irc.dnsbl.oftc.net) Quit ()
[6:36] * bandrus1 (~brian@19.sub-70-211-77.myvzw.com) Quit (Ping timeout: 480 seconds)
[6:44] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[6:51] * bandrus (~brian@250.sub-70-214-35.myvzw.com) has joined #ceph
[6:53] * johanni_ (~johanni@24.4.41.97) has joined #ceph
[6:55] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:56] * lcurtis_ (~lcurtis@ool-18bfec0b.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[7:00] * johanni__ (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[7:01] * totalwormage (~Bobby@176.10.99.205) has joined #ceph
[7:03] * erice_ (~eric@c-73-14-155-49.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[7:03] * erice (~eric@50.245.231.209) has joined #ceph
[7:13] * kefu (~kefu@114.92.116.93) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:14] * vbellur (~vijay@122.171.123.165) Quit (Ping timeout: 480 seconds)
[7:14] * kefu (~kefu@114.92.116.93) has joined #ceph
[7:14] * ketor (~ketor@182.48.117.114) has joined #ceph
[7:14] * johanni__ (~johanni@24.4.41.97) has joined #ceph
[7:18] * yguang11_ (~yguang11@nat-dip15.fw.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[7:22] * johanni_ (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[7:23] * yguang11 (~yguang11@2001:4998:effd:7804::1179) has joined #ceph
[7:24] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[7:24] * johanni_ (~johanni@24.4.41.97) has joined #ceph
[7:30] * johanni__ (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[7:31] * totalwormage (~Bobby@9S0AAAI5D.tor-irc.dnsbl.oftc.net) Quit ()
[7:31] * cryptk (~bret@89.105.194.88) has joined #ceph
[7:34] * johanni_ (~johanni@24.4.41.97) Quit (Remote host closed the connection)
[7:35] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[7:36] * Hemanth (~Hemanth@117.192.234.178) has joined #ceph
[7:39] * MrHeavy__ (~MrHeavy@pool-108-54-190-117.nycmny.fios.verizon.net) Quit (Read error: Connection reset by peer)
[7:40] * johanni (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[7:41] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:41] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[7:45] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[7:48] * yguang11 (~yguang11@2001:4998:effd:7804::1179) Quit (Ping timeout: 480 seconds)
[7:49] * johanni (~johanni@24.4.41.97) has joined #ceph
[7:49] * johanni_ (~johanni@24.4.41.97) has joined #ceph
[7:49] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[7:55] * erice_ (~eric@c-73-14-155-49.hsd1.co.comcast.net) has joined #ceph
[7:55] * erice_ (~eric@c-73-14-155-49.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[7:55] * erice (~eric@50.245.231.209) Quit (Read error: Connection reset by peer)
[7:57] * erice (~eric@c-73-14-155-49.hsd1.co.comcast.net) has joined #ceph
[7:57] * erice (~eric@c-73-14-155-49.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[7:58] * erice (~eric@50.245.231.209) has joined #ceph
[8:01] * linjan (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[8:01] * cryptk (~bret@9S0AAAI7H.tor-irc.dnsbl.oftc.net) Quit ()
[8:01] * Popz (~Szernex@5NZAAC6S6.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:02] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[8:09] * johanni__ (~johanni@24.4.41.97) has joined #ceph
[8:11] * nsoffer (~nsoffer@bzq-84-111-112-230.red.bezeqint.net) has joined #ceph
[8:12] * raw (~raw@37.48.65.172) has joined #ceph
[8:13] * vbellur (~vijay@121.244.87.117) has joined #ceph
[8:15] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[8:16] * johanni (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[8:16] * shakamunyi (~shakamuny@216.127.127.20) has joined #ceph
[8:19] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[8:23] <raw> im using cephfs 0.94.1 with metadata on a SSD pool and data on a HDD pool with SSD cache tiering. now i have an ascii file that contains 576 NULL bytes and i have no idea where they came from. i have now issued a deep scrub on all pgs; is there something else i can do to verify file integrity?
[8:27] * madkiss (~madkiss@guestkeeper.heise.de) has joined #ceph
[8:28] * johanni (~johanni@24.4.41.97) has joined #ceph
[8:28] * johanni- (~johanni@24.4.41.97) has joined #ceph
[8:31] * Popz (~Szernex@5NZAAC6S6.tor-irc.dnsbl.oftc.net) Quit ()
[8:31] * avib (~Ceph@al.secure.elitehosts.com) Quit (Ping timeout: 480 seconds)
[8:32] * johanni__ (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[8:34] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[8:34] * johanni_ (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[8:37] * bandrus (~brian@250.sub-70-214-35.myvzw.com) Quit (Quit: Leaving.)
[8:39] * Hau_MI is now known as HauM1
[8:39] * oro (~oro@84-72-20-79.dclient.hispeed.ch) has joined #ceph
[8:41] * avib (~Ceph@alt.secure.elitehosts.com) has joined #ceph
[8:47] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[8:50] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[8:51] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[8:54] * Hemanth (~Hemanth@117.192.234.178) Quit (Ping timeout: 480 seconds)
[8:55] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[8:55] * ketor (~ketor@182.48.117.114) has joined #ceph
[8:57] * johanni_ (~johanni@24.4.41.97) has joined #ceph
[9:00] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[9:01] * johanni (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[9:01] * JamesHarrison (~Keiya@marcuse-1.nos-oignons.net) has joined #ceph
[9:02] * avib (~Ceph@alt.secure.elitehosts.com) Quit (Ping timeout: 480 seconds)
[9:02] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:03] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:05] * bitserker (~toni@188.87.126.203) has joined #ceph
[9:07] * analbeard (~shw@support.memset.com) has joined #ceph
[9:08] * Concubidated (~Adium@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[9:10] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:11] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[9:11] <Be-El> hi
[9:13] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:13] * avib (~Ceph@al.secure.elitehosts.com) has joined #ceph
[9:15] <SamYaple> hello Be-El
[9:17] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[9:18] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[9:18] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit ()
[9:18] <nils_> so I'm currently on Giant, should I upgrade to Hammer?
[9:19] * dgurtner (~dgurtner@178.197.231.155) has joined #ceph
[9:20] * mivaho_ (~quassel@xternal.xs4all.nl) Quit (Quit: Going)
[9:20] * mivaho (~quassel@xternal.xs4all.nl) has joined #ceph
[9:22] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (Remote host closed the connection)
[9:22] * rotbeard (~redbeard@x5f74c18e.dyn.telefonica.de) has joined #ceph
[9:23] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[9:24] <nils_> btw. is there a reason why there are no packages for debian jessie?
[9:24] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[9:25] * nsoffer (~nsoffer@bzq-84-111-112-230.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[9:25] * cok (~chk@2a02:2350:18:1010:18b3:c6dd:91a9:f1fc) has joined #ceph
[9:26] * johanni (~johanni@24.4.41.97) has joined #ceph
[9:27] * linjan (~linjan@46.210.218.144) has joined #ceph
[9:29] * Hemanth (~Hemanth@117.192.233.3) has joined #ceph
[9:29] <nils_> there is probably a typo here: https://ceph.com/releases/v0-94-hammer-released/ (enable experimental unrecoverable data corrupting featuers = keyvaluestore)
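
For reference, the option nils_ is quoting (typo and all, as it appeared in the release post) is spelled "features" in the actual config; opting in to the keyvaluestore backend on hammer would look something like:

    [osd]
    enable experimental unrecoverable data corrupting features = keyvaluestore
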
[9:30] * Hemanth (~Hemanth@117.192.233.3) Quit ()
[9:31] * JamesHarrison (~Keiya@5NZAAC6WV.tor-irc.dnsbl.oftc.net) Quit ()
[9:31] * GuntherDW (~Throlkim@tor.metaether.net) has joined #ceph
[9:32] * johanni_ (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[9:35] * johanni (~johanni@24.4.41.97) Quit (Remote host closed the connection)
[9:35] * johanni (~johanni@24.4.41.97) has joined #ceph
[9:40] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[9:42] * ketor (~ketor@182.48.117.114) has joined #ceph
[9:42] * johanni- (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[9:43] * johanni (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[9:54] <Nats> certainly at some point since giant is eol
[9:54] * evl (~chatzilla@139.216.138.39) Quit (Quit: ChatZilla 0.9.91.1 [Firefox 38.0.1/20150518070256])
[10:01] * GuntherDW (~Throlkim@9S0AAAJE1.tor-irc.dnsbl.oftc.net) Quit ()
[10:02] * gaveen (~gaveen@175.157.23.156) has joined #ceph
[10:06] * ahmeni (~tuhnis@192.3.177.167) has joined #ceph
[10:09] <Mika_c> hello everyone, has anyone ever used the command 'rbd-replay'?
[10:10] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:12] * jordanP (~jordan@213.215.2.194) has joined #ceph
[10:13] * Cybert1nus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[10:16] * oro (~oro@84-72-20-79.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[10:17] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[10:18] * jclm (~jclm@118.130.198.164) Quit (Quit: Leaving.)
[10:20] * linjan (~linjan@46.210.218.144) Quit (Ping timeout: 480 seconds)
[10:26] * fsimonce (~simon@host253-71-dynamic.3-87-r.retail.telecomitalia.it) has joined #ceph
[10:29] * linjan (~linjan@109.253.45.103) has joined #ceph
[10:30] * jclm (~jclm@211.177.245.165) has joined #ceph
[10:33] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[10:36] * ahmeni (~tuhnis@5NZAAC605.tor-irc.dnsbl.oftc.net) Quit ()
[10:36] * Redshift (~brannmar@chomsky.torservers.net) has joined #ceph
[10:37] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:38] * fdmanana__ (~fdmanana@bl13-138-253.dsl.telepac.pt) has joined #ceph
[10:39] * flisky1 (~Thunderbi@106.39.60.34) has joined #ceph
[10:40] * flisky (~Thunderbi@106.39.60.34) Quit (Ping timeout: 480 seconds)
[10:45] * Hemanth (~Hemanth@117.192.235.178) has joined #ceph
[10:59] * Hemanth (~Hemanth@117.192.235.178) Quit (Ping timeout: 480 seconds)
[11:03] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[11:06] * Redshift (~brannmar@9S0AAAJI6.tor-irc.dnsbl.oftc.net) Quit ()
[11:06] * csharp (~Sketchfil@108.61.190.17) has joined #ceph
[11:09] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[11:09] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:09] * Hemanth (~Hemanth@61.1.232.225) has joined #ceph
[11:15] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[11:17] * nc_ch (~nc@flinux01.tu-graz.ac.at) has joined #ceph
[11:20] <nc_ch> hi ... i am having problems with tgt, i want to use rbd as a backing store, and i get ... well, a very non-explaining error ...
[11:20] <nc_ch> first things first,
[11:20] <nc_ch> my tgt is compiled with rbd ...
[11:20] <nc_ch> tgtadm --lld iscsi --op show --mode system | grep rbd
[11:20] <nc_ch> rbd (bsoflags sync:direct)
[11:20] <nc_ch> tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --bstype=rbd --backing-store="iscsi01/test01" --bsopts "conf=/etc/ceph/ceph.conf;id=iscsi01"
[11:21] <nc_ch> gives me a tgtadm: unknown error
[11:22] <nc_ch> client.iscsi01 has access to the pool iscsi01, the image test01 exists as well
[11:23] * Hemanth (~Hemanth@61.1.232.225) Quit (Ping timeout: 480 seconds)
[11:24] * sleinen (~Adium@2001:620:0:82::102) has joined #ceph
[11:24] * Hemanth (~Hemanth@61.1.233.235) has joined #ceph
[11:24] * oro (~oro@2001:620:20:16:1448:5e8f:2af8:1741) has joined #ceph
[11:29] <nc_ch> as a matter of fact, the problem seems to boil down to: Jun 3 11:27:38 cephi01 tgtd: device_mgmt(246) sz:74 params:path=iscsi01/test01,bstype=rbd,bsopts=conf=/etc/ceph/ceph.conf;id=iscsi01
[11:29] <nc_ch> Jun 3 11:27:38 cephi01 tgtd: bs_rbd_init(531) bs_rbd: ignoring unknown option ""
[11:29] <nc_ch> Jun 3 11:27:38 cephi01 tgtd: bs_rbd_init(540) bs_rbd_init: confname /etc/ceph/ceph.conf
[11:29] <nc_ch> Jun 3 11:27:38 cephi01 tgtd: bs_rbd_init(542) bs_rbd_init bsopts=;id=iscsi01
[11:29] <nc_ch> Jun 3 11:27:38 cephi01 tgtd: bs_rbd_init(565) bs_rbd_init: rados_connect: -2
[11:29] <nc_ch> shows that bsopts starts with a ; ... which seems to break there ...
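
rados_connect returning -2 is -ENOENT. Before digging further into tgt's bsopts parsing, it can help to confirm the same conf/id pair works outside tgt, using the pool and client names from the paste above:

    # can this identity reach the cluster and see the pool and image at all?
    rados --conf /etc/ceph/ceph.conf --id iscsi01 -p iscsi01 ls
    rbd --conf /etc/ceph/ceph.conf --id iscsi01 info iscsi01/test01

If both succeed, the failure sits in how tgt splits the semicolon-separated bsopts string (as the 'ignoring unknown option ""' line suggests) rather than in cephx permissions.
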
[11:35] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[11:36] * csharp (~Sketchfil@3DDAAALUA.tor-irc.dnsbl.oftc.net) Quit ()
[11:36] * Dragonshadow (~tritonx@manning2.torservers.net) has joined #ceph
[11:37] * rlrevell (~leer@184.52.129.221) has joined #ceph
[11:42] * SkyEye (~gaveen@175.157.57.148) has joined #ceph
[11:44] * linjan (~linjan@109.253.45.103) Quit (Read error: Connection reset by peer)
[11:44] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[11:45] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[11:46] * gaveen (~gaveen@175.157.23.156) Quit (Ping timeout: 480 seconds)
[11:50] * kefu (~kefu@114.92.116.93) Quit (Max SendQ exceeded)
[11:51] * kefu (~kefu@114.92.116.93) has joined #ceph
[11:58] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[12:01] * kefu (~kefu@114.92.116.93) Quit (Max SendQ exceeded)
[12:01] * lucas1 (~Thunderbi@218.76.52.64) Quit ()
[12:01] * kefu (~kefu@114.92.116.93) has joined #ceph
[12:06] * Dragonshadow (~tritonx@7R2AABHIL.tor-irc.dnsbl.oftc.net) Quit ()
[12:06] * Diablodoct0r (~dusti@91.230.121.131) has joined #ceph
[12:10] * kawa2014 (~kawa@212.110.41.244) Quit (Ping timeout: 480 seconds)
[12:10] * colonD (~colonD@70.58.239.67) Quit (Ping timeout: 480 seconds)
[12:10] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[12:10] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:14] * kefu is now known as kefu|afk
[12:16] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:19] * colonD (~colonD@173-165-224-105-minnesota.hfc.comcastbusiness.net) has joined #ceph
[12:25] * cok (~chk@2a02:2350:18:1010:18b3:c6dd:91a9:f1fc) Quit (Quit: Leaving.)
[12:30] * Hemanth (~Hemanth@61.1.233.235) Quit (Ping timeout: 480 seconds)
[12:36] * Diablodoct0r (~dusti@9S0AAAJN7.tor-irc.dnsbl.oftc.net) Quit ()
[12:36] * airsoftglock (~OODavo@tor-exit2-readme.puckey.org) has joined #ceph
[12:41] * Hemanth (~Hemanth@117.192.243.177) has joined #ceph
[12:43] * ismell (~ismell@host-64-17-88-159.beyondbb.com) has joined #ceph
[12:45] <tuxcraft1r> hi all, im trying to build a ceph cluster
[12:45] <tuxcraft1r> http://paste.debian.net/196614/
[12:45] <tuxcraft1r> 2015-06-03 12:43:57.769411 7f02a814b700 0 -- :/1012821 >> 192.168.24.23:6789/0 pipe(0x7f02a40253e0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f02a4025670).fault
[12:45] <tuxcraft1r> where can i find what that means
[12:50] * ismell_ (~ismell@host-24-56-189-172.beyondbb.com) Quit (Ping timeout: 480 seconds)
[12:52] * Mika_c (~Mk@122.146.93.152) Quit (Quit: Konversation terminated!)
[12:54] <tuxcraft1r> should ceph be running on port 6789?
[12:54] <tuxcraft1r> netstat 192.168.24.23 -tulp | grep 6789
[12:54] <tuxcraft1r> doesnt return anything
[12:54] <Be-El> afaik 6789 is the mon port, and the error message means that the mon/osd cannot access the mon on host 192.168.24.23
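
Two quick checks for whether a mon is actually up and bound to its port (the mon id is a placeholder):

    # is anything listening on the mon port?
    ss -tlnp | grep 6789
    # ask the daemon itself through its admin socket
    ceph daemon mon.ceph01 mon_status
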
[13:01] * squ (~Thunderbi@46.109.36.167) Quit (Ping timeout: 480 seconds)
[13:03] * cok (~chk@nat-cph1-sys.net.one.com) has joined #ceph
[13:05] <tuxcraft1r> Be-El: i think there may be something wrong with systemd and the ceph package in debian
[13:05] <tuxcraft1r> it looks like it aint responding to the /etc/init.d/ceph start mon.ceph01
[13:05] <tuxcraft1r> systemctl status ceph.service < indicates its active
[13:05] <tuxcraft1r> but i cant see the binary
[13:05] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[13:06] * airsoftglock (~OODavo@9S0AAAJPH.tor-irc.dnsbl.oftc.net) Quit ()
[13:06] <Be-El> why do you try to start it with the init script on a systemd setup?
[13:06] * vegas3 (~thundercl@176.10.99.208) has joined #ceph
[13:07] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[13:08] * bitserker (~toni@188.87.126.203) Quit (Read error: Connection reset by peer)
[13:08] * bitserker (~toni@188.87.126.203) has joined #ceph
[13:10] * linjan (~linjan@109.253.44.103) has joined #ceph
[13:10] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[13:11] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:13] * vikhyat (~vumrao@121.244.87.116) Quit ()
[13:15] * flisky1 (~Thunderbi@106.39.60.34) Quit (Quit: flisky1)
[13:16] * ganders (~root@190.2.42.21) has joined #ceph
[13:21] <tuxcraft1r> Be-El: it seems to be the default behaviour
[13:21] <tuxcraft1r> i sent an email to ceph-maintainers@lists.ceph.com
[13:22] <Be-El> tuxcraft1r: i've no clue, i've successfully avoided systemd until now
[13:22] <tuxcraft1r> i will see if i can install an other init system
[13:22] <Be-El> does this problem occur with the first mon?
[13:22] <tuxcraft1r> Be-El: i have had lots of issues with systemd so far and i even did a rhel7 exam with it
[13:23] <Be-El> that's why i avoid it like hell ;-)
[13:23] <tuxcraft1r> Be-El: yes with the first mon, but i'm fairly new to ceph and this is my first deployment
[13:23] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[13:23] <Be-El> tuxcraft1r: so you have defined a list of mon hosts in ceph.conf, and the first mon started is now trying to contact the others?
[13:24] * zhaochao (~zhaochao@125.39.8.226) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.0.1/20150526223604])
[13:24] <Be-El> tuxcraft1r: or are you trying to setup a cluster with a single mon first?
[13:25] * cok (~chk@nat-cph1-sys.net.one.com) Quit (Quit: Leaving.)
[13:25] * sleinen (~Adium@2001:620:0:82::102) Quit (Ping timeout: 480 seconds)
[13:25] <tuxcraft1r> Be-El: yes http://docs.ceph.com/docs/master/install/manual-deployment/
[13:26] <Be-El> tuxcraft1r: yes to multiple mons or yes to single mon?
[13:30] <tuxcraft1r> Be-El: single mon
[13:32] <Be-El> tuxcraft1r: so the mon should be starting without a problem. you can check the log file in /var/log/ceph/ for error during mon start
[13:33] <Be-El> tuxcraft1r: maybe it's a good idea to use the init scripts manually first to ensure the mon is up and running, stop it afterwards and then try to get the systemd sh*t running
[13:36] <tuxcraft1r> Be-El: yes im switching to sysvinit-core now to get some control back
[13:36] * vegas3 (~thundercl@3DDAAAL1C.tor-irc.dnsbl.oftc.net) Quit ()
[13:36] * kefu|afk is now known as kefu
[13:36] * cooey1 (~ricin@marcuse-1.nos-oignons.net) has joined #ceph
[13:46] <tuxcraft1r> Be-El: victory got it to start
[13:46] <tuxcraft1r> is there a way to add a note to the official documentation
[13:46] <tuxcraft1r> apt-get install -y sysvinit-core && touch /var/lib/ceph/mon/ceph-ceph01/sysvinit
[13:47] <Be-El> tuxcraft1r: i think the documentation is managed in the ceph git repository. you should clone it, make the changes and send a pull request
[13:47] <Be-El> tuxcraft1r: or send a bugreport to the mailing list
[13:50] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[13:51] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[13:52] * valeech (~valeech@pool-72-86-37-215.clppva.fios.verizon.net) has joined #ceph
[14:01] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[14:01] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) has joined #ceph
[14:04] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[14:05] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[14:06] * cooey1 (~ricin@3DDAAAL2Z.tor-irc.dnsbl.oftc.net) Quit ()
[14:06] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:06] * Architect (~Kwen@176.10.99.207) has joined #ceph
[14:06] <vikhyat> leseb_: Hey Sebastien , I have some query regarding cinder backup blog : http://www.sebastien-han.fr/blog/2015/02/17/openstack-and-backup/
[14:07] <vikhyat> leseb_: wanted to have a quick chat ?
[14:07] <vikhyat> leseb_: please let me know if you have some time
[14:08] * cok (~chk@2a02:2350:18:1010:514d:55bb:4a55:6cc8) has joined #ceph
[14:09] <leseb_> vikhyat: hey, what is it about?
[14:10] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[14:10] <vikhyat> leseb_: it is regarding consistent backup
[14:11] <vikhyat> leseb_: https://paste.fedoraproject.org/228420/33333442/
[14:12] <vikhyat> leseb_: if 2nd point is not implemented will it cause any issue because if I will tale the snapshot
[14:12] <vikhyat> take*
[14:12] <vikhyat> leseb_: I am in good condition
[14:12] <vikhyat> leseb_: steps 1,4 and 5 to get a new volume backup
[14:13] <vikhyat> take the snapshot and then backup the cinder volume
[14:13] <vikhyat> and one good feature we will get here is that it will be RBD incremental backups so less space usage in cluster
[14:14] <vikhyat> leseb_: is my understanding correct?
[14:15] * georgem (~Adium@184.151.179.0) has joined #ceph
[14:17] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[14:19] <zenpac> What is the smallest number of servers needed to setup a test-system for Ceph? My goals are to be able to simulate and monitor failure modes of Ceph. I don't need performance for now.
[14:19] <leseb_> vikhyat: give me a sec, reading :)
[14:19] <T1w> 3 virtual servers on a laptop?
[14:19] <vikhyat> leseb_: sure thanks :)
[14:20] <zenpac> T1w: Ok.. Thanks.
[14:20] <T1w> an OSD on each (3 in all), a MON on one and perhaps a RGW on another
[14:20] <T1w> that should be the bare minimum thats compatible with default settings of 3 copies of everything
[14:20] <zenpac> Ok,, That makes sense and agrees with what I was reading in the docs.
[14:21] <lathiat> To simulate a monitor failure you need 3 mon nodes
[14:21] <lathiat> http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
[14:21] <zenpac> Why not have Mon running on all 3?
[14:21] <leseb_> vikhyat: step 2 is not implemented, this won't prevent you to take any snapshot, but if you want to get advantage of the fs freeze/thaw API you'll have to run the calls manually
[14:21] <zenpac> You beat me to it.
[14:22] <zenpac> Is it possible to make all the VM's identical?
[14:22] <T1w> yeah, ok, but adding monitors is also a part of testing, so.. ;)
[14:22] <vikhyat> leseb_: like xfs_freeze before taking the snapshot ?
[14:22] * rotbeard (~redbeard@x5f74c18e.dyn.telefonica.de) Quit (Quit: Leaving)
[14:22] <leseb_> like this: http://www.sebastien-han.fr/blog/2015/02/09/openstack-perform-consistent-snapshots-with-qemu-guest-agent/
[14:22] <leseb_> vikhyat: ^
[14:22] <T1w> before you can test a MON failure you'd need at least 1 more (and preferably 2 more)
[14:23] <vikhyat> leseb_: okay
[14:23] <leseb_> vikhyat: does this answer your question?
[14:24] <T1w> zenpac: well.. make a new guest, install OS and add some diskspace for OSDs
[14:24] <T1w> clone it 2 times
[14:24] <vikhyat> leseb_: hmm yes, but I was thinking: if I won't freeze the I/O and only take the snapshot, it won't be a consistent backup?
[14:24] <T1w> change hostname/ip
[14:24] <T1w> fire up
[14:24] <T1w> that's it
[14:25] <zenpac> T1w: Yea i thought I'd need a quorum to keep Mon happy, and that seems like 3 MONs.
[14:25] <T1w> zenpac: always an uneven number!
[14:25] <leseb_> zenpac: or you can use https://github.com/ceph/ceph-ansible to get a vagrant env with everything configured
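
A minimal ceph.conf for such a 3-VM test cluster might look like this (hostnames, addresses and fsid are placeholders; one mon and one osd per VM):

    [global]
    fsid = <uuid from uuidgen>
    mon initial members = node1,node2,node3
    mon host = 192.168.100.11,192.168.100.12,192.168.100.13
    osd pool default size = 3
    osd pool default min size = 2
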
[14:26] <leseb_> vikhyat: well it depends which consistency you're looking for (application level? fs level? block?)
[14:26] <vikhyat> leseb_: like I have a cinder volume and I want to take a backup of it. I know if I just run cinder-backup it won't be consistent, but if I take a snapshot of that volume and then back that up it would be consistent, as snapshots are consistent?
[14:26] <vikhyat> leseb_: fslevel ?
[14:26] <leseb_> vikhyat: filesystem level (with fs freeze or xfs_freeeze)
[14:27] <leseb_> vikhyat: yup your statement is correct
[14:27] <vikhyat> leseb_: then fs freeze + sync (flush the cache) + snapshot + cinder backup ?
[14:27] <leseb_> but then you get a consistent snapshot at the block layer, no fs, no application
[14:27] <vikhyat> leseb_: right
[14:27] * kefu is now known as kefu|afk
[14:28] <vikhyat> leseb_: but for fs : then fs freeze + sync (flush the cache) + snapshot + cinder backup ?
[14:28] <vikhyat> leseb_: this is correct ?
[14:28] * frickler_ is now known as frickler
[14:28] <leseb_> ideally first you have fsfreeze hooks (like mentioned in the article) then fs freeze, then block snapshot, then backup
[14:29] <vikhyat> leseb_: right got the point
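
A sketch of that order of operations from the shell (mount point and names hypothetical; exact cinder flags vary by client release, and in practice the freeze/thaw is driven by the qemu-guest-agent hooks from the article):

    fsfreeze --freeze /mnt/data       # inside the guest: quiesce the filesystem
    # from a cinder client: block-level snapshot while the fs is frozen
    cinder snapshot-create --display-name vol1-snap <volume-id>
    fsfreeze --unfreeze /mnt/data     # thaw as soon as the snapshot exists
    cinder backup-create --display-name vol1-backup <volume-id>
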
[14:29] <vikhyat> leseb_: one more query regarding backup with SWIFT + Ceph
[14:29] <vikhyat> leseb_: it is a bad idea correct ?
[14:30] <leseb_> vikhyat: what do you mean swift + ceph?
[14:30] * deepsa (~Deependra@00013525.user.oftc.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[14:31] <vikhyat> leseb_: Swift backup driver and swift is used with ceph with the help of radosgw
[14:31] <vikhyat> leseb_: do you think it would be a good idea
[14:31] <leseb_> vikhyat: this is not necessarily a bad idea to use rgw with the swift API and erasure code, but with RBD you get incremental backups
[14:32] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[14:32] <vikhyat> leseb_: with rbd we will get incremental backups
[14:32] <leseb_> with rgw and swift api you won't, this is implemented for the swift backend though
[14:33] <vikhyat> you mean swift backend other than rgw ?
[14:34] <leseb_> if you use swift directly it is (with the cinder incremental API)
[14:34] <vikhyat> okay got it
[14:35] <vikhyat> leseb_: thanks a lot for your time , nice inputs :)
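
The RBD-level mechanism behind those incremental backups, for reference (image and snapshot names hypothetical):

    rbd snap create rbd/vol1@base
    rbd export-diff rbd/vol1@base vol1-base.diff
    # later: export only the changes since 'base'
    rbd snap create rbd/vol1@daily1
    rbd export-diff --from-snap base rbd/vol1@daily1 vol1-daily1.diff
    # apply onto a copy that already carries the 'base' snapshot
    rbd import-diff vol1-daily1.diff rbd/vol1-copy
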
[14:36] * Architect (~Kwen@9S0AAAJUQ.tor-irc.dnsbl.oftc.net) Quit ()
[14:36] * Unforgiven (~cheese^@wannabe.torservers.net) has joined #ceph
[14:36] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[14:37] * sleinen (~Adium@194.230.155.185) has joined #ceph
[14:37] * sleinen (~Adium@194.230.155.185) Quit (Remote host closed the connection)
[14:38] * cmdrk (~lincoln@c-71-194-163-11.hsd1.il.comcast.net) has joined #ceph
[14:38] <leseb_> vikhyat: you're welcome
[14:42] * kefu|afk (~kefu@114.92.116.93) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:47] * dephcon (~oftc-webi@c73-110.rim.net) has joined #ceph
[14:50] <dephcon> does anyone know if it's possible to syslog to a remote/local address/port OR disable the facility used for syslog? I'm able to split out ceph-mon and ceph-osd to separate files in syslog-ng but they still hit /var/log/syslog due to forcing a facility that's captured by the f_syslog3 filter
[14:50] <dephcon> i'm trying to avoid touching the main syslog-ng.conf file and only use an include file in a massive deployment
[14:51] <dephcon> i see they've added the ability to change the facility, but i've tried setting it to "" or null and it doesn't seem to work (i'm new to syslog) :(
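
An untested sketch of the include-file approach dephcon describes: match the ceph daemons by program name and mark the messages final, so they are consumed before the catch-all f_syslog3 filter sees them (source name s_src and paths vary by distro):

    filter f_ceph { program("^ceph-(mon|osd)" type(pcre)); };
    destination d_ceph { file("/var/log/ceph-syslog.log"); };
    log { source(s_src); filter(f_ceph); destination(d_ceph); flags(final); };

On the ceph side, 'log to syslog = true' and 'err to syslog = true' in ceph.conf turn on syslog output in the first place.
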
[14:51] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[14:53] * primechu_ (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[14:53] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Read error: Connection reset by peer)
[14:54] * ramonskie (ab15507e@107.161.19.109) has joined #ceph
[14:55] <ramonskie> tried to add extra monitors with ceph-deploy and now i get a lot of authentication errors
[14:55] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[14:55] <ramonskie> now i want to remove them again but get the following error [WARNIN] 2015-06-03 14:40:55.837794 7fd5c0fdf700 0 librados: mon. authentication error (1) Operation not permitted
[14:55] <alfredodeza> ramonskie: how did you try adding them, what command did you use?
[14:56] * georgem (~Adium@184.151.179.0) Quit (Quit: Leaving.)
[14:56] <ramonskie> first with ceph-deploy mon create hostname
[14:56] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[14:56] <alfredodeza> that command doesn't 'add', it 'creates'
[14:56] <alfredodeza> there is an 'add' subcommand
[14:56] <alfredodeza> that is significantly different
[14:56] <alfredodeza> and that may be why you are having those issues
[14:57] <ramonskie> then i checked the log files and it errors with authentication. then i tried to add it with monmaptool
[14:57] <ramonskie> and then it got really bad, my initial monitor wouldn't come back up
[14:57] * Hemanth (~Hemanth@117.192.243.177) Quit (Ping timeout: 480 seconds)
[14:58] <ramonskie> so i removed the new mon with the monmaptool, stopped all monitors and pushed the configuration again
[14:58] <alfredodeza> yep, so, I think that the problem is that you did 'create' instead of 'add' which is like creating a new cluster
[14:58] <ramonskie> damn
[14:58] <ramonskie> how can i fix this ?
[14:58] <alfredodeza> 'add' will get a new monitor added to an existing cluster
[14:59] <alfredodeza> I have no idea how to fix it :(
[14:59] <ramonskie> then this documentation is wrong http://ceph.com/docs/master/rados/deployment/ceph-deploy-mon/#add-a-monitor
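
For context, the difference between the two subcommands (hostname is a placeholder):

    ceph-deploy mon add node4        # joins node4 to an existing cluster's quorum
    ceph-deploy mon create node4     # bootstraps a mon as if deploying a fresh cluster
    ceph-deploy mon create-initial   # deploys the mons listed in 'mon initial members'

Running create against a live cluster can leave the new mon with keys and maps that don't match the existing quorum, which fits the authentication errors above.
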
[15:00] * calvinx (~calvin@101.100.172.246) has joined #ceph
[15:03] <ramonskie> any idea how i can check if i have multiple clusters now?
[15:05] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:05] * mildan (~mildan@206.47.249.246) has joined #ceph
[15:06] * Unforgiven (~cheese^@5NZAAC7DQ.tor-irc.dnsbl.oftc.net) Quit ()
[15:06] * csharp (~click@7R2AABHLY.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:06] * Mika_c (~Mk@118-169-254-185.dynamic.hinet.net) has joined #ceph
[15:07] * Hemanth (~Hemanth@117.192.243.190) has joined #ceph
[15:08] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[15:09] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:09] <zenpac> T1w: Would I have 3 separate MDS servers too (one on each of my 3 vms)?
[15:12] * dneary (~dneary@66.187.233.207) has joined #ceph
[15:12] <anorak> Hi all. I was in the process of removing an osd from the cluster. However, i missed the first step in which I am supposed to stop the osd process. When I tried the *last* step, "ceph osd rm 0", it gives me an error "Error EBUSY: osd.0 is still up; must be down before removal."
[15:12] <anorak> which seems understandable
[15:13] <anorak> i am thinking about killing the process on the storage machine with "kill -9 "
[15:13] <anorak> but before I do that, I would like to have your opinion(s) on thisa
[15:13] <anorak> this*
[15:15] <T1w> zenpac: multiple active MDSs are not supported at the moment
[15:15] <T1w> but you can have 1 active and 2 standby MDSs
[15:15] <T1w> if the active fails, one of the passive takes over
[15:15] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[15:17] <rlrevell> what's the best way to test the S3 gateway? are most people using https://github.com/ceph/s3-tests?
[15:19] * vbellur (~vijay@121.244.87.124) has joined #ceph
[15:19] * sleinen (~Adium@194.230.155.185) has joined #ceph
[15:19] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:20] <zenpac> T1w: good.. We can test that too..
[15:23] * Hemanth (~Hemanth@117.192.243.190) Quit (Ping timeout: 480 seconds)
[15:23] * sleinen (~Adium@194.230.155.185) Quit (Remote host closed the connection)
[15:25] <ramonskie> anorak you can still do stop ceph-osd id=0
[15:25] <ramonskie> then you can do crush remove and the rm and then auth del
[15:27] * kefu (~kefu@114.92.116.93) has joined #ceph
[15:28] <anorak> ramonskie: Thanks for your reply. I did that and it gave me an error "/etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines , /var/lib/ceph defines )"
[15:28] <anorak> ramonskie: but killing the process and waiting for 10 sec did the trick afterwards
[15:28] <ramonskie> great
[15:29] <ramonskie> normally you can't even crush rm when the osd is still up, so it's strange that you were able to do a rm before that
[15:32] <anorak> ramonskie: Yeah...strange but true :) . Thanks for your help though
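
The usual removal order, for reference (osd.0 as in anorak's case; the stop command depends on the init system):

    ceph osd out 0                   # optional: let data rebalance away first
    /etc/init.d/ceph stop osd.0      # or: stop ceph-osd id=0, or systemctl stop ceph-osd@0
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0
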
[15:32] * real (~lalelu@invincible.the-real.org) has joined #ceph
[15:32] * JFQ (~ghartz@AStrasbourg-651-1-162-51.w90-6.abo.wanadoo.fr) has joined #ceph
[15:34] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[15:34] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[15:36] * csharp (~click@7R2AABHLY.tor-irc.dnsbl.oftc.net) Quit ()
[15:36] * Thononain (~Enikma@spftor1e1.privacyfoundation.ch) has joined #ceph
[15:36] <raw> i get corrupt files in cephfs - reproducible. i have a tool that writes a 20GB ASCII file. If the file is written to cephfs, it contains runs of 0x00 bytes a few bytes long - the file size is the same.
[15:36] <raw> if im using the local filesystem instead the file is ok, so im sure it is cephfs related somehow. if i simply copy the file from local fs to cephfs, everything is fine. im using cephfs 0.94.1 with metadata on a SSD pool and data on a HDD pool with SSD cache tiering.
[15:37] <ramonskie> i see now the following in my monitor log "cephx: verify_authorizer could not decrypt ticket info: error: NSS AES final round failed"
[15:37] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[15:37] <raw> my command looks like "cat input.csv | php converter.php > output.csv". it takes 1-2 hours to process, much much longer than a normal file copy takes.
[15:38] <raw> the amount of NULL'ed data is different each run
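
A couple of ways to pin down where the NULs enter (paths hypothetical; GNU grep assumed):

    # do cephfs and a local copy of the same output differ?
    md5sum /mnt/cephfs/output.csv /tmp/output.csv
    # print byte offsets of runs of 16 or more NUL bytes
    grep -aboP '\x00{16,}' /mnt/cephfs/output.csv | head
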
[15:39] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[15:39] * Hemanth (~Hemanth@117.213.178.190) has joined #ceph
[15:39] * ghartz_ (~ghartz@AStrasbourg-651-1-221-160.w86-223.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[15:40] * calvinx (~calvin@101.100.172.246) has joined #ceph
[15:40] * treenerd (~treenerd@2001:4dd0:ff00:809d:221:ccff:feb9:4549) has joined #ceph
[15:40] <tuxcraft1r> does the ceph.conf have to be the same on all nodes?
[15:40] <tuxcraft1r> it makes sense if it does
[15:40] * calvinx (~calvin@101.100.172.246) Quit ()
[15:41] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[15:42] <tuxcraft1r> but at the same time there can be differences in a few things like ssd caching etc
[15:42] <ramonskie> sorry i'm not sure
[15:44] <T1w> raw: is it input.csv or output.csv thats different?
[15:44] <zenpac> T1w: can my servers be single-ip systems?
[15:45] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[15:45] <T1w> zenpac: for testing there should be no need for a backend network
[15:45] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[15:46] <zenpac> Ok.. cool.. I realize production systems really need to have separate storage IO net, and that could be another failure mode to consider.
[15:47] <T1w> yeah, but you can lower that by using bonding
[15:47] * rdas (~rdas@121.244.87.116) has joined #ceph
[15:48] <T1w> and using a stack of switches
[15:48] <T1w> if one switch failes then the other would just continue
[15:48] <T1w> but of course that becomes a bit expensive with 10gb
[15:49] * rdas (~rdas@121.244.87.116) Quit ()
[15:49] * dephcon (~oftc-webi@c73-110.rim.net) Quit (Remote host closed the connection)
[15:50] * dephcon (~oftc-webi@c73-110.rim.net) has joined #ceph
[15:51] * Hemanth (~Hemanth@117.213.178.190) Quit (Ping timeout: 480 seconds)
[15:52] * ade (~abradshaw@tmo-102-78.customers.d1-online.com) has joined #ceph
[15:52] * Hemanth (~Hemanth@61.1.232.92) has joined #ceph
[15:53] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) has joined #ceph
[15:54] <raw> T1w, im reading from one file on cephfs and writing to a new one
[15:55] * T1w (~jens@node3.survey-it.dk) Quit (Quit: Leaving)
[15:57] * oro (~oro@2001:620:20:16:1448:5e8f:2af8:1741) Quit (Ping timeout: 480 seconds)
[15:59] * mildan (~mildan@206.47.249.246) Quit (Remote host closed the connection)
[15:59] * mildan (~mildan@206.172.0.204) has joined #ceph
[15:59] * treenerd (~treenerd@2001:4dd0:ff00:809d:221:ccff:feb9:4549) Quit (Quit: Leaving)
[16:00] * alram (~alram@192.41.52.12) has joined #ceph
[16:02] <tuxcraft1r> should i be able to ssh between all nodes first, prior to setting up ceph?
[16:02] <ramonskie> from the ceph-deploy server yes
[16:02] <tuxcraft1r> there is no ceph-deploy command in my ceph debian package
[16:02] <tuxcraft1r> so im doing everything manual
[16:02] <mildan> easy to setup with ssh-copy-id
[16:03] <ramonskie> you can add ceph-deploy see https://github.com/ceph/ceph-deploy
[16:06] * madkiss (~madkiss@guestkeeper.heise.de) Quit (Quit: Leaving.)
[16:06] * Thononain (~Enikma@8Q4AAA76Q.tor-irc.dnsbl.oftc.net) Quit ()
[16:06] * Neon (~hassifa@5NZAAC7IA.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:08] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:08] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[16:08] * linuxkidd (~linuxkidd@63.79.89.17) has joined #ceph
[16:08] * sleinen (~Adium@130.59.94.65) has joined #ceph
[16:10] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[16:11] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[16:11] * ramonskie (ab15507e@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[16:11] * cok (~chk@2a02:2350:18:1010:514d:55bb:4a55:6cc8) Quit (Quit: Leaving.)
[16:12] * bkopilov (~bkopilov@bzq-79-176-57-219.red.bezeqint.net) Quit (Quit: Leaving)
[16:15] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[16:16] * sleinen (~Adium@130.59.94.65) Quit (Ping timeout: 480 seconds)
[16:22] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[16:24] * sleinen1 (~Adium@2001:620:0:82::102) Quit (Read error: Connection reset by peer)
[16:36] * Neon (~hassifa@5NZAAC7IA.tor-irc.dnsbl.oftc.net) Quit ()
[16:36] * xul (~maku@89.105.194.71) has joined #ceph
[16:36] * shakamunyi (~shakamuny@216.127.127.20) Quit (Remote host closed the connection)
[16:37] <zenpac> Does anyone have an overall diagram that maps out major services for Ceph? The stock ones in the docs don't seem to have all parts. I'm mainly concerned with high-level things like MDS, RGW, MON, OSD from a health/failure point of view...
[16:37] <rlrevell> so i've created an s3 gateway, following the docs in http://ceph.com/docs/master/radosgw/config/ but the test fails with 405 Method Not Allowed. is this just S3's way of saying "Invalid credentials"?
[16:38] <zenpac> I'm trying to model Ceph's overall health status.
[16:40] <zenpac> Perhaps down to the Object level.
[16:41] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[16:43] * ircolle (~Adium@2601:1:a580:1735:c50c:9433:f79b:a4ab) has joined #ceph
[16:45] <tuxcraft1r> http://docs.ceph.com/docs/master/install/manual-deployment/#long-form
[16:46] <tuxcraft1r> im going to ask: is it stupid to set up a system manually without ceph-deploy?
[16:46] <tuxcraft1r> will it be manageable?
[16:46] <alfredodeza> tuxcraft1r: of course it isn't !
[16:46] <tuxcraft1r> without using anisble or other management system
[16:47] <tuxcraft1r> as i am doing my best to understand the workings of ceph so i have better troubleshooting skills
[16:47] <alfredodeza> the difference is that ceph-deploy is very opinionated so that you can have something that follows certain conventions that are hard to keep up with when doing it manually
[16:47] <alfredodeza> tuxcraft1r: to get started and to actually understand what is going on I suggest using it
[16:47] <alfredodeza> because along the way it tells you **exactly** what it is doing
[16:47] <alfredodeza> you can then use that output to see what happened and learn what it did
[16:48] <tuxcraft1r> i wonder why ceph-deploy isnt in debian jessie then
[16:48] <alfredodeza> because we don't have builds for jessie
[16:48] <tuxcraft1r> i know
[16:48] <alfredodeza> if you are comfortable with Python install tools you can install with python
[16:49] <tuxcraft1r> but ceph seems to be included with debian jessie
[16:49] <alfredodeza> pip/easy_install
[16:49] <tuxcraft1r> ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
[16:49] <tuxcraft1r> so im trying to setup a cluster with that
[16:49] <alfredodeza> tuxcraft1r: that is probably Debian-provided
[16:49] <tuxcraft1r> yes it is
[16:50] <tuxcraft1r> im a bit stuck with the key management
[16:50] <tuxcraft1r> when doing it manually
[16:50] <tuxcraft1r> i dont know what ceph wants from me
[16:51] <tuxcraft1r> i set up one monitor node and am trying to configure the osds on other nodes
[16:51] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:51] * ade (~abradshaw@tmo-102-78.customers.d1-online.com) Quit (Quit: Too sexy for his shirt)
[16:52] * Hemanth (~Hemanth@61.1.232.92) Quit (Ping timeout: 480 seconds)
[16:52] <tuxcraft1r> http://docs.ceph.com/docs/master/install/manual-deployment/ < it's not clear to me if i need to set up the mon on all the nodes
[16:52] <tuxcraft1r> and then add the osds
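
The manual-deployment "long form" bootstraps a single first mon roughly like this (mon id ceph01 and address taken from tuxcraft1r's earlier paste; the fsid must match the one in ceph.conf):

    ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key \
        -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
    ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
    monmaptool --create --add ceph01 192.168.24.23 --fsid <fsid from ceph.conf> /tmp/monmap
    ceph-mon --mkfs -i ceph01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

OSDs only come after that first mon is up and 'ceph -s' answers.
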
[16:59] * sleinen (~Adium@130.59.94.65) has joined #ceph
[17:00] <tuxcraft1r> maybe i should install a debian 7 system and install ceph-deploy on that and try to configure debian 8 ceph nodes from there
[17:00] <raw> it turned out that the problem is not cephfs related. sorry.
[17:01] * Hemanth (~Hemanth@117.192.226.244) has joined #ceph
[17:01] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[17:05] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:05] * reed (~reed@198.8.80.61) has joined #ceph
[17:06] * xul (~maku@5NZAAC7JV.tor-irc.dnsbl.oftc.net) Quit ()
[17:06] * blip2 (~ricin@nx-74205.tor-exit.network) has joined #ceph
[17:06] * analbeard (~shw@support.memset.com) has left #ceph
[17:07] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[17:07] * sleinen (~Adium@130.59.94.65) Quit (Ping timeout: 480 seconds)
[17:08] <raw> tuxcraft1r, for high availability, it is recommended to have an uneven number of monitors, but more than one. good numbers are 3, 5, 7...
[17:08] <raw> i have only used ceph-deploy so far, it's very easy so i recommend it
[17:09] <raw> tuxcraft1r, i had to play around a bit until my debian installed the ceph 0.94 version from the ceph repo instead of debian's one
[17:11] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[17:11] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[17:16] * itsjpr (~imjpr@thing2.it.uab.edu) has joined #ceph
[17:16] * imjpr (~imjpr@138.26.125.8) has joined #ceph
[17:17] * jordanP (~jordan@213.215.2.194) Quit (Quit: Leaving)
[17:19] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[17:21] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:25] * xinze (~xinze@222.47.66.8) has joined #ceph
[17:25] <jrocha> hi folks, so I am developing a new cls class and I am having some trouble when returning values in the out bufferlist
[17:25] <jrocha> I cannot manage to return values with this out bufferlist for some reason
[17:26] <kefu> jrocha: so you are able to run, for example the say_hello method ?
[17:26] <jrocha> I have tried with the encode method and with just appending a string, none of those seem to work
[17:27] <kefu> iirc, you asked about the cls test cases the other day.
[17:28] <kefu> jrocha: i believe the ceph_test_cls_hello is one of the "debug" executables.
[17:28] * thomnico (~thomnico@31.15.49.20) has joined #ceph
[17:28] <kefu> and if it works for you, then i believe your cls class should work as well.
[17:28] <jrocha> kefu, yes.. sorry. It runs smoothly but I have taken another look into it and it seems that the out bufferlist is only passed to the client when an error is set (!)
[17:29] * thomnico (~thomnico@31.15.49.20) Quit ()
[17:29] <jrocha> kefu, wow!
[17:29] <kefu> is set?
[17:29] <jrocha> kefu, it's my mistake for not having seen this behavior but this is weird behavior IMO
[17:29] <jrocha> kefu, // if we try to return anything > 0 here the client will see 0.
[17:30] * mildan (~mildan@206.172.0.204) Quit (Remote host closed the connection)
[17:30] <jrocha> kefu, that's in the writes_dont_return_data method
[17:30] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[17:30] <kefu> should return 0
[17:30] <kefu> that's the protocol of cls, i assume.
[17:30] <jrocha> kefu, so if one sets the out bufferlist to something but returns >= 0, the buffer will not be passed on to the client
[17:31] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:31] <jrocha> kefu, I guess there must be a good reason for that but I would never have guessed that!
[17:31] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[17:31] * Hemanth (~Hemanth@117.192.226.244) Quit (Ping timeout: 480 seconds)
[17:31] <kefu> true. you can see the return val of a cls method as merely the stuff visible in the transport layer
[17:32] <jrocha> kefu, because this means that apparently we have to use the input buffer as both input and output!
[17:32] <kefu> but you should define your own protocol using the in/out bufferlist
[17:32] <jrocha> kefu, well, I gotta run to a meeting myself this time.
[17:32] <kefu> haha. sure.
[17:32] <kefu> ttyl.
[17:32] <jrocha> kefu, we talk later/tomorrow
[17:32] <jrocha> kefu, and thanks!
[17:32] * moore (~moore@64.202.160.88) has joined #ceph
[17:32] <jrocha> ;)
[17:32] <kefu> i am about to call it a night. have a nice meeting =D
[17:32] <kefu> yw
[17:34] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Remote host closed the connection)
[17:35] * xinze (~xinze@222.47.66.8) has left #ceph
[17:36] * blip2 (~ricin@9S0AAAJ74.tor-irc.dnsbl.oftc.net) Quit ()
[17:36] * DougalJacobs (~Peaced@37.122.252.59) has joined #ceph
[17:39] <rlrevell> so no one has any idea why radosgw returns 405 Method Not Allowed when attempting any operation? I googled what this means in the context of S3 and I couldn't find anything applicable.
[17:39] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[17:39] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Read error: Connection reset by peer)
[17:39] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[17:40] <rlrevell> all i could find was some references to DNS issues, but there is no issue accessing the S3 gateway by hostname as it's defined in /etc/hosts across all machines
[17:40] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[17:40] * Hemanth (~Hemanth@117.192.226.244) has joined #ceph
[17:41] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[17:42] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[17:42] * kefu is now known as kefu|afk
[17:42] <zaitcev> Using /etc/hosts for hostname access does not sound terribly realistic, because you have to list every last bucket in there. In DNS there's a wildcard for that.
[17:42] * linjan (~linjan@109.253.44.103) Quit (Ping timeout: 480 seconds)
[17:43] <rlrevell> ah. i just went by the guide http://ceph.com/docs/master/radosgw/config/#test-s3-access which only wants the hostname of the radosgw machine for the test script
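zaitcev's point is that S3 virtual-host-style requests put the bucket name in the hostname (bucket.gateway.example.com), which /etc/hosts cannot wildcard. A local resolver can; with dnsmasq, for instance, a single line covers every possible bucket (domain and address made up):

    # resolve the gateway name and any bucket subdomain of it to the radosgw host
    address=/.gateway.example.com/192.168.0.10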
[17:44] * dgurtner (~dgurtner@178.197.231.155) Quit (Ping timeout: 480 seconds)
[17:44] <gregsfortytwo> jrocha: kefu|afk: you can't return data on anything that does a write, is your class doing that?
[17:45] * Rickus (~Rickus@office.protected.ca) has joined #ceph
[17:45] <gregsfortytwo> this is because returning the same data if the op gets replayed is very difficult or impossible :( and we need to handle that transparently, so you don't ever get to return any data besides the return code
[17:45] <gregsfortytwo> for reads you should be able to return whatever you want in most any circumstance, though
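A minimal sketch of the rule kefu and gregsfortytwo describe, loosely modelled on cls_hello (class and method names made up): a method registered as a read (CLS_METHOD_RD) may hand data back through the out bufferlist, whereas a write method only ever delivers its return code.

    #include "objclass/objclass.h"

    CLS_VER(1,0)
    CLS_NAME(demo)

    // read op: whatever is appended to *out reaches the client
    static int read_greeting(cls_method_context_t hctx, bufferlist *in, bufferlist *out)
    {
      out->append("hello");
      return 0;
    }

    CLS_INIT(demo)
    {
      cls_handle_t h_class;
      cls_method_handle_t h_read_greeting;

      cls_register("demo", &h_class);
      // registered RD, not WR: a WR method's out buffer is dropped on replay grounds
      cls_register_cxx_method(h_class, "read_greeting", CLS_METHOD_RD,
                              read_greeting, &h_read_greeting);
    }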
[17:46] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[17:51] * tw0fish (~tw0fish@UNIX4.ANDREW.CMU.EDU) has joined #ceph
[17:55] * xarses (~andreww@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:58] * lpabon (~quassel@24-151-54-34.dhcp.nwtn.ct.charter.com) Quit (Ping timeout: 480 seconds)
[18:01] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[18:02] * bandrus (~brian@76-14-123-148.rk.wavecable.com) has joined #ceph
[18:03] * oro (~oro@84-72-20-79.dclient.hispeed.ch) has joined #ceph
[18:04] * johanni (~johanni@173.226.103.101) has joined #ceph
[18:05] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) has joined #ceph
[18:05] * johanni_ (~johanni@173.226.103.101) has joined #ceph
[18:05] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[18:05] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:06] * DougalJacobs (~Peaced@5NZAAC7NA.tor-irc.dnsbl.oftc.net) Quit ()
[18:06] * QuantumBeep (~Pulec@exit1.ipredator.se) has joined #ceph
[18:09] * imjpr (~imjpr@138.26.125.8) Quit (Ping timeout: 480 seconds)
[18:09] * itsjpr (~imjpr@thing2.it.uab.edu) Quit (Ping timeout: 480 seconds)
[18:10] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[18:10] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Read error: Connection reset by peer)
[18:13] * johanni (~johanni@173.226.103.101) Quit (Remote host closed the connection)
[18:16] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (Quit: Leaving.)
[18:17] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[18:18] * nwf (~nwf@00018577.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:19] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[18:21] * johanni_ (~johanni@173.226.103.101) Quit (Ping timeout: 480 seconds)
[18:21] * kefu|afk (~kefu@114.92.116.93) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:23] * johanni (~johanni@173.226.103.101) has joined #ceph
[18:23] * johanni_ (~johanni@173.226.103.101) has joined #ceph
[18:24] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Remote host closed the connection)
[18:25] * calvinx (~calvin@101.100.172.246) has joined #ceph
[18:26] * wushudoin_ (~wushudoin@209.132.181.86) has joined #ceph
[18:27] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:28] * johanni (~johanni@173.226.103.101) Quit (Remote host closed the connection)
[18:28] * imjpr (~imjpr@164.111.200.170) has joined #ceph
[18:29] * itsjpr (~imjpr@164.111.200.170) has joined #ceph
[18:31] * nils__ (~nils@doomstreet.collins.kg) has joined #ceph
[18:33] * mildan_ (~mildan@206.172.0.204) has joined #ceph
[18:33] * wushudoin (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[18:35] * wushudoin_ (~wushudoin@209.132.181.86) Quit (Ping timeout: 480 seconds)
[18:35] * Vacuum_ (~Vacuum@i59F79744.versanet.de) has joined #ceph
[18:35] * wushudoin_ (~wushudoin@38.140.108.2) has joined #ceph
[18:35] * Concubidated (~Adium@gw.sepia.ceph.com) has joined #ceph
[18:35] * johanni_ (~johanni@173.226.103.101) Quit (Ping timeout: 480 seconds)
[18:36] * QuantumBeep (~Pulec@3DDAAAMGR.tor-irc.dnsbl.oftc.net) Quit ()
[18:36] * AluAlu (~Arcturus@67.ip-92-222-38.eu) has joined #ceph
[18:36] * nils_ (~nils@doomstreet.collins.kg) Quit (Ping timeout: 480 seconds)
[18:37] * Vacuum__ (~Vacuum@i59F79744.versanet.de) Quit (Ping timeout: 480 seconds)
[18:37] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[18:45] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[18:46] * Hemanth (~Hemanth@117.192.226.244) Quit (Ping timeout: 480 seconds)
[18:46] * Hemanth (~Hemanth@117.192.228.41) has joined #ceph
[18:47] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[18:47] * ChanServ sets mode +o elder
[18:53] * linuxkidd (~linuxkidd@63.79.89.17) Quit (Ping timeout: 480 seconds)
[18:53] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[18:53] <Mika_c> rlrevell, fme 1. check the radosgw daemon 2. ping the domain name (ex: ping *.s3.amazonaws.com) 3. check the secret key; if the secret key contains + / \ then delete it and create a new one.
[18:55] <rlrevell> Mika_c: are you saying i should be using $MY_HOSTNAME.s3.amazonaws.com as the hostname?
[18:55] <Mika_c> $MY_HOSTNAME = bucket name
[18:57] <Mika_c> even if the bucket does not exist you should be able to ping that domain and get a response
[18:57] <rlrevell> Mika_c: ok, maybe the problem is that i do not know how S3 works. http://ceph.com/docs/master/radosgw/config/#test-s3-access says "Replace {hostname} with the hostname of the host where you have configured the gateway service i.e, the gateway host." i take that to mean, it should be the hostname i normally access the host by, not whatever.s3.amazonaws.com. and, i have not created any buckets or anything yet, i am just trying to run that s3test.py script
[18:58] <Mika_c> did you already create the rgw?
[18:58] <rlrevell> yes, and the daemon is running on the host in question.
[18:58] * johanni (~johanni@173.226.103.101) has joined #ceph
[18:58] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:59] * bitserker (~toni@188.87.126.203) Quit (Quit: Leaving.)
[18:59] * johanni_ (~johanni@173.226.103.101) has joined #ceph
[18:59] * bitserker (~toni@188.87.126.203) has joined #ceph
[18:59] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:00] <rlrevell> if i put $hostname.s3.amazonaws.com as the hostname in that script, it fails, because that resolves to 54.231.10.1, not to my radosgw node.
[19:01] <Mika_c> that causes the problem. $hostname.s3.amazonaws.com should = the radosgw ip
[19:01] * Hemanth (~Hemanth@117.192.228.41) Quit (Ping timeout: 480 seconds)
[19:01] * Concubidated (~Adium@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[19:01] <rlrevell> Mika_c: but how can that be? I do not control DNS for s3.amazonaws.com
[19:02] * bandrus (~brian@76-14-123-148.rk.wavecable.com) Quit (Quit: Leaving.)
[19:02] * Hemanth (~Hemanth@117.192.246.189) has joined #ceph
[19:02] <Mika_c> You can try writing $hostname.s3.amazonaws.com in /etc/hosts or just creating a local dns entry
[19:02] <rlrevell> oh, so i make a fake DNS entry. trying.
[19:03] * bandrus (~brian@76-14-123-148.rk.wavecable.com) has joined #ceph
[19:03] * bandrus (~brian@76-14-123-148.rk.wavecable.com) Quit ()
[19:04] <rlrevell> same error 405 Method Not Allowed
[19:04] * bitserker (~toni@188.87.126.203) Quit ()
[19:04] * bitserker (~toni@188.87.126.203) has joined #ceph
[19:04] <Mika_c> What result do you get when you ping $hostname.s3.amazonaws.com?
[19:05] <Mika_c> and don't forget to check /etc/resolv.conf
[19:05] <rlrevell> Mika_c: resolves to the private IP of my radosgw node.
[19:06] * AluAlu (~Arcturus@9S0AAAKEI.tor-irc.dnsbl.oftc.net) Quit ()
[19:06] * totalwormage (~CoMa@marylou.nos-oignons.net) has joined #ceph
[19:06] <Mika_c> ok that's good. if you still get 405, you may need to check the apache (httpd) log
[19:06] * SkyEye (~gaveen@175.157.57.148) Quit (Remote host closed the connection)
[19:07] <tw0fish> rlrevell: i ran into a problem with the gateway where the version of Apache being installed from the Ceph repo doesn't support unix sockets
[19:07] * fdmanana__ (~fdmanana@bl13-138-253.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[19:07] <tw0fish> yet, the radosgw daemon is running with a socket.
[19:07] <rlrevell> Mika_c: it is not using apache, this is version 0.94.1 where it uses the embedded civetweb server
[19:08] <tw0fish> it sounds like you probably got past that, however
[19:08] <rlrevell> literally all i did was "ceph-deploy rgw create $HOSTNAME", created users, and tried to test it.
[19:08] <tw0fish> rlrevell: you created the rgw.conf with apache?
[19:08] <tw0fish> in conf.d
[19:08] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Quit: Leaving...)
[19:09] <Mika_c> oh... because i created the rgw with apache and mod_fastcgi
[19:09] <alfredodeza> tw0fish: that command doesn't use apache
[19:09] <alfredodeza> nope
[19:09] <rlrevell> nope. there is no apache at all. i just followed http://ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance
[19:09] <alfredodeza> ceph-deploy rgw create uses civetweb
[19:09] <tw0fish> gotcha
[19:09] <tw0fish> Mika_c: is your gateway working then?
[19:10] * linuxkidd (~linuxkidd@63.79.89.17) has joined #ceph
[19:11] <tw0fish> Mika_c: i am trying to do something like what you are talking about. Creating my own gateway to interface with my storage cluster using apache/radosgw. i.e. http://ceph.com/docs/master/radosgw/
[19:11] <rlrevell> i am however missing several of the pools listed here http://ceph.com/docs/master/radosgw/config/#create-pools. the docs say that "Ceph Object Gateway will create pools automatically" but maybe something broke?
[19:11] <Mika_c> Humm...because i use s3cmd to test. when the rgw is working, i use the command "radosgw-admin user create" to create a new user
[19:12] <tw0fish> my problem stems from here, specifically.. http://docs.ceph.com/docs/master/radosgw/config/#add-a-gateway-configuration-to-ceph
[19:13] <Mika_c> ime I need to create these pools:
[19:13] <Mika_c> .rgw
[19:13] <Mika_c> .rgw.root
[19:13] <Mika_c> .rgw.control
[19:13] <Mika_c> .rgw.gc
[19:13] <Mika_c> .rgw.buckets
[19:13] <Mika_c> .rgw.buckets.index
[19:13] <Mika_c> .log
[19:13] <Mika_c> .intent-log
[19:13] <Mika_c> .usage
[19:13] <Mika_c> .users
[19:13] <Mika_c> .users.email
[19:13] <Mika_c> .users.swift
[19:13] <Mika_c> .users.uid
[19:13] <tw0fish> RHEL 7 does not come with Apache 2.4.9, yet the documentation thinks it does. :-/
[19:14] <rlrevell> all that was created by the ceph-deploy command were .rgw.root .rgw.control .rgw .rgw.gc .users.uid .users
[19:15] <Mika_c> NP, create them manually like this: "ceph osd pool create .rgw 128 128"
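Mika_c's list as one loop; the two 128s are pg_num/pgp_num and should be sized for the cluster rather than copied blindly:

    for p in .rgw .rgw.root .rgw.control .rgw.gc .rgw.buckets .rgw.buckets.index \
             .log .intent-log .usage .users .users.email .users.swift .users.uid; do
        ceph osd pool create "$p" 128 128
    done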
[19:17] * nsoffer (~nsoffer@84.94.199.6.cable.012.net.il) has joined #ceph
[19:17] * reed (~reed@198.8.80.61) Quit (Ping timeout: 480 seconds)
[19:23] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:24] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[19:25] * mildan_ (~mildan@206.172.0.204) Quit (Remote host closed the connection)
[19:26] * mgolub (~Mikolaj@91.225.200.88) has joined #ceph
[19:27] * vbellur (~vijay@122.171.123.165) has joined #ceph
[19:29] <rlrevell> creating those pools manually did not help. still error 405
[19:30] * nsoffer (~nsoffer@84.94.199.6.cable.012.net.il) Quit (Ping timeout: 480 seconds)
[19:31] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:31] <Mika_c> I'm not familiar with civetweb. Does civetweb have a default site setting?
[19:31] <rlrevell> Mika_c: it appears to be completely undocumented.
[19:32] <rlrevell> Mika_c: well, not completely. just not in the ceph docs. looking
[19:32] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[19:35] * imjpr (~imjpr@164.111.200.170) Quit (Ping timeout: 480 seconds)
[19:35] * itsjpr (~imjpr@164.111.200.170) Quit (Ping timeout: 480 seconds)
[19:36] * totalwormage (~CoMa@8Q4AAA8AL.tor-irc.dnsbl.oftc.net) Quit ()
[19:36] * nih (~Dysgalt@tor-exit.crashme.org) has joined #ceph
[19:36] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[19:37] <rlrevell> Mika_c: i can't tell. there's no civetweb.conf file present but it may be called something else. i'm not finding a lot of documentation on how it works.
[19:37] * nils__ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[19:39] <Mika_c> http://cephnotes.ksperis.com/blog/2015/01/27/replace-apache-by-civetweb-on-the-radosgw
[19:40] <Mika_c> and one issue http://www.spinics.net/lists/ceph-users/msg17439.html
[19:41] <rlrevell> Mika_c: there's other stuff that's screwy. radosgw-admin user stats --uid=testuser just hangs forever for example
[19:41] * midnight_ (~midnightr@216.113.160.71) has joined #ceph
[19:43] <dephcon> can anyone explain to me what the err and clog logs are compared to the standard log and mon log?
[19:43] <Mika_c> Humm....
[19:43] <dephcon> yeah, lol
[19:44] * bandrus (~brian@213.sub-70-214-36.myvzw.com) has joined #ceph
[19:44] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[19:46] <Mika_c> I have no idea at all. Any log for civetweb?
[19:47] * midnightrunner (~midnightr@c-67-174-241-112.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:57] * jbautista- (~wushudoin@209.132.181.86) has joined #ceph
[20:03] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[20:04] * fdmanana__ (~fdmanana@bl13-138-253.dsl.telepac.pt) has joined #ceph
[20:05] * wushudoin_ (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[20:05] <rlrevell> Mika_c: there's a log for the radosgw process but it's pretty cryptic http://paste.openstack.org/show/260041/
[20:06] * jbautista- (~wushudoin@209.132.181.86) Quit (Ping timeout: 480 seconds)
[20:06] * nih (~Dysgalt@9S0AAAKJA.tor-irc.dnsbl.oftc.net) Quit ()
[20:06] * kalmisto (~measter@chulak.enn.lu) has joined #ceph
[20:06] * jbautista- (~wushudoin@38.140.108.2) has joined #ceph
[20:12] <Mika_c> This is too cryptic.
[20:12] <rlrevell> Mika_c: and none of the errors in it line up with my failing tests.
[20:14] <rlrevell> interestingly i did just get "Initialization timeout, failed to initialize" in the log 5 minutes after restarting... but I'm never sure if the ceph stuff restarts correctly, as it doesn't play nice with upstart, complaining about unknown parameters and whatnot when just doing service radosgw restart. i usually end up rebooting the node
[20:17] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[20:19] <Mika_c> What OS and version are you using for ceph?
[20:21] * mildan (~mildan@206.172.0.204) has joined #ceph
[20:22] <rlrevell> Mika_c: this is what gets logged with just a reboot, no attempt to access radosgw http://paste.openstack.org/show/260168/
[20:22] * bitserker (~toni@188.87.126.203) Quit (Ping timeout: 480 seconds)
[20:23] <rlrevell> Mika_c: ubuntu 14.04
[20:23] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[20:23] * sleinen1 (~Adium@2001:620:0:82::102) Quit (Read error: Connection reset by peer)
[20:24] <rlrevell> and ceph 0.94.1
[20:24] * fdmanana__ (~fdmanana@bl13-138-253.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[20:25] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) has joined #ceph
[20:26] <tw0fish> okay, i got my problem solved.. the start up script is apparently screwed up. i can get ceph-radosgw to start on a port rather than a socket by manually starting the daemon..
[20:26] <Mika_c> It's odd. "172.16.7.51:6789/0 pipe(0x7f0910000c00 sd=9 :0 s=1 pgs=0 cs=0 l=1 c=0x7f0910004ea0).fault"
[20:26] <tw0fish> no i just have to figure out why the startup script can't do it.
[20:26] <Mika_c> 172.16.7.51 is the mon ip address, right?
[20:26] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[20:26] <tw0fish> s/no/now
[20:27] <rlrevell> Mika_c: those are harmless, they just mean the cluster is not at quorum yet. the ones that i think are the problem are the ERROR lines
[20:27] * georgem (~Adium@fwnat.oicr.on.ca) has left #ceph
[20:30] * ChrisNBlum (~ChrisNBlu@dhcp-ip-128.dorf.rwth-aachen.de) has joined #ceph
[20:32] <Mika_c> Do you mean "ERROR can't get key: ret=-2"? Looks harmless. http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040986.html
[20:33] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[20:36] * kalmisto (~measter@9S0AAAKLD.tor-irc.dnsbl.oftc.net) Quit ()
[20:36] * CorneliousJD|AtWork (~Ralth@185.77.129.11) has joined #ceph
[20:36] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[20:36] * Mika_c (~Mk@118-169-254-185.dynamic.hinet.net) Quit (Quit: Konversation terminated!)
[20:39] * mildan (~mildan@206.172.0.204) Quit (Quit: Leaving...)
[20:45] * madkiss (~madkiss@2001:6f8:12c3:f00f:a054:6b8a:1c0a:6f4c) has joined #ceph
[20:45] * vbellur (~vijay@122.171.123.165) Quit (Ping timeout: 480 seconds)
[20:49] * madkiss (~madkiss@2001:6f8:12c3:f00f:a054:6b8a:1c0a:6f4c) Quit ()
[20:51] * madkiss (~madkiss@2001:6f8:12c3:f00f:eca5:a55b:7447:a800) has joined #ceph
[20:51] <rlrevell> pretty close to out of ideas... does anyone out there have a working radosgw created using the new ceph-deploy method?
[20:55] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[20:55] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[20:57] <tw0fish> rlrevell: are you using hammer or giant?
[20:57] <rlrevell> hammer
[20:57] <rlrevell> the ceph-deploy method was only added in this version
[20:58] <tw0fish> k
[20:58] <tw0fish> i know just to do an install i had to specify '--release'
[20:58] <tw0fish> ceph-deploy install --release giant
[20:58] <tw0fish> that is why i was asking what ver you had, maybe you need to specify that when using the command. idk
[20:59] * fdmanana__ (~fdmanana@bl13-138-253.dsl.telepac.pt) has joined #ceph
[20:59] <tw0fish> i am running giant , myself.
[20:59] * linjan (~linjan@80.179.241.26) has joined #ceph
[20:59] <rlrevell> nah the docs say it should just be a matter of "ceph-deploy rgw create $hostname", create a user, bam, s3 should work. this is why i can't see how it can not work, because there's literally nothing to screw up
[21:00] <tw0fish> right.
[21:00] <tw0fish> i do see there isn't much to it looking at the docs
[21:00] <tw0fish> umm
[21:00] <tw0fish> what does your ceph.conf look like in regards to the gateway?
[21:01] <tw0fish> i added some things to mine to make logging more verbose, but not sure it applies to what you are doing.
[21:01] <tw0fish> debug ms = 1
[21:01] <tw0fish> debug rgw = 20
[21:01] <tw0fish> added those 2 things to the [global] and restarted the radosgw daemon
[21:01] <rlrevell> tw0fish: this is all it put in there http://paste.openstack.org/show/260212/
[21:02] <rlrevell> and it's apparently being ignored, because none of those files even exist
[21:02] <tw0fish> okay, add those 2 'debug' lines i pasted above under [global] and restart the daemon
[21:02] <tw0fish> then look at the log to see if there is any more insight as to what is going wrong
[21:02] <tw0fish> you should see it pinging all your monitor hosts and other stuff
[21:03] <tw0fish> it=radosgw daemon
[21:04] <rlrevell> tw0fish: yeah, its logging a lot more to /var/log/radosgw/ceph-client.rgw.vbo-ceph-admin.log
[21:04] <tw0fish> cool, try doing what you need to do and just look at the logs, that is all i can think to do.
[21:04] <rlrevell> not sure why it doesn't use the log file specified in ceph.conf
[21:04] <tw0fish> hopefully that is of some help
[21:05] <tw0fish> i still haven't completely got my problem fixed yet either using the apache front end
[21:05] <tw0fish> heh
[21:05] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[21:05] <rlrevell> error 405, nothing logged
[21:05] <tw0fish> :(
[21:05] <rlrevell> yep
[21:06] <tw0fish> clearly the object gateway is the worst part of ceph
[21:06] <tw0fish> lol
[21:06] * CorneliousJD|AtWork (~Ralth@5NZAAC7X5.tor-irc.dnsbl.oftc.net) Quit ()
[21:06] <tw0fish> the other stuff worked with little problems
[21:06] * KeeperOfTheSoul (~Jaska@195.40.181.35) has joined #ceph
[21:06] <rlrevell> yeah i never had anywhere near this much trouble with any other feature
[21:06] <rlrevell> i think i'll rip it out and try again tomorrow, this is giving me a headache
[21:07] <rlrevell> thx for the help
[21:07] <tw0fish> np -- sorry i couldn't be of more help.
[21:09] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[21:10] * johanni (~johanni@173.226.103.101) Quit (Remote host closed the connection)
[21:13] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) has joined #ceph
[21:15] <rlrevell> tw0fish: hmm, isn't it a big problem that /etc/ceph/keyring.radosgw.gateway doesn't exist?
[21:15] <tw0fish> yes!
[21:15] <tw0fish> heh
[21:15] <tw0fish> type 'ceph auth list'
[21:15] <rlrevell> well, the docs never said anything about it!
[21:15] <rlrevell> where do i get it?
[21:15] <tw0fish> see what keys you have
[21:16] <tw0fish> tell me if you have a radosgw key or only see keys for OSDs
[21:16] <rlrevell> the ceph-deploy command just said "run gatherkeys" which i did
[21:16] <tw0fish> you already have a storage cluster setup?
[21:16] <rlrevell> in /etc/ceph i just have ceph.client.admin.keyring and ceph.client.glance.keyring
[21:16] <rlrevell> yep, been working fine for months
[21:16] <tw0fish> type 'ceph auth list'
[21:17] <rlrevell> prints tons of stuff
[21:17] <tw0fish> do you see a key for client.radosgw.gateway?
[21:17] <rlrevell> yeah
[21:17] <rlrevell> client.bootstrap-rgw
[21:17] <rlrevell> key: AQCKLyVVHpbUHhAAoMAuarfzW/i0Nx+7EaZylg==
[21:17] <rlrevell> caps: [mon] allow profile bootstrap-rgw
[21:17] <tw0fish> okay so you do have one added then
[21:17] <rlrevell> oh wait
[21:17] <tw0fish> no that is bootstrap
[21:17] <rlrevell> client.rgw.vbo-ceph-admin
[21:17] <rlrevell> key: AQDr/W5VVNriORAACeD+pb8J6BTLgRhMMVyGHQ==
[21:17] <rlrevell> caps: [mon] allow rw
[21:17] <rlrevell> caps: [osd] allow rwx
[21:17] <tw0fish> you need one for client.radosgw.gateway
[21:17] * johanni_ (~johanni@173.226.103.101) Quit (Ping timeout: 480 seconds)
[21:18] <tw0fish> http://docs.ceph.com/docs/master/radosgw/config/#add-a-gateway-configuration-to-ceph
[21:18] * midnight_ (~midnightr@216.113.160.71) Quit (Remote host closed the connection)
[21:18] <tw0fish> go there and follow
[21:18] <tw0fish> no
[21:18] <tw0fish> go here i mean , http://docs.ceph.com/docs/master/radosgw/config/#create-a-user-and-keyring
[21:18] <tw0fish> after doing that, you should have what you need
[21:19] <tw0fish> *i think
[21:19] <tw0fish> heh
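The create-a-user-and-keyring page tw0fish links comes down to roughly this sequence (paths and entity name as in that guide):

    # generate a keyring and a key for the gateway user
    ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
    chmod +r /etc/ceph/ceph.client.radosgw.keyring
    ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
    # grant the caps radosgw needs, then register the key with the cluster
    ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' \
        /etc/ceph/ceph.client.radosgw.keyring
    ceph auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring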
[21:19] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[21:20] * wushudoin_ (~wushudoin@209.132.181.86) has joined #ceph
[21:20] <rlrevell> same error.
[21:20] <tw0fish> meh
[21:20] <rlrevell> yeah
[21:21] <tw0fish> you see the key now in 'ceph auth list' at least?
[21:21] <rlrevell> yes
[21:21] <tw0fish> cool
[21:21] <rlrevell> let me reboot it, i suspect restart is broken
[21:21] <tw0fish> i am here reading about having a 'federated' gateway and that is almost lols
[21:22] <tw0fish> i am confident i will get this working, but what a mess
[21:22] <tw0fish> then actually setting up 2 behind a load balancer
[21:22] <tw0fish> that should be fun heh
[21:23] <tw0fish> i have my rgwdaemon up now
[21:23] <tw0fish> but still failing the tests
[21:23] <tw0fish> swift -A http://localhost/auth/1.0 -U testuser:swift
[21:24] <tw0fish> i am trying to use swift, though. not s3
[21:24] <tw0fish> Auth GET failed: http://localhost/auth/1.0 503 Service Unavailable
[21:24] <tw0fish> heh
[21:24] <rlrevell> i haven't even tried the swift tests yet, installing some python thing from git trashed my python environment again
[21:24] <tw0fish> :(
[21:25] * Concubidated (~Adium@irvine-dc.dreamhost.com) has joined #ceph
[21:25] <tw0fish> yeah that failed for me too because it doesn't use a proxy
[21:25] <tw0fish> 'easy_install' doesn't know how to use a proxy properly so i can't even connect to the site to get what i need
[21:25] <tw0fish> lol
[21:26] <tw0fish> i mean there are ways to configure it to use a proxy, but w/e it is doing it doesn't agree with the proxies we have here.
[21:26] * nsoffer (~nsoffer@bzq-79-177-255-248.red.bezeqint.net) has joined #ceph
[21:27] <tw0fish> rlrevell: what kind of disks are you using for your OSDs and do you have SSDs anywhere?
[21:27] * jbautista- (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[21:27] * LeaChim (~LeaChim@host86-163-124-72.range86-163.btcentralplus.com) has joined #ceph
[21:27] <rlrevell> old recycled stuff, i think they're 15K rpm some 600GB and some 300GB
[21:28] <tw0fish> i have 7200rpm SATA 6Gb/s disks for the OSDs and am trying to get SSDs for the journals at least
[21:28] <tw0fish> 15k -- nice
[21:28] <rlrevell> the idea is to use EOLed hardware as a ceph cluster for backups and stuff
[21:28] <tw0fish> cool
[21:28] <tw0fish> we are supposed to be using this for OpenStack
[21:28] <tw0fish> when it is ready
[21:28] <rlrevell> but, S3 is the easiest way to plug it into our front end backup interface
[21:28] * wushudoin_ (~wushudoin@209.132.181.86) Quit (Ping timeout: 480 seconds)
[21:28] <tw0fish> i see
[21:29] <rlrevell> yeah, that was the idea here too but we have a requirement to use containers and none of the openstack container drivers has the features we need
[21:29] <tw0fish> hrmm
[21:29] <rlrevell> they all either expect you to be able to rewrite your apps to be "cloudy", or just aren't ready for production use, or both
[21:29] <tw0fish> i will have to note that for when we try using containers
[21:30] <tw0fish> i know people want to use docker
[21:30] <rlrevell> doesn't fit our use case. we need it to appear like a VPS from the customer POV and can't require them to learn a bunch of new tech
[21:31] <tw0fish> ahh
[21:31] <tw0fish> i see what you mean
[21:31] <rlrevell> LXD should fit when it's production ready
[21:32] <tw0fish> cool, that is new to me. i will have to check that out.
[21:32] <tw0fish> i saw some really cool stuff done with coreos and fleet
[21:32] <tw0fish> using containers
[21:33] <tw0fish> we are very far away from the cloud here though
[21:33] <tw0fish> heh
[21:34] <tw0fish> my guess is mgmt wants to evolve things towards that
[21:35] * fdmanana__ (~fdmanana@bl13-138-253.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[21:36] * KeeperOfTheSoul (~Jaska@5NZAAC7ZM.tor-irc.dnsbl.oftc.net) Quit ()
[21:36] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[21:37] * wushudoin_ (~wushudoin@38.140.108.2) has joined #ceph
[21:41] * mongo (~gdahlman@voyage.voipnw.net) has joined #ceph
[21:44] <mongo> Hi, is it safe to restrict a user to an object_prefix in ceph-authtool for rbd devices?
[21:45] * midnightrunner (~midnightr@216.113.160.71) has joined #ceph
[21:46] <mongo> Example: ceph-authtool -n client.foo --cap mds 'allow' --cap osd 'allow rw pool=data object_prefix=myvol' --cap mon 'allow r' keyring
[21:47] <mongo> or would the block_name_prefix be more correct/work
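mongo's question went unanswered in-channel, but for what it's worth: RBD data objects are not named after the image, so a cap on object_prefix=myvol would match nothing. A format-2 image's objects share the block_name_prefix that rbd info reports, so a sketch would look more like this (image id made up, and note the client also needs access to the image's header object, so prefix-only caps are fiddly in practice):

    rbd info data/myvol | grep block_name_prefix
    #   block_name_prefix: rbd_data.1234567890ab
    ceph auth get-or-create client.foo mon 'allow r' \
        osd 'allow rw pool=data object_prefix rbd_data.1234567890ab'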
[21:48] * itsjpr (~imjpr@138.26.125.8) has joined #ceph
[21:49] * imjpr (~imjpr@thing2.it.uab.edu) has joined #ceph
[21:49] * bandrus1 (~brian@213.sub-70-214-36.myvzw.com) has joined #ceph
[21:52] * bandrus (~brian@213.sub-70-214-36.myvzw.com) Quit (Ping timeout: 480 seconds)
[21:56] <rlrevell> tw0fish: i am going to kick myself if it ends up being as simple as http://ceph.com/docs/master/radosgw/troubleshooting/#methodnotallowed
[21:56] <rlrevell> which for some reason my earlier google searches and reading of ceph docs did not turn up
[22:01] <tw0fish> i found an easy way to do this
[22:01] <tw0fish> 2015-06-03 16:01:14.448900 7f57fdfab700 2 req 128:0.000051:swift-auth:GET /auth/1.0:swift_auth_get:verifying op params
[22:01] <tw0fish> 2015-06-03 16:01:14.448901 7f57fdfab700 2 req 128:0.000052:swift-auth:GET /auth/1.0:swift_auth_get:executing
[22:01] <tw0fish> 2015-06-03 16:01:14.448912 7f57fdfab700 0 NOTICE: RGW_SWIFT_Auth_Get::execute(): bad swift key
[22:01] <tw0fish> turns out i am using a bad key
[22:02] <tw0fish> anyway, start up the daemon with -d
[22:02] <tw0fish> that is what i did
[22:02] <tw0fish> for example
[22:02] <tw0fish> /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway -d
[22:02] <tw0fish> seems to be as good as doing an strace
[22:02] <tw0fish> just runs the daemon w/o putting it in the b/g
[22:03] * jbautista- (~wushudoin@209.132.181.86) has joined #ceph
[22:03] <tw0fish> maybe you can see more of what is going on that way
[22:06] <tw0fish> rlrevell: what OS are you using?
[22:06] * loft (~hyst@manning2.torservers.net) has joined #ceph
[22:06] <rlrevell> tw0fish: yeah, that logs nothing that coincides with the 405 error
[22:06] <rlrevell> ubuntu 14.04
[22:06] <tw0fish> k
[22:07] <rlrevell> it basically prints the same stuff as it was already logging in debug mode
[22:07] <tw0fish> it sounds like you aren't even reaching the gateway
[22:07] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:08] <tw0fish> since nothing is really logged when you attempt to connect to it..
[22:09] <rlrevell> nope, i sure am not... tcpdump on port 7480 prints nothing when I run the s3test.py
[22:09] <rlrevell> i wonder if that script expects it to be on port 80
[22:10] * wushudoin_ (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[22:11] * jbautista- (~wushudoin@209.132.181.86) Quit (Ping timeout: 480 seconds)
[22:12] * jbautista- (~wushudoin@38.140.108.2) has joined #ceph
[22:12] <rlrevell> aaaand... that was the problem
[22:13] <rlrevell> why the docs provide a test script that expects port 80 to test a daemon that listens on port 7480... I have no idea. and why it logs HTTP error 405 instead of Connection Refused like a normal app I am also curious about.
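So the fix is either to point the test script at 7480 (boto's connect_s3 accepts host, port and is_secure arguments) or to move civetweb to port 80 in the rgw client section of ceph.conf and restart radosgw; the section name below is the one from rlrevell's log, yours will differ:

    [client.rgw.vbo-ceph-admin]
    rgw frontends = civetweb port=80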
[22:15] <tw0fish> lol
[22:15] <tw0fish> so it works?
[22:16] <tw0fish> i am almost there with mine i think
[22:16] <tw0fish> i have to create a systemd script for RHEL 7 because the one it comes with is riddled with bugs
[22:16] <tw0fish> :(
[22:16] * Kupo1 (~tyler.wil@23.111.254.159) has left #ceph
[22:17] <tw0fish> the script won't start the radosgw on a port, only a unix socket. and the ver. of apache that comes with rhel 7 doesn't do sockets, only ports. it is like peanut butter with no jelly. lol
[22:19] <tw0fish> i should file a bug report for them to at least fix the documentation
[22:19] <tw0fish> we both should, really
[22:19] <tw0fish> the redhat documentation is no better, maybe even worse
[22:20] <tw0fish> referring to the documentation on redhat.com in comparison to the documentation on ceph.com
[22:20] <tw0fish> nice thing is you have ubuntu and that comes with apache2
[22:20] <rlrevell> yep, i think it was just a case of a very misleading error message and documentation that could use improvement
[22:21] <tw0fish> but you aren't having to deal with it anyway... ;)
[22:23] <rlrevell> are you able to upgrade to 0.94.1? all that apache stuff seems like a huge pain compared to the new way
[22:25] <tw0fish> that is a version of ceph?
[22:25] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[22:25] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Quit: Leaving)
[22:25] <tw0fish> or apache?
[22:25] * bandrus1 (~brian@213.sub-70-214-36.myvzw.com) Quit (Ping timeout: 480 seconds)
[22:25] <tw0fish> this is okay, i got it working as far as the gateway being up; just a matter of straightening out the permissions
[22:26] <rlrevell> no, ceph 0.94
[22:26] <tw0fish> i am on 0.87
[22:26] <tw0fish> i thought that was the latest?
[22:26] * ismell_ (~ismell@host-64-17-89-216.beyondbb.com) has joined #ceph
[22:26] <rlrevell> nope http://docs.ceph.com/docs/master/release-notes/#v0-94-hammer
[22:26] <rlrevell> major RGW improvements
[22:27] <tw0fish> wow
[22:27] <tw0fish> i didn't even realize that
[22:27] * sleinen1 (~Adium@2001:620:0:82::104) has joined #ceph
[22:27] <tw0fish> someone handed this project over to me and said we are using giant,etc..
[22:27] <tw0fish> i am going to see why we don't get on the latest
[22:27] <tw0fish> i didn't realize giant wasn't the latest
[22:27] <tw0fish> heh
[22:27] <tw0fish> would have helped me to look at that
[22:28] <tw0fish> i am wondering if there are RPMs for hammer..
[22:28] <tw0fish> i would imagine there are
[22:28] <tw0fish> good suggestion!
[22:28] <rlrevell> it's been out a while so should be
[22:28] <tw0fish> i am going to look at it and see why i was being asked to use giant
[22:28] <tw0fish> the last thing i want to have to do is upgrade it AFTER i have it all set up and running
[22:29] <tw0fish> rather use the latest and not have to worry about it for a while
[22:30] <tw0fish> i am thinking there is a good chance it comes with a better systemd script that may work right
[22:33] * ismell (~ismell@host-64-17-88-159.beyondbb.com) Quit (Ping timeout: 480 seconds)
[22:33] <seapasulli> a lot of people submitted issues when upgrading from giant to hammer when it came out so that may be one reason you're on giant
[22:33] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:33] <seapasulli> anyone encrypt ceph traffic over the wire? Or know any path to get this done?
[22:34] <tw0fish> seapasulli: seems to me it is already encrypted from what i have read?
[22:34] <tw0fish> it uses keys and a system like Kerberos
[22:35] <tw0fish> that is just based on what i've seen going through the setup documentation and whatnot
[22:36] * loft (~hyst@5NZAAC722.tor-irc.dnsbl.oftc.net) Quit ()
[22:36] * sese_ (~neobenedi@politkovskaja.torservers.net) has joined #ceph
[22:36] <tw0fish> Important
[22:36] <tw0fish> The cephx protocol does not address data encryption in transport (e.g., SSL/TLS) or encryption at rest.
[22:36] <tw0fish> nevermind
[22:36] <seapasulli> I thought the same thing. Apparently that is not the case.
[22:36] <tw0fish> just read that now at https://ceph.com/docs/v0.79/rados/operations/authentication/
[22:37] <seapasulli> I mean really random 4m chunks of data going back and forth from a million different objects being barfed to a radosgw may mean nothing; or, if it is all SSNs or something, it may be a big deal.
[22:37] * Hemanth (~Hemanth@117.192.246.189) Quit (Ping timeout: 480 seconds)
[22:38] <tw0fish> heh
[22:38] <tw0fish> yeah, i am still not even sure what goes across the wire at this point
[22:38] <tw0fish> for me it is whatever 'swift' traffic there is
[22:39] <tw0fish> which i would imagine is like some garbled binary code that is used for filesystem data
[22:39] <seapasulli> swift is just openstacks object storage thingy http://docs.openstack.org/developer/swift/
[22:39] <tw0fish> actually plain text of files stored on a virtual machine or something is probably not likely to be seen
[22:39] <tw0fish> not to mention you may use encryption on the filesystem
[22:40] <tw0fish> like if your openstack VM has LUKS or something
[22:40] <tw0fish> or bitlocker for Windows
[22:40] <tw0fish> you probably can just do crypto at the application layer
[22:40] <tw0fish> but idk for sure
[22:40] <tw0fish> depends on what you are using the gateway for i guess
[22:40] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[22:40] <tw0fish> if you are using RBD you probably would just do a LUKS filesystem or something
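For the RBD case tw0fish describes, the sketch is ordinary dm-crypt on the mapped device (pool/image names made up); the client then writes ciphertext, so the wire and the OSDs never see plaintext:

    rbd map data/myvol                         # exposes /dev/rbd/data/myvol
    cryptsetup luksFormat /dev/rbd/data/myvol  # one-time: writes the LUKS header
    cryptsetup luksOpen /dev/rbd/data/myvol myvol-crypt
    mkfs.xfs /dev/mapper/myvol-crypt
    mount /dev/mapper/myvol-crypt /mnt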
[22:41] * bandrus (~brian@172.sub-70-214-40.myvzw.com) has joined #ceph
[22:41] <seapasulli> I have disks encrypted but intercepted data in transit is the key here
[22:41] <tw0fish> well i know they have you setup the gateway with apache to do SSL
[22:42] <tw0fish> so i guess as long as you pass the traffic over https you are fine
[22:42] * sleinen1 (~Adium@2001:620:0:82::104) Quit (Ping timeout: 480 seconds)
[22:42] <seapasulli> That's traffic from the gateway out. I mean traffic from the gateway to the rest of the Ceph storage
[22:42] * nwf (~nwf@00018577.user.oftc.net) has joined #ceph
[22:42] <tw0fish> ohh i see what you mean
[22:42] * ChrisNBlum (~ChrisNBlu@dhcp-ip-128.dorf.rwth-aachen.de) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[22:42] <seapasulli> this means i can mitm and possibly grab data.
[22:42] <tw0fish> In Ceph v0.60 and later releases, Ceph supports dm-crypt on disk encryption. You may specify the --dmcrypt argument when preparing an OSD to tell ceph-deploy that you want to use encryption. You may also specify the --dmcrypt-key-dir argument to specify the location of dm-crypt encryption keys.
[22:42] <tw0fish> http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
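Per that page, the dm-crypt variant is just extra flags when preparing the OSD (host and disk made up):

    ceph-deploy osd prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys node1:sdb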
[22:43] <seapasulli> I have dmcrypt which was super buggy in firefly and giant. working in giant.
[22:43] <tw0fish> maybe that is what you would need
[22:43] <seapasulli> The issue is data in transit.
[22:43] <tw0fish> no surprise that looked like a PITA to set up
[22:43] <tw0fish> and what do you do if the keys are hosed?
[22:43] <tw0fish> you lose all your data
[22:43] <tw0fish> heh
[22:43] <seapasulli> yup
[22:43] <seapasulli> for that osd
[22:43] <tw0fish> i am not sure, then..
[22:44] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[22:44] <tw0fish> i am heading home for the day.. catch you guys later...
[22:45] * tw0fish (~tw0fish@UNIX4.ANDREW.CMU.EDU) Quit (Quit: leaving)
[22:45] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit ()
[22:45] <seapasulli> So say your object mysecret-ssn.txt has a pg of OSD 12, 75, and 128 and is stored in Ceph S3/Swift. You spawn a client and call to the gateway over ssl then the gateway calls to osd 12 for your object over plain text? How do I encrypt the traffic from radosgw to the host of osd12(I guess this would be all ceph client traffic at this point)?
[22:48] <seapasulli> Is there a way for me to use something like socat or some other ssl terminator to encapsulate all traffic on ceph osd ports to and from clients?
[22:48] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[22:50] <georgem> seapasulli: if you watch this presentation https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/storage-security-in-a-critical-enterprise-openstack-environment at 17:20 Sage says clearly that Ceph doesn't provide encryption of data in transit on the client or replication network
[22:50] <seapasulli> exactly so say I wanted to encrypt the wire. How would I go about that?
[22:51] <seapasulli> What would be the best direction to look at?
[22:51] <sage> right. you can use ssl to talk s3/swift to radosgw, but the intra-cluster traffic is not encrypted
[22:51] <sage> someone suggested ipsec as the best path forward...
[22:52] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[22:54] <seapasulli> indeed. I misunderstood the docs when I read this. Now I need to implement something :-/ I was looking at ipsec as a possibility. Thanks sage and georgem
[22:55] * mgolub (~Mikolaj@91.225.200.88) Quit (Quit: away)
[22:57] <georgem> seapasulli: as there are no built-in options right now in Ceph, you could use mitigation solutions in the meantime
[22:59] * johanni (~johanni@173.226.103.101) has joined #ceph
[23:00] * florz (nobody@2001:1a50:503c::2) Quit (Remote host closed the connection)
[23:01] * johanni_ (~johanni@173.226.103.101) has joined #ceph
[23:01] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[23:04] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[23:04] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[23:05] * florz (nobody@2001:1a50:503c::2) has joined #ceph
[23:06] <georgem> seapasulli: although it seems pretty doable to set up ipsec on all the ceph servers using these instructions: http://7u83.cauwersin.com/2014-04-06-creating-ipsec-transport-between-freebsd-and-linux
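The linked write-up is ipsec-tools based; between a pair of ceph nodes the transport-mode policy is only a few setkey lines per host (addresses made up; in practice the ESP keys would come from an IKE daemon such as racoon rather than static SAs):

    # /etc/ipsec-tools.conf on 192.168.0.10
    flush;
    spdflush;
    # require ESP in transport mode for all traffic to and from the peer
    spdadd 192.168.0.10 192.168.0.11 any -P out ipsec esp/transport//require;
    spdadd 192.168.0.11 192.168.0.10 any -P in  ipsec esp/transport//require;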
[23:06] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Remote host closed the connection)
[23:06] * sese_ (~neobenedi@5NZAAC74V.tor-irc.dnsbl.oftc.net) Quit ()
[23:06] * Curt` (~pakman__@chulak.enn.lu) has joined #ceph
[23:07] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[23:07] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[23:08] <seapasulli> thanks georgem. I was looking at ipsec and possibly portsec on the switch. It is technically a trusted network with only ceph data nodes and a gateway on them but if someone were to break into my DC, unplug a ceph node and plug in one of their devices that has the correct tagged vlan configured they could mitm the gateway and the rest of the ceph nodes.
[23:09] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[23:09] <seapasulli> At this point I know it is all about layered security and marginal returns but when carrying sensitive information a stopgap at every layer is probably a good idea (depending on the sensitivity of your data I guess)
[23:10] <seapasulli> georgem: ubuntu has a nice guide with pictures too ^_^ haha. still, I was looking at ipsec or vpn-ing the whole thing but was hoping I might stumble on a hidden or easier solution
[23:10] <lurbs> seapasulli: I tried IPsec on the Ceph networks, but at least with my test hardware wasn't able to get anywhere close to the 10 Gb/s line rate.
[23:10] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[23:11] <georgem> seapasulli: the most secure way would probably be to have the clients upload encrypted data...
[23:11] <seapasulli> indeed that would be the best way at this point.
[23:13] <seapasulli> ah thanks lurbs same. I am currently already having performance issues with standard unencrypted traffic as it is. As we fill up the cluster the speed seems to drop from near 10G to maybe 1-2G which is my first task. That said performance will need to come later depending.
[23:14] <seapasulli> For data already stored in the cluster though, ipsec seems like a simple solution for the time being. Even though that nutty write-up says it's insecure
[23:16] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:16] <seapasulli> https://www.altsci.com/ipsec/
[23:16] <georgem> seapasulli: can you provide more info? what's the load ratio in the cluster? 50% full, more? did the read throughput go down from ~10 Gb/s to 1-2 Gb/s ?
[23:16] <georgem> seapasulli: or just the write one?
[23:18] <seapasulli> our initial writes for our cluster were closer to 30Gbps (as fast as our 3 gateways at 10Gbps could write/push it seemed). We just tested by spawning a bunch of vms and writing to the cluster as fast as possible over S3 while watching our aggregate switch throughput.
[23:19] * jks (~jks@178.155.151.121) Quit (Ping timeout: 480 seconds)
[23:19] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[23:21] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[23:22] * derjohn_mob (~aj@tmo-113-20.customers.d1-online.com) has joined #ceph
[23:24] * linjan (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[23:24] * derjohn_mob (~aj@tmo-113-20.customers.d1-online.com) Quit (Read error: Connection reset by peer)
[23:25] * derjohn_mob (~aj@tmo-113-20.customers.d1-online.com) has joined #ceph
[23:28] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) Quit (Quit: Connection closed for inactivity)
[23:31] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:32] * sleinen1 (~Adium@2001:620:0:82::102) Quit (Read error: Connection reset by peer)
[23:32] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Ping timeout: 480 seconds)
[23:32] * nsoffer (~nsoffer@bzq-79-177-255-248.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[23:34] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[23:36] * Curt` (~pakman__@5NZAAC76U.tor-irc.dnsbl.oftc.net) Quit ()
[23:36] * yuastnav (~Coe|work@exit1.ipredator.se) has joined #ceph
[23:39] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[23:45] <seapasulli> is there a place where I can look up what all of the columns in a ceph osd log mean?
[23:46] * wushudoin_ (~wushudoin@209.132.181.86) has joined #ceph
[23:47] * dneary (~dneary@66.187.233.207) Quit (Ping timeout: 480 seconds)
[23:47] * imjustmatthew (~imjustmat@pool-74-110-227-240.rcmdva.fios.verizon.net) Quit (Read error: Connection reset by peer)
[23:53] * jbautista- (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[23:55] * wushudoin_ (~wushudoin@209.132.181.86) Quit (Ping timeout: 480 seconds)
[23:55] * wushudoin_ (~wushudoin@38.140.108.2) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.