#ceph IRC Log

IRC Log for 2016-09-13

Timestamps are in GMT/BST.

[0:03] <cetex> Anticimex: yeah. i know. been tweaking quite a bit.
[0:04] <cetex> Anticimex: Just noticed that we had quite old kernels so we're gonna upgrade tomorrow
[0:04] <cetex> I hope that will resolve some of our bad performance until we get the new node and can test jbod.
[0:09] * zigo (~quassel@182.54.233.6) Quit (Read error: Connection reset by peer)
[0:10] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:11] * zigo (~quassel@gplhost-3-pt.tunnel.tserv18.fra1.ipv6.he.net) has joined #ceph
[0:12] * zigo is now known as Guest200
[0:14] * Guest200 is now known as zigo_
[0:15] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) Quit (Remote host closed the connection)
[0:17] * ricin (~tritonx@635AAAKB4.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:27] * rendar (~I@host224-179-dynamic.49-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:37] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[0:38] * kuku (~kuku@119.93.91.136) has joined #ceph
[0:40] * danieagle (~Daniel@187.74.69.89) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[0:40] * Brochacho (~alberto@97.93.161.13) Quit (Remote host closed the connection)
[0:41] * Brochacho (~alberto@97.93.161.13) has joined #ceph
[0:41] * Skaag1 (~lunix@65.200.54.234) has joined #ceph
[0:41] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) has joined #ceph
[0:42] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:44] * Brochacho (~alberto@97.93.161.13) Quit (Remote host closed the connection)
[0:44] * Brochacho (~alberto@97.93.161.13) has joined #ceph
[0:44] * Skaag (~lunix@65.200.54.234) Quit (Ping timeout: 480 seconds)
[0:46] * Brochacho (~alberto@97.93.161.13) Quit (Remote host closed the connection)
[0:47] * ricin (~tritonx@635AAAKB4.tor-irc.dnsbl.oftc.net) Quit ()
[0:52] * Brochacho (~alberto@97.93.161.13) has joined #ceph
[0:55] * MrBy_ (~MrBy@85.115.23.42) has joined #ceph
[0:55] * MrBy (~MrBy@85.115.23.42) Quit (Read error: Connection reset by peer)
[1:04] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[1:04] <ShaunR> What SSD's are you guys using these days and seeing good performance out of?
[1:06] <wak-work> for sata sm863's are nice
[1:11] * kristen (~kristen@jfdmzpr01-ext.jf.intel.com) Quit (Quit: Leaving)
[1:14] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:20] * wushudoin (~wushudoin@2601:646:8200:c9f0:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[1:22] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:25] * valeech (~valeech@97.93.161.13) Quit (Quit: valeech)
[1:26] * Rosenbluth (~pakman__@108.61.122.224) has joined #ceph
[1:26] * valeech (~valeech@97.93.161.13) has joined #ceph
[1:33] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[1:33] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[1:38] * cholcombe (~chris@2001:67c:1562:8007::aac:40f1) Quit (Ping timeout: 480 seconds)
[1:39] * tsg (~tgohad@192.55.54.44) Quit (Remote host closed the connection)
[1:40] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[1:40] * MrBy_ (~MrBy@85.115.23.42) Quit (Read error: Connection reset by peer)
[1:43] * Jeffrey4l_ (~Jeffrey@119.251.221.27) Quit (Quit: Leaving)
[1:43] * Jeffrey4l (~Jeffrey@119.251.221.27) has joined #ceph
[1:50] * oms101_ (~oms101@p20030057EA025200C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:56] * Rosenbluth (~pakman__@108.61.122.224) Quit ()
[1:56] * Brochacho (~alberto@97.93.161.13) Quit (Quit: Brochacho)
[1:56] * valeech (~valeech@97.93.161.13) Quit (Quit: valeech)
[1:59] * oms101_ (~oms101@p20030057EA033B00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:59] * ChrisHolcombe (~chris@97.93.161.13) Quit (Remote host closed the connection)
[2:00] * xarses (~xarses@64.124.158.3) Quit (Ping timeout: 480 seconds)
[2:04] * Skaag1 (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[2:09] * xarses (~xarses@73.93.152.199) has joined #ceph
[2:21] * xarses (~xarses@73.93.152.199) Quit (Ping timeout: 480 seconds)
[2:23] * northrup (~northrup@75-146-11-137-Nashville.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[2:24] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Quit: Leaving.)
[2:24] * davidzlap (~Adium@2605:e000:1313:8003:5941:ef43:8487:29e2) has joined #ceph
[2:28] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[2:30] * davidzlap (~Adium@2605:e000:1313:8003:5941:ef43:8487:29e2) Quit (Quit: Leaving.)
[2:30] * davidzlap (~Adium@2605:e000:1313:8003:5941:ef43:8487:29e2) has joined #ceph
[2:34] * xarses (~xarses@73.93.152.207) has joined #ceph
[2:35] * davidzlap (~Adium@2605:e000:1313:8003:5941:ef43:8487:29e2) Quit ()
[2:35] * davidzlap (~Adium@2605:e000:1313:8003:5941:ef43:8487:29e2) has joined #ceph
[2:37] * MrBy_ (~MrBy@85.115.23.42) has joined #ceph
[2:37] * Jeffrey4l (~Jeffrey@119.251.221.27) Quit (Ping timeout: 480 seconds)
[2:39] * valeech (~valeech@71-83-169-82.static.azus.ca.charter.com) has joined #ceph
[2:42] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[2:42] * MrBy (~MrBy@85.115.23.42) Quit (Ping timeout: 480 seconds)
[2:45] * srk (~Siva@2605:6000:ed04:ce00:2c4e:c8a1:5b50:224b) has joined #ceph
[2:46] * MrBy_ (~MrBy@85.115.23.42) Quit (Ping timeout: 480 seconds)
[2:47] * Jeffrey4l (~Jeffrey@110.252.44.199) has joined #ceph
[2:50] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[2:52] * Racpatel (~Racpatel@2601:87:3:31e3::2433) Quit (Quit: Leaving)
[2:53] * linuxkidd (~linuxkidd@ip70-189-202-62.lv.lv.cox.net) Quit (Quit: Leaving)
[2:54] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[3:00] * MrBy (~MrBy@85.115.23.42) Quit (Ping timeout: 480 seconds)
[3:00] * xarses (~xarses@73.93.152.207) Quit (Ping timeout: 480 seconds)
[3:01] * srk (~Siva@2605:6000:ed04:ce00:2c4e:c8a1:5b50:224b) Quit (Ping timeout: 480 seconds)
[3:02] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[3:05] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[3:16] * wak-work (~wak-work@2620:15c:2c5:3:819e:99ae:cc13:282d) Quit (Remote host closed the connection)
[3:16] * wak-work (~wak-work@2620:15c:2c5:3:7c9e:3261:bdc9:bdc9) has joined #ceph
[3:17] * tsg (~tgohad@134.134.139.76) has joined #ceph
[3:17] * jfaj (~jan@p57983686.dip0.t-ipconnect.de) has joined #ceph
[3:18] * masber (~masber@129.94.15.152) has joined #ceph
[3:18] <masber> hi
[3:18] <masber> is it possible to deploy ceph on mesos?
[3:24] * jfaj__ (~jan@p4FC25CA9.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:24] * yanzheng (~zhyan@125.70.21.187) has joined #ceph
[3:25] * davidzlap (~Adium@2605:e000:1313:8003:5941:ef43:8487:29e2) Quit (Quit: Leaving.)
[3:28] * garphy`aw is now known as garphy
[3:29] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) has joined #ceph
[3:34] * valeech (~valeech@71-83-169-82.static.azus.ca.charter.com) Quit (Quit: valeech)
[3:36] * davidzlap (~Adium@2605:e000:1313:8003:2d51:bb9a:6c60:ebb) has joined #ceph
[3:38] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[3:38] * EinstCrazy (~EinstCraz@61.165.253.98) has joined #ceph
[3:41] * MrBy (~MrBy@85.115.23.42) Quit (Ping timeout: 480 seconds)
[3:41] * sebastian-w_ (~quassel@212.218.8.139) has joined #ceph
[3:41] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[3:42] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[3:43] * sebastian-w (~quassel@212.218.8.138) Quit (Ping timeout: 480 seconds)
[3:44] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[3:44] * sankarshan (~sankarsha@171.48.20.47) has joined #ceph
[3:44] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[3:57] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:58] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[3:58] * tsg (~tgohad@134.134.139.76) Quit (Remote host closed the connection)
[3:58] * tsg (~tgohad@134.134.139.76) has joined #ceph
[4:00] * MrBy (~MrBy@85.115.23.42) Quit (Read error: Connection reset by peer)
[4:00] * MrBy_ (~MrBy@85.115.23.42) has joined #ceph
[4:00] * brians (~brian@80.111.114.175) Quit (Read error: Connection reset by peer)
[4:01] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:03] * brians (~brian@80.111.114.175) has joined #ceph
[4:04] * kefu (~kefu@114.92.125.128) has joined #ceph
[4:09] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[4:09] * MrBy_ (~MrBy@85.115.23.42) Quit (Read error: Connection reset by peer)
[4:11] * davidzlap (~Adium@2605:e000:1313:8003:2d51:bb9a:6c60:ebb) Quit (Ping timeout: 480 seconds)
[4:12] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[4:13] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[4:14] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:17] * northrup (~northrup@173.14.101.193) has joined #ceph
[4:17] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[4:20] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[4:21] * northrup (~northrup@173.14.101.193) Quit ()
[4:23] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[4:24] * northrup (~northrup@173.14.101.193) has joined #ceph
[4:28] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[4:28] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:30] * mattbenjamin (~mbenjamin@121.244.54.198) Quit (Ping timeout: 480 seconds)
[4:32] * tsg (~tgohad@134.134.139.76) Quit (Remote host closed the connection)
[4:43] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[4:43] * MrBy_ (~MrBy@85.115.23.38) has joined #ceph
[4:44] * MrBy__ (~MrBy@85.115.23.42) has joined #ceph
[4:45] <SamYaple> masber: yes, possible. not a good idea
[4:45] <SamYaple> masber: you have to know too much about the hardware, it's the opposite of what you really want to use mesos for
[4:47] * squizzi (~squizzi@107.13.237.240) Quit (Quit: bye)
[4:49] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:50] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:51] * MrBy (~MrBy@85.115.23.42) Quit (Ping timeout: 480 seconds)
[4:52] * MrBy_ (~MrBy@85.115.23.38) Quit (Ping timeout: 480 seconds)
[4:53] * praveen (~praveen@122.172.66.43) has joined #ceph
[4:56] * tsg (~tgohad@134.134.139.82) has joined #ceph
[4:56] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[5:01] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[5:01] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[5:04] * chenmin (~chenmin@118.250.186.196) has joined #ceph
[5:07] * tsg (~tgohad@134.134.139.82) Quit (Remote host closed the connection)
[5:10] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[5:13] * jcsp (~jspray@121.244.54.198) Quit (Ping timeout: 480 seconds)
[5:16] * EinstCrazy (~EinstCraz@61.165.253.98) Quit (Remote host closed the connection)
[5:16] * EinstCrazy (~EinstCraz@61.165.253.98) has joined #ceph
[5:20] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[5:21] * Vacuum_ (~Vacuum@88.130.211.4) has joined #ceph
[5:28] * Vacuum__ (~Vacuum@88.130.209.206) Quit (Ping timeout: 480 seconds)
[5:29] <masber> SamYaple, why is that?
[5:31] * tsg (~tgohad@192.55.54.44) has joined #ceph
[5:32] <SamYaple> masber: ceph-osd containers have to live on the host that has the ceph-osd drives. ceph-mons can technically move if the hostname and ip address remain the same, but that seems like it could become a problem
[5:32] * tsg (~tgohad@192.55.54.44) Quit (Remote host closed the connection)
[5:32] <SamYaple> additionally you must use host networking to make the ceph-osd containers work, or use ipv6
[5:33] <SamYaple> ipv4 with multiple ceph-osd on the same host without host-networking will not work
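For reference, a minimal sketch of the setup SamYaple describes: an OSD container pinned to the host that owns the disk, running with host networking. The image name and mounted paths are assumptions for illustration, not something taken from this discussion.

    # host networking plus raw device access; "some-ceph-osd-image" is hypothetical
    docker run -d --name ceph-osd-sdb \
        --net=host \
        --privileged \
        -v /dev:/dev \
        -v /var/lib/ceph:/var/lib/ceph \
        -v /etc/ceph:/etc/ceph \
        some-ceph-osd-image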
[5:44] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) has joined #ceph
[5:44] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Quit: Leaving.)
[5:45] * MrBy__ (~MrBy@85.115.23.42) Quit (Read error: Connection reset by peer)
[5:45] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[5:46] * joshd (~jdurgin@125.16.34.66) has joined #ceph
[5:46] * mattbenjamin (~mbenjamin@125.16.34.66) has joined #ceph
[5:47] * tsg (~tgohad@192.55.54.44) has joined #ceph
[5:50] * valeech (~valeech@71-83-169-82.static.azus.ca.charter.com) has joined #ceph
[5:50] * garphy is now known as garphy`aw
[5:51] * sankarshan (~sankarsha@171.48.20.47) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[5:53] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:55] * MrBy (~MrBy@85.115.23.42) Quit (Ping timeout: 480 seconds)
[5:55] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[6:03] * walcubi__ (~walcubi@p5795B45E.dip0.t-ipconnect.de) has joined #ceph
[6:04] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[6:04] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:10] * bara (~bara@125.16.34.66) has joined #ceph
[6:10] * walcubi_ (~walcubi@p5795AFEA.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:14] * valeech (~valeech@71-83-169-82.static.azus.ca.charter.com) Quit (Quit: valeech)
[6:22] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Read error: Connection reset by peer)
[6:24] * kuku (~kuku@119.93.91.136) Quit (Quit: computer sleep)
[6:25] * kuku (~kuku@119.93.91.136) has joined #ceph
[6:36] * MrBy (~MrBy@85.115.23.42) Quit (Ping timeout: 480 seconds)
[6:39] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[6:41] * tsg_ (~tgohad@134.134.139.76) has joined #ceph
[6:41] * tsg (~tgohad@192.55.54.44) Quit (Remote host closed the connection)
[6:48] * karnan (~karnan@106.51.141.117) has joined #ceph
[6:48] * MrBy (~MrBy@85.115.23.42) Quit (Ping timeout: 480 seconds)
[6:48] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[6:57] * krypto (~krypto@59.97.45.78) has joined #ceph
[7:00] * Jeffrey4l_ (~Jeffrey@110.244.242.55) has joined #ceph
[7:02] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[7:03] * EinstCrazy (~EinstCraz@61.165.253.98) Quit (Remote host closed the connection)
[7:03] * EinstCrazy (~EinstCraz@61.165.253.98) has joined #ceph
[7:05] * EinstCrazy (~EinstCraz@61.165.253.98) Quit (Remote host closed the connection)
[7:07] * Jeffrey4l (~Jeffrey@110.252.44.199) Quit (Ping timeout: 480 seconds)
[7:10] * MrBy_ (~MrBy@85.115.23.42) has joined #ceph
[7:11] * MrBy (~MrBy@85.115.23.42) Quit (Read error: No route to host)
[7:14] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:17] * EinstCrazy (~EinstCraz@61.165.253.98) has joined #ceph
[7:20] * vata (~vata@96.127.202.136) Quit (Quit: Leaving.)
[7:22] * MrBy_ (~MrBy@85.115.23.42) Quit (Read error: Connection reset by peer)
[7:23] * praveen__ (~praveen@171.61.126.101) has joined #ceph
[7:24] * praveen (~praveen@122.172.66.43) Quit (Ping timeout: 480 seconds)
[7:31] * tsg (~tgohad@134.134.139.76) has joined #ceph
[7:33] * tsg_ (~tgohad@134.134.139.76) Quit (Remote host closed the connection)
[7:34] * jcsp (~jspray@125.16.34.66) has joined #ceph
[7:37] * EinstCrazy (~EinstCraz@61.165.253.98) Quit (Remote host closed the connection)
[7:46] * MrBy (~MrBy@85.115.23.42) has joined #ceph
[7:48] * doppelgrau_ (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[7:52] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[8:00] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[8:09] * MrBy (~MrBy@85.115.23.42) Quit (Ping timeout: 480 seconds)
[8:10] * MrBy (~MrBy@85.115.23.2) has joined #ceph
[8:13] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[8:13] * doppelgrau_ is now known as doppelgrau
[8:26] * kefu (~kefu@114.92.125.128) has joined #ceph
[8:28] * MrBy (~MrBy@85.115.23.2) Quit (Ping timeout: 480 seconds)
[8:33] * MrBy (~MrBy@85.115.23.2) has joined #ceph
[8:34] * jclm (~jclm@77.95.96.78) has joined #ceph
[8:35] * jclm (~jclm@77.95.96.78) Quit ()
[8:38] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[8:44] * anvil (~anvil@2a01:c9c0:a1:1000:0:aff:fe6b:ff) has joined #ceph
[8:44] <anvil> Hello
[8:44] <anvil> I'm currently using some rbd commands, like "rbd list --id someid --keyfile somekeyfile". The security would like us to
[8:45] * sleinen (~Adium@2001:620:0:2d:a65e:60ff:fedb:f305) has joined #ceph
[8:45] <anvil> avoid storing keys on disk. Would anyone know *one* way to do that, please?
[8:45] <anvil> (the security *team*)
[8:45] <anvil> I was originally thinking about "rbd list --id someid --keyfile <(some magic command)" but that fails with a <(cat keyfile)...
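One way to keep the key off persistent disk while still handing --keyfile a regular file is a RAM-backed tmpfs. A minimal sketch, assuming a short-lived tmpfs mount satisfies the security team; "some-magic-command" stands in for whatever fetches the key from your secret store.

    mkdir -p /run/ceph-key
    mount -t tmpfs -o size=1m,mode=0700 tmpfs /run/ceph-key
    some-magic-command > /run/ceph-key/client.someid.key    # hypothetical secret fetcher
    rbd list --id someid --keyfile /run/ceph-key/client.someid.key
    rm -f /run/ceph-key/client.someid.key                   # the key only ever lived in RAM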
[8:47] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:49] * krypto (~krypto@59.97.45.78) Quit (Quit: Leaving)
[8:57] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[9:03] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[9:04] * jfaj (~jan@p57983686.dip0.t-ipconnect.de) Quit (Quit: WeeChat 1.5)
[9:09] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[9:11] * jfaj (~jan@p57983686.dip0.t-ipconnect.de) has joined #ceph
[9:12] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:13] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[9:22] * Hemanth (~hkumar_@103.228.221.183) has joined #ceph
[9:22] * Hemanth (~hkumar_@103.228.221.183) Quit (Remote host closed the connection)
[9:32] * Hemanth (~hkumar_@103.228.221.183) has joined #ceph
[9:33] <Hannes> Hi all, I am having the following issue with our ceph cluster: when pushing data to it, 1 osd always starts to block the complete cluster. We have 17 OSD's. When all of them are up, it's osd.16 causing issues; when I stop that one, the issues move to osd.8.
[9:33] <Hannes> What is the best way to troubleshoot this issue?
[9:35] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[9:35] <Hannes> I looked on the internet and found it might be related to disk issues. I double checked and the disks seem to behave correctly.
[9:36] <Hannes> esp. the fact it's always osd.8 starting to block everything as long as osd.16 is down, seems so odd.
[9:36] <Hannes> when osd.16 is up, that's the one blocking everything
[9:38] <cetex> same amount of pg's per node?
[9:38] <cetex> otherwise the node with more pg's will have a higher load
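A quick way to check the distribution cetex is asking about, assuming a Hammer-or-newer cluster where 'ceph osd df' is available:

    ceph osd df tree      # per-OSD usage grouped by host, including the PGS column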
[9:39] * rendar (~I@host74-36-dynamic.31-79-r.retail.telecomitalia.it) has joined #ceph
[9:40] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[9:42] <T1w> depending on the type of disks/controller used the disk behind osd.16 and .8 could simply be silently failing - and requiring longer and longer to write data
[9:43] <Be-El> Hannes: which warning/error message do you get?
[9:43] * derjohn_mob (~aj@46.189.28.95) has joined #ceph
[9:44] * kefu (~kefu@114.92.125.128) has joined #ceph
[9:44] <Hannes> T1w: well, smartctl and hpe (it's a dl180 with original HPE disks) seem to agree the disks are in a good state
[9:45] <Hannes> Be-El: 2016-09-13 07:45:02.917215 7f42b5fe1700 0 log_channel(cluster) log [WRN] : slow request 850.563890 seconds old, received at 2016-09-13 07:30:52.353221: osd_op(client.42685962.0:1 twoo__306884.esclf_S.201609011100-1200-6.gz [writefull 0~4194304] 21.873a3f90 RETRY=22 ondisk+retry+write+known_if_redirected e8647) currently waiting for missing object
[9:45] <T1w> smartctl could easily be wrong - if the disks take longer to complete a write due to replacement of sectors..
[9:47] <Hannes> Be-El: hmm, I do have 6 unfound objects
[9:47] <T1w> the difference between a desktop and raid disk is that a raid disk only uses up to 5-6 seconds to complete a command before reporting ok or fail back to the controller while a desktop disk easily could use a minute or more..
[9:47] <T1w> this could probably cause slow writes
[9:47] <Hannes> T1w: but not writes blocked for 1000's of seconds?
[9:47] * TMM (~hp@dhcp-077-248-009-229.chello.nl) Quit (Quit: Ex-Chat)
[9:48] <T1w> no, that seems wrong on other levels than that
[9:48] * Hemanth (~hkumar_@103.228.221.183) Quit (Quit: Leaving)
[9:48] <T1w> but perhaps the disk just silently dropped the request
[9:48] <Hannes> T1w: but, wouldn't the HPE software catch errors on the disk?
[9:48] <T1w> no idea
[9:48] <Hannes> HPE software/hardware/firmware
[9:48] <T1w> probably depends on any number of things
[9:49] <T1w> (and I'm not familiar with HPE)
[9:49] <Be-El> Hannes: the problem is not the disk, but the unfound objects, as the message indicates
[9:50] <Hannes> Be-El: okay, I guess the quickfix is to mark them as lost forever?
[9:50] <Be-El> Hannes: that may result in a data loss
[9:50] <Hannes> the cluster has been in a very rough state and I presume they really are lost forever
[9:50] <Be-El> Hannes: did you have any problems with other osds recently?
[9:51] <Hannes> I added 6 osd's 2 weeks ago and the cluster has been unstable since.
[9:51] <Hannes> it was unstable due to not enough ram/swap space
[9:51] <IcePic> I wonder if "ceph -w" output of "slow request" really means the device was slow as opposed to "I was planning to do this I/O operation 850 seconds ago and haven't been able to since"
[9:52] <Hannes> and when we added swapfiles, it went better but we kept having problems of hanging operations
[9:53] <Hannes> Be-El: the data loss shouldn't be a big problem because the data on there is testing data or incomplete backups due to the instability
[9:53] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[9:53] <Be-El> IcePic: afaik it just means that the request was started some time ago and did not finish yet. there may be reasons other than disk related ones, e.g. network or a bad overall state
[9:53] <Be-El> Hannes: ok, in that case marking the objects as lost is the quick fix
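The quick fix Be-El mentions is the mark_unfound_lost command; a sketch with a placeholder PG id (revert rolls each object back to a previous version where one exists, delete forgets the objects entirely):

    ceph pg <pgid> mark_unfound_lost revert
    # or, if the data is expendable as Hannes says:
    ceph pg <pgid> mark_unfound_lost delete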
[9:53] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:53] * bjozet (~bjozet@82.183.17.144) has joined #ceph
[9:54] <Hannes> Be-El: is there a way to know what pool the objects are part of?
[9:54] <Hannes> just to know what is affected?
[9:54] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[9:54] * Hemanth (~hkumar_@103.228.221.183) has joined #ceph
[9:55] <Be-El> Hannes: not sure about it, but 21.873a3f90 looks like a PG id in the message above
[9:55] <Be-El> although it's a funny one
[9:58] <Be-El> Hannes: do you have a pool with id 21, which is a EC pool?
[9:59] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[10:02] <Hannes> they are all EC pools
[10:03] <Be-El> Hannes: does 'ceph health detail' list more information about the unfound objects?
[10:04] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[10:06] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[10:07] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[10:09] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[10:10] <ledgr> Hello, i have removed a pool that consisted of only one OSD
[10:10] <ledgr> I have also removed that osd from crush map
[10:11] <ledgr> now I see that I have 128PGs stuck+active+remapped
[10:11] <ledgr> I don't need this pool and I want to remove those PGs
[10:11] <ledgr> how do I do that?
[10:11] <Hannes> Be-El: http://pastebin.com/Bestr7AV
[10:12] <Hannes> it does, I'm checking some more details about those pg's
[10:13] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[10:14] <Hannes> Be-El: there is indeed a pool with id 21, which is indeed an EC pool
[10:19] <Hannes> I found what files are affected... trying to remove the file itself... when trying to mark unfound as lost, it complained: Error EINVAL: pg has 2 unfound objects but we haven't probed all sources, not marking lost
[10:22] * chenmin_ (~chenmin@118.250.186.196) has joined #ceph
[10:24] * Hannes (~Hannes@hygeia.opentp.be) Quit (Quit: Changing server)
[10:28] * rakeshgm (~rakesh@106.51.28.220) has joined #ceph
[10:28] * chenmin (~chenmin@118.250.186.196) Quit (Ping timeout: 480 seconds)
[10:28] * chenmin_ is now known as chenmin
[10:28] * Hannes (~Hannes@hygeia.opentp.be) has joined #ceph
[10:29] <Hannes> Be-El: I was disconnected for a moment... did you have any answers for me?
[10:30] <Be-El> Hannes: you can use 'ceph pg X.Y query' to get an overview of the internal state. there's also a 'history' section containing all osds that may also hold objects for that pg
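A sketch of that query, again with a placeholder PG id; when a PG has unfound objects, the recovery_state section of the output includes a might_have_unfound list showing which OSDs may still hold copies and whether they have been probed yet:

    ceph pg <pgid> query | less
    # look under "recovery_state" -> "might_have_unfound": each entry names an OSD
    # and a status such as "already probed", "querying" or "osd is down"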
[10:31] * Dw_Sn (~Dw_Sn@00020a72.user.oftc.net) has joined #ceph
[10:31] <Be-El> Hannes: and before marking anything lost you should let the cluster settle. there are recovery operations runnings
[10:31] * derjohn_mob (~aj@46.189.28.95) Quit (Ping timeout: 480 seconds)
[10:32] <Be-El> i remember having lost objects messages during recovery/cluster extension running the hammer release
[10:35] * zigo_ is now known as zigo
[10:39] <Hannes> Be-El: okay... we'll wait :D
[10:47] * Hemanth (~hkumar_@103.228.221.183) Quit (Ping timeout: 480 seconds)
[10:56] * derjohn_mob (~aj@46.189.28.56) has joined #ceph
[11:00] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:6004:a51a:3c47:611d) has joined #ceph
[11:00] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[11:02] * tsg_ (~tgohad@fmdmzpr04-ext.fm.intel.com) has joined #ceph
[11:03] * gmoro (~guilherme@193.120.208.221) Quit (Remote host closed the connection)
[11:05] * tsg (~tgohad@134.134.139.76) Quit (Remote host closed the connection)
[11:06] * tsg (~tgohad@134.134.139.76) has joined #ceph
[11:06] * tsg_ (~tgohad@fmdmzpr04-ext.fm.intel.com) Quit (Remote host closed the connection)
[11:07] * gmoro (~guilherme@193.120.208.221) has joined #ceph
[11:17] * xENO_1 (~Harryhy@104.156.240.173) has joined #ceph
[11:19] * Hemanth (~hkumar_@103.228.221.183) has joined #ceph
[11:21] <IcePic> use the time to write up ceph cluster monitoring now that you have an error condition to practice on. ;)
[11:24] <Be-El> cluster monitoring....good point
[11:24] <Be-El> does anyone use graphite and the ceph plugin for monitoring?
[11:25] <Be-El> later...meeting time...
[11:27] * Dw_Sn (~Dw_Sn@00020a72.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:27] * mattch (~mattch@w5430.see.ed.ac.uk) has joined #ceph
[11:29] <ivve> anyone have a good understanding of last deep scrub vs deep scrub stamp?
[11:30] <ivve> when dumping a pg
[11:30] * Dw_Sn (~Dw_Sn@forbin.ichec.ie) has joined #ceph
[11:30] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Read error: Connection reset by peer)
[11:30] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[11:31] <ivve> same with regular scrubs vs scrub stamp
[11:31] * Dw_Sn is now known as Guest241
[11:31] <ivve> or rather last_scrub
[11:32] * TMM (~hp@185.5.121.201) has joined #ceph
[11:36] * ashah (~ashah@103.16.70.200) has joined #ceph
[11:39] * tsg_ (~tgohad@134.134.139.76) has joined #ceph
[11:39] * tsg (~tgohad@134.134.139.76) Quit (Remote host closed the connection)
[11:41] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[11:47] * xENO_1 (~Harryhy@104.156.240.173) Quit ()
[11:52] * rakeshgm (~rakesh@106.51.28.220) Quit (Quit: Leaving)
[11:54] * LiamMon_ (~liam.monc@94.0.108.242) has joined #ceph
[11:59] * LiamMon (~liam.monc@90.196.42.2) Quit (Ping timeout: 480 seconds)
[12:01] * ledgr_ (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[12:01] * bniver (~bniver@125.16.34.66) has joined #ceph
[12:02] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Read error: Connection reset by peer)
[12:12] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:6004:a51a:3c47:611d) Quit (Ping timeout: 480 seconds)
[12:14] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[12:14] * rwheeler (~rwheeler@125.16.34.66) has joined #ceph
[12:17] * walcubi__ is now known as walcubi
[12:20] * kees_ (~kees@2001:610:600:8774:3d73:cc06:5d74:6761) has joined #ceph
[12:21] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) has joined #ceph
[12:23] <walcubi> Be-El, I wrote my own, based on ceph daemonperf
[12:24] <walcubi> I ran into some permissions issues on Jewel though, so I guess the ceph collectd plugin would suffer from the same problem?
[12:24] * tsg_ (~tgohad@134.134.139.76) Quit (Remote host closed the connection)
[12:24] * tsg (~tgohad@134.134.139.76) has joined #ceph
[12:24] <walcubi> I run collectd as a non-root user on my servers, at least.
[12:26] <walcubi> I think I had to add the monitoring user to the ceph group, chmod 755 /var/lib/ceph; and add umask 0002 to the mon and osd upstart configs.
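A rough sketch of the workaround walcubi describes, with the monitoring username assumed (collectd) and the paths taken from the message above:

    usermod -a -G ceph collectd        # hypothetical monitoring user
    chmod 755 /var/lib/ceph
    # plus 'umask 0002' added to the ceph-mon / ceph-osd upstart configuration so
    # the admin sockets are created group-accessible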
[12:27] <kees_> Hey all, could someone do a quick sanity check on my shoppinglist for a ceph cluster? https://paste.ee/p/9spIJ
[12:28] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:6004:a51a:3c47:611d) has joined #ceph
[12:29] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[12:30] <ivve> anyone know the difference between last_deep_scrub and deep_scrub_stamp when dumping PGs?
[12:31] <ivve> last is pretty obvious, but stamp?
[12:31] <ivve> what if the dates are different
[12:36] * LiamMon_ (~liam.monc@94.0.108.242) Quit (Ping timeout: 480 seconds)
[12:37] * LiamMon (~liam.monc@94.14.197.99) has joined #ceph
[12:46] * sleinen1 (~Adium@macsl.switch.ch) has joined #ceph
[12:49] * Sirrush (~hyst@178-175-128-50.static.host) has joined #ceph
[12:52] <walcubi> ivve, looks like last_deep_scrub is a reference to the last update. Whereas stamp is a reference to when scrubbing actually finished.
[12:53] * sleinen (~Adium@2001:620:0:2d:a65e:60ff:fedb:f305) Quit (Ping timeout: 480 seconds)
[12:53] <ivve> hmm
[12:53] <ivve> i feel stupid, what is it supposed to mean?
[12:53] <ivve> i kinda understand what you mean but
[12:54] <ivve> yeah i start a deep, last gets updated at once
[12:54] <ivve> but can a deep scrub be aborted?
[12:54] <ivve> does it update just as the command is queued?
[12:54] <walcubi> Not really, as I understand it.
[12:54] <ivve> so just for instance
[12:55] <ivve> i have a pg that looks like this:
[12:55] <ivve> 69938'141310 2016-09-13 12:45:29.515006 69916'98474 2016-08-16 12:42:04.091076
[12:55] <walcubi> You can set nodeep-scrub, but any running scrubs will continue until they're done.
[12:55] <ivve> yea
[12:55] <ivve> and i have set my deep-scrub-interval at 2.4m seconds
[12:56] <ivve> (4w)
[12:56] <ivve> so it should start automatic deepscrub of this one
[12:56] <ivve> but what if the date looks like this
[12:56] <ivve> 2016-09-06 12:28:52.117061
[12:57] <ivve> that is last
[12:57] <ivve> and stamp is 2016-08-16 12:26:55.34095
[12:57] <ivve> im like.. what the hell?
[12:57] <ivve> if it scrubbed last week
[12:57] <ivve> why is it scrubbing today
[12:57] <ivve> instead of in 3 weeks
[12:57] * Lokta (~Lokta@carbon.coe.int) has joined #ceph
[12:58] <walcubi> scrubbing happens daily, deep scrubbing weekly by default IIRC.
[12:58] <ivve> yea both dates are DEEP
[12:59] <ivve> 1.1181 3873 0 0 0 0 16233066496 3010 3010 active+clean 2016-09-06 12:28:52.117061 69938'266247 69938:263536 [64,18] 64 [64,18] 64 69938'244072 2016-09-06 12:28:52.116586 69916'197637 2016-08-16 12:26:55.34095
[12:59] <ivve> so you can see regular scrub was done on the same day as deep
[12:59] <ivve> but stamp for deep is 08-16
[12:59] <ivve> and deep interval is set at 28days (or 2.4mill seconds)
[13:00] <nikbor> should the blockcg throttling limits work on ceph nodes?
[13:00] <ivve> super weird behaviour
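A way to sanity-check what ivve describes, assuming admin-socket access on the node hosting osd.64 (the primary of the PG in the dump above):

    ceph daemon osd.64 config show | grep -E 'scrub.*interval'   # intervals the OSD actually runs with
    ceph pg dump | less                                          # compare LAST_DEEP_SCRUB vs DEEP_SCRUB_STAMP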
[13:06] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:06] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:12] * chenmin_ (~chenmin@118.250.186.196) has joined #ceph
[13:18] * chenmin (~chenmin@118.250.186.196) Quit (Ping timeout: 480 seconds)
[13:18] * chenmin_ is now known as chenmin
[13:19] * Sirrush (~hyst@2RTAAABP4.tor-irc.dnsbl.oftc.net) Quit ()
[13:19] * Racpatel (~Racpatel@2601:87:3:31e3::2433) has joined #ceph
[13:27] * ledgr_ (~ledgr@88-119-196-104.static.zebra.lt) Quit (Remote host closed the connection)
[13:28] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[13:33] * kefu (~kefu@114.92.125.128) has joined #ceph
[13:35] * tsg_ (~tgohad@fmdmzpr04-ext.fm.intel.com) has joined #ceph
[13:35] * tsg (~tgohad@134.134.139.76) Quit (Remote host closed the connection)
[13:36] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[13:42] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[13:42] * kefu (~kefu@114.92.125.128) has joined #ceph
[13:44] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[13:56] * ashah (~ashah@103.16.70.200) Quit (Ping timeout: 480 seconds)
[13:58] * sleinen (~Adium@130.59.94.73) has joined #ceph
[14:00] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:00] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:01] * sleinen2 (~Adium@2001:620:0:82::107) has joined #ceph
[14:04] * rwheeler (~rwheeler@125.16.34.66) Quit (Quit: Leaving)
[14:05] * sleinen1 (~Adium@macsl.switch.ch) Quit (Ping timeout: 480 seconds)
[14:07] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:07] * tsg_ (~tgohad@fmdmzpr04-ext.fm.intel.com) Quit (Remote host closed the connection)
[14:07] * sleinen (~Adium@130.59.94.73) Quit (Ping timeout: 480 seconds)
[14:10] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[14:15] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[14:16] * Nicho1as (~nicho1as@14.52.121.20) has joined #ceph
[14:19] * joshd (~jdurgin@125.16.34.66) Quit (Quit: Leaving.)
[14:23] * bniver (~bniver@125.16.34.66) Quit (Remote host closed the connection)
[14:29] * jcsp (~jspray@125.16.34.66) Quit (Ping timeout: 480 seconds)
[14:30] * mattbenjamin (~mbenjamin@125.16.34.66) Quit (Ping timeout: 480 seconds)
[14:30] * bara (~bara@125.16.34.66) Quit (Ping timeout: 480 seconds)
[14:31] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[14:32] * danieagle (~Daniel@187.35.187.153) has joined #ceph
[14:35] * rdas (~rdas@121.244.87.116) Quit (Ping timeout: 480 seconds)
[14:35] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:45] * garphy`aw is now known as garphy
[14:49] <anvil> "oracle..."
[14:49] <anvil> oups
[14:52] * ashah (~ashah@103.16.70.200) has joined #ceph
[14:52] * ashah (~ashah@103.16.70.200) Quit ()
[14:55] * rakeshgm (~rakesh@106.51.28.220) has joined #ceph
[14:56] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[14:58] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) Quit (Quit: Ex-Chat)
[14:58] * mattbenjamin (~mbenjamin@121.244.54.198) has joined #ceph
[14:59] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) has joined #ceph
[15:02] * rraja (~rraja@121.244.87.117) has joined #ceph
[15:02] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[15:06] * rwheeler (~rwheeler@202.62.94.195) has joined #ceph
[15:07] * Jourei (~sixofour@178-175-128-50.static.host) has joined #ceph
[15:07] * joshd (~jdurgin@202.62.94.195) has joined #ceph
[15:08] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:10] * bniver (~bniver@202.62.94.195) has joined #ceph
[15:12] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[15:15] * wjw-freebsd (~wjw@smtp.medusa.nl) has joined #ceph
[15:16] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[15:17] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[15:18] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:18] * jordan_c (~jconway@cable-192.222.246.54.electronicbox.net) Quit (Ping timeout: 480 seconds)
[15:22] * rdias (~rdias@2001:8a0:749a:d01:7d47:77c7:9af1:4502) Quit (Ping timeout: 480 seconds)
[15:22] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:22] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:22] * kefu (~kefu@114.92.125.128) has joined #ceph
[15:22] * rdias (~rdias@2001:8a0:749a:d01:45c0:31f1:4419:16bf) has joined #ceph
[15:24] * rakeshgm (~rakesh@106.51.28.220) Quit (Quit: Leaving)
[15:25] * jcsp (~jspray@121.244.54.198) has joined #ceph
[15:26] * Dw_Sn (~Dw_Sn@forbin.ichec.ie) has joined #ceph
[15:27] * Dw_Sn is now known as Guest258
[15:28] * Guest241 (~Dw_Sn@forbin.ichec.ie) Quit (Ping timeout: 480 seconds)
[15:32] <sep> any of you know how to get an osd running again when it crashes on startup? are there any tools one can run to get the osd operational again? or at the very least empty the disk of objects
[15:32] <btaylor> anyone know of any reputable companies other than Red Hat that offer support and/or consulting for ceph on Debian?
[15:34] <sep> btaylor, good question :) I'd like the answer for that as well :)
[15:34] * diver (~diver@95.85.8.93) has joined #ceph
[15:35] * erhudy (uid89730@id-89730.ealing.irccloud.com) has joined #ceph
[15:35] <btaylor> sep: as far as i know i'd just go thru the standard remove osd steps, then maybe add it right back in after you zap it
[15:35] <sep> btaylor, problem is i have 3 pg's down. and the objects are on the osd's that do not want to start.
[15:36] <sep> so i am reluctant to zap them
[15:36] * LiamMon_ (~liam.monc@90.200.95.109) has joined #ceph
[15:36] <btaylor> what error are you getting when you start them?
[15:37] <btaylor> are the disks mounted in the right place? that's been my problem so far
[15:37] * Jourei (~sixofour@26XAABWKZ.tor-irc.dnsbl.oftc.net) Quit ()
[15:38] * walcubi (~walcubi@p5795B45E.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[15:40] * diver_ (~diver@95.85.8.93) has joined #ceph
[15:41] <sep> the log is here ; http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/012961.html
[15:41] * LiamMon__ (~liam.monc@94.14.202.192) has joined #ceph
[15:41] <sep> they are mounted and i can read the files easily. but the startup fails somewhere during reading the osd
[15:41] <sep> i do not get any io errors in dmesg or in the log
[15:42] <btaylor> wait. you made 1 osd out of 5 drives?
[15:42] * LiamMon (~liam.monc@94.14.197.99) Quit (Ping timeout: 480 seconds)
[15:43] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[15:43] <btaylor> "software raid5 consisting of 5 3TB harddrives." so you are running software raid on top of software raid?
[15:43] <btaylor> essentially?
[15:44] <btaylor> sep: ?
[15:44] <sep> running ceph osd on top of software raid yes
[15:44] <btaylor> i think that's your problem.
[15:44] <sep> since the hardware i have does not have ram to run 36 osd's, but it does easily run 6 osd's
[15:45] * diver (~diver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[15:45] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[15:45] * doppelgrau1 (~doppelgra@132.252.235.172) has joined #ceph
[15:45] <sep> well i have many of these osd's that do not have this problem
[15:45] <sep> so there must be something with these 2 that are not working.
[15:45] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[15:45] * LiamMon_ (~liam.monc@90.200.95.109) Quit (Ping timeout: 480 seconds)
[15:46] <btaylor> i'd imagine it's just a problem waiting to happen, and degrading performance because of the different software components trying to do their own thing and basically duplicating effort
[15:46] <sep> degrading performance is acceptable
[15:46] <sep> of course i have 3x replica osd on top of raid5. so there is an additional level of waste.
[15:47] * squizzi_ (~squizzi@107.13.237.240) has joined #ceph
[15:47] <sep> but only running 10 out of 36 disks is a bigger waste
[15:48] <sep> i know it's not optimal. but it's what i got. and i think it can make some hard to reach bugs more visible.
[15:48] <sep> but i do not think this problem with the non starting osd is related to the raid5. since there are 15 other osd's not having this problem.
[15:50] * georgem (~Adium@206.108.127.16) has joined #ceph
[15:50] <Gugge-47527> sep: all the "No such file or directory" errors, are those files really missing?
[15:51] <Gugge-47527> maybe you could export the down pg's and import them on a new working osd
[15:51] <Gugge-47527> if the data is really there
[15:52] <Gugge-47527> read this to get ideas: http://ceph.com/community/incomplete-pgs-oh-my/
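The approach in that post relies on ceph-objectstore-tool to copy a PG off the broken OSD and into a healthy one; a condensed sketch with placeholder ids, run while the OSDs involved are stopped:

    # on the broken OSD's host:
    ceph-objectstore-tool --op export --pgid <pgid> \
        --data-path /var/lib/ceph/osd/ceph-<id> \
        --journal-path /var/lib/ceph/osd/ceph-<id>/journal \
        --file /tmp/<pgid>.export
    # then on a healthy OSD's host (also stopped while importing):
    ceph-objectstore-tool --op import \
        --data-path /var/lib/ceph/osd/ceph-<other-id> \
        --journal-path /var/lib/ceph/osd/ceph-<other-id>/journal \
        --file /tmp/<pgid>.export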
[15:53] <sep> yes those errors are correct. i have not found any with random sampling
[15:54] * walcubi (~walcubi@p5099a7c3.dip0.t-ipconnect.de) has joined #ceph
[15:54] <sep> ahh that looks like a promising tool
[15:55] <sep> been googling a lot the last 2 days but have not come across that one.
[15:57] <georgem> @loicd: can you please take a look at http://tracker.ceph.com/issues/15896 when you get the chance
[15:57] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[15:57] <sep> thanks a lot. this looks like it should be able to recover the 3 down pg's , once that is done the osd's can safely be zapped
[16:02] * squizzi_ (~squizzi@107.13.237.240) Quit (Ping timeout: 480 seconds)
[16:02] * wjw-freebsd (~wjw@smtp.medusa.nl) Quit (Ping timeout: 480 seconds)
[16:03] * karnan (~karnan@106.51.141.117) Quit (Quit: Leaving)
[16:03] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:06] * walcubi_ (~walcubi@p5795B317.dip0.t-ipconnect.de) has joined #ceph
[16:06] * vata (~vata@207.96.182.162) has joined #ceph
[16:07] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[16:07] <infernix> has anyone used nfs-ganesha-rgw? or mixed samba with vfs_ceph together with nfs-ganesha-ceph to allow both windows and linux clients access to the same data?
[16:08] <infernix> i've looked at the native windows cephfs client based on dokan, but that seems rather unmaintained
[16:08] * Drezil1 (~TehZomB@162.216.46.182) has joined #ceph
[16:10] <ira> infernix: I don't think multiprotocol is going to work reliably.
[16:11] * walcubi (~walcubi@p5099a7c3.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[16:17] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[16:18] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[16:21] * chenmin (~chenmin@118.250.186.196) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[16:21] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[16:22] * valeech (~valeech@71-83-169-82.static.azus.ca.charter.com) has joined #ceph
[16:22] * wjw-freebsd (~wjw@smtp.medusa.nl) has joined #ceph
[16:25] * squizzi_ (~squizzi@107.13.237.240) has joined #ceph
[16:26] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:27] * Guest258 (~Dw_Sn@forbin.ichec.ie) Quit (Quit: leaving)
[16:28] * walcubi_ is now known as walcubi
[16:29] * joshd (~jdurgin@202.62.94.195) Quit (Quit: Leaving.)
[16:30] <walcubi> Tried out turning on skinny-metadata + nodesize=16k on one of the btrfs disks, and osd load and op latency (and filestore commit) is double that of all other disks.
[16:31] <walcubi> This is the opposite of what I had hoped for. Going to take it out and reformat using nodesize=32k, and hope that it wasn't the former that is causing problems.
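For what it's worth, the mkfs flags behind the combination walcubi is testing look like this (device name is a placeholder; -O enables the feature at mkfs time, -n sets the node size):

    mkfs.btrfs -f -O skinny-metadata -n 16k /dev/sdX   # the combination that doubled latency
    mkfs.btrfs -f -n 32k /dev/sdX                      # the next attempt, nodesize only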
[16:38] * Drezil1 (~TehZomB@162.216.46.182) Quit ()
[16:39] * xarses_ (~xarses@64.124.158.3) has joined #ceph
[16:40] * MrBy (~MrBy@85.115.23.2) Quit (Quit: Ex-Chat)
[16:43] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[16:44] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[16:44] * kristen (~kristen@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[16:44] * srk (~Siva@32.97.110.50) has joined #ceph
[16:45] * kristenc (~kristen@134.134.139.78) has joined #ceph
[16:45] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[16:46] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) has joined #ceph
[16:46] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[16:46] * kristen (~kristen@jfdmzpr01-ext.jf.intel.com) Quit (Remote host closed the connection)
[16:52] * kees_ (~kees@2001:610:600:8774:3d73:cc06:5d74:6761) Quit (Remote host closed the connection)
[16:54] * bara (~bara@121.244.54.198) has joined #ceph
[16:55] * ntpttr_ (~ntpttr@192.55.54.38) has joined #ceph
[16:55] * bara_ (~bara@121.244.54.198) has joined #ceph
[16:57] * bara (~bara@121.244.54.198) Quit (Remote host closed the connection)
[16:58] * bara_ (~bara@121.244.54.198) Quit (Remote host closed the connection)
[16:58] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[17:00] * valeech (~valeech@71-83-169-82.static.azus.ca.charter.com) Quit (Quit: valeech)
[17:01] * wushudoin (~wushudoin@2601:646:8200:c9f0:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:01] * yanzheng (~zhyan@125.70.21.187) Quit (Quit: This computer has gone to sleep)
[17:01] * wushudoin (~wushudoin@2601:646:8200:c9f0:2ab2:bdff:fe0b:a6ee) Quit ()
[17:01] * wushudoin (~wushudoin@2601:646:8200:c9f0:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:05] * RaidSoft (~sixofour@exit0.radia.tor-relays.net) has joined #ceph
[17:08] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[17:11] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[17:11] * kristenc (~kristen@134.134.139.78) Quit (Remote host closed the connection)
[17:11] * kefu (~kefu@li1445-134.members.linode.com) has joined #ceph
[17:12] * diver (~diver@216.85.162.34) has joined #ceph
[17:12] * krogon (~krogon@irdmzpr01-ext.ir.intel.com) Quit (Remote host closed the connection)
[17:13] * ntpttr_ (~ntpttr@192.55.54.38) Quit (Remote host closed the connection)
[17:16] * EinstCrazy (~EinstCraz@61.165.253.98) has joined #ceph
[17:17] * kefu (~kefu@li1445-134.members.linode.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:17] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[17:18] * diver_ (~diver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[17:21] * krogon (~krogon@irdmzpr01-ext.ir.intel.com) has joined #ceph
[17:22] * squizzi_ (~squizzi@107.13.237.240) Quit (Ping timeout: 480 seconds)
[17:22] * squizzi_ (~squizzi@107.13.237.240) has joined #ceph
[17:24] * grauzikas (grauzikas@78-56-222-78.static.zebra.lt) has joined #ceph
[17:24] * kristenc (~kristen@jfdmzpr06-ext.jf.intel.com) has joined #ceph
[17:25] <grauzikas> Hello, i'm integrating ceph with openstack and i'm a little bit confused, can i use, for example, the same node as the ceph admin node and the openstack controller node?
[17:25] <grauzikas> what about the monitor? does it require a different node, or can it be the same controller or compute or any other node?
[17:25] * krogon (~krogon@irdmzpr01-ext.ir.intel.com) Quit ()
[17:26] <grauzikas> i'm planning to use a separate OSD server with 24 ssd drives
[17:26] <grauzikas> at this moment only one, later i'll add two more
[17:28] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) Quit (Quit: Leaving.)
[17:28] * krogon (~krogon@irdmzpr01-ext.ir.intel.com) has joined #ceph
[17:30] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving.)
[17:30] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[17:31] * EinstCrazy (~EinstCraz@61.165.253.98) Quit (Ping timeout: 480 seconds)
[17:31] * kefu (~kefu@114.92.125.128) has joined #ceph
[17:32] <walcubi> I thought that when marking an OSD as 'out' no more writes would be sent to it
[17:33] * EinstCrazy (~EinstCraz@61.165.253.98) has joined #ceph
[17:35] * RaidSoft (~sixofour@9J5AAAQ7U.tor-irc.dnsbl.oftc.net) Quit ()
[17:36] * kefu is now known as kefu|afk
[17:38] * danieagle (~Daniel@187.35.187.153) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[17:38] * EinstCrazy (~EinstCraz@61.165.253.98) Quit (Read error: Connection reset by peer)
[17:38] * EinstCrazy (~EinstCraz@61.165.253.98) has joined #ceph
[17:39] <erhudy> you can co-locate the monitors with the openstack control plane but be cautious of resource starvation, we occasionally have issues where monitor performance suffers when the control plane is under heavy load
[17:40] * wjw-freebsd (~wjw@smtp.medusa.nl) Quit (Ping timeout: 480 seconds)
[17:40] * steve (~steve@ip68-98-63-137.ph.ph.cox.net) has joined #ceph
[17:40] * steve is now known as Guest269
[17:41] <Guest269> Hi! I was wondering if anyone had any info. on setting up rgw with erasure encoded pools ? Do I just delete the .rgw.root and default.rgw.* pools and re-create them as ec pools ?
[17:42] <walcubi> can I speed up recovery with rsync or something?
[17:42] * krogon (~krogon@irdmzpr01-ext.ir.intel.com) Quit (Remote host closed the connection)
[17:42] <walcubi> The data being written to bad disk is not stopping
[17:43] <walcubi> Which is the complete opposite of what happened last time I marked a disk as out
[17:44] <grauzikas> ok thank you
[17:44] <grauzikas> also if i have ceph with only ssd drives
[17:44] * Guest209 is now known as herrsergio
[17:44] <diver> @Guest269, no. I suggest to keep other pools on replication
[17:44] * krogon (~krogon@irdmzpr01-ext.ir.intel.com) has joined #ceph
[17:44] <diver> and create only default.rgw.buckets.data with EC
[17:44] <diver> i.e.
[17:44] <diver> ceph osd erasure-code-profile set erasure-code-profile-k4-m2-host ruleset-failure-domain=host k=4 m=2
[17:44] <diver> ceph osd pool create default.rgw.buckets.data 768 768 erasure erasure-code-profile-k4-m2-host
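To double-check the profile before building the pool on top of it, the matching read-back command is:

    ceph osd erasure-code-profile get erasure-code-profile-k4-m2-host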
[17:45] * mykola (~Mikolaj@91.245.74.120) has joined #ceph
[17:45] <grauzikas> ceph-deploy osd prepare server:disk: ... and what about the journal? use the same disk for journals?
[17:45] <grauzikas> in a server i have only ssd drives of the same type
[17:45] <diver> @grauzikas, just leave as server:disk
[17:46] <grauzikas> ok
[17:46] <diver> and then it will create journal on the same SSD with the size you specified in the ceph.conf
[17:46] <grauzikas> thank you
[17:46] <diver> or 5G default
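The ceph.conf setting diver_ is referring to; the value is in MB, and 5120 MB (5G) is the default:

    [osd]
    osd journal size = 5120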
[17:46] * imcsk8 (~ichavero@189.155.163.170) has joined #ceph
[17:46] * Skaag (~lunix@65.200.54.234) has joined #ceph
[17:47] <imcsk8> hello, i have a problem after an upgrade to 10.2.2: when i do a ceph osd tree i see my osd.0 down but ceph-osd is running. how can i fix this?
[17:47] <diver> mine oneliner for SSD's in the pool was:
[17:47] <diver> ceph-deploy --overwrite-conf osd create ceph-node01:sdf ceph-node02:sdf ceph-node03:sdf ceph-node04:sdf ceph-node05:sdf ceph-node06:sdf
[17:47] <diver> @imcsk8, check logs first of all. probably auth error
[17:47] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[17:49] * diver (~diver@216.85.162.34) Quit (Remote host closed the connection)
[17:49] * diver (~diver@95.85.8.93) has joined #ceph
[17:49] <Guest269> diver: oh thanks - just restart the rgw service after re-creating the pool, or do I need to recreate the user too ?
[17:50] * moegyver (~moe@212.85.78.250) Quit (Ping timeout: 480 seconds)
[17:50] * ntpttr_ (~ntpttr@134.134.139.83) has joined #ceph
[17:50] * sankarshan (~sankarsha@121.244.87.117) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[17:50] <imcsk8> diver: thanks! i just found out that i forgot to change the user for the log files
[17:53] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[17:53] * ntpttr_ (~ntpttr@134.134.139.83) Quit ()
[17:54] * valeech (~valeech@97.93.161.13) has joined #ceph
[18:00] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:00] * bara (~bara@121.244.54.198) has joined #ceph
[18:00] * sudocat1 (~dibarra@192.185.1.19) has joined #ceph
[18:02] * sudocat (~dibarra@192.185.1.20) Quit (Read error: Connection reset by peer)
[18:02] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:04] * squizzi_ (~squizzi@107.13.237.240) Quit (Ping timeout: 480 seconds)
[18:05] * sudocat (~dibarra@192.185.1.20) Quit ()
[18:05] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:07] * QuantumBeep (~Scaevolus@76.164.224.66) has joined #ceph
[18:09] * sudocat1 (~dibarra@192.185.1.19) Quit (Ping timeout: 480 seconds)
[18:10] * linuxkidd (~linuxkidd@ip70-189-202-62.lv.lv.cox.net) has joined #ceph
[18:12] * Hemanth (~hkumar_@103.228.221.183) Quit (Remote host closed the connection)
[18:13] * jarrpa (~jarrpa@63.225.131.166) Quit (Ping timeout: 480 seconds)
[18:15] * EinstCrazy (~EinstCraz@61.165.253.98) Quit (Remote host closed the connection)
[18:16] * rendar (~I@host74-36-dynamic.31-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[18:17] * diver_ (~diver@95.85.8.93) has joined #ceph
[18:18] <diver_> @Guest269 no need to re-create the user. remove all the buckets/files you made, stop service, recreate the pool, start service back.
[18:18] <imcsk8> how can i fix the message "mds cluster is degraded" ?
[18:18] <Guest269> diver_: yep - got it running now - thank you very much :-)
[18:19] <diver_> np
[18:19] <imcsk8> i saw that i can run "ceph-mds -i mon0 -d --reset-journal 0" but i'm not sure if it's the correct procedure
[18:20] <diver_> EC shows great results for huge files, but with small files (less than 50KB) it has too much overhead
[18:20] <diver_> in my tests with 4-2 EC and 18KB files it had 3.2x overhead.
[18:21] <diver_> where by logic it should be 1.5 ((4+2)/4)
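For context, the 1.5 figure is just the raw k+m expansion factor; spelling out the arithmetic under that assumption (the attribution of the gap is a guess, not something stated in the log):

    expansion     = (k + m) / k = (4 + 2) / 4 = 1.5
    expected size ~ 18 KB x 1.5 = 27 KB per object
    observed size ~ 18 KB x 3.2 = 57.6 KB per object
    # the ~30 KB difference is per-object overhead, e.g. chunk padding to the
    # EC stripe unit plus filestore metadata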
[18:22] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[18:22] * rendar (~I@host221-46-dynamic.31-79-r.retail.telecomitalia.it) has joined #ceph
[18:22] <Guest269> diver_: we're looking at 6 million files/day of avg size of 2MB
[18:22] <Guest269> diver_: unfortunately, write speed is all that matters
[18:23] <diver_> ooo, then check out this thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/012987.html
[18:23] <Guest269> diver_: err.. - well, write speed and storage density (replication x3 would be a non-starter)
[18:23] <diver_> and this: http://www.slideshare.net/Red_Hat_Storage/ceph-performance-projects-leading-up-to-jewel-61050682
[18:23] <Guest269> oh cool - thanks!
[18:23] <wes_dillingham> Does the RBD Kernel driver work with exclusive lock and object map features now? And if so, which Kernel is required?
[18:23] <diver_> I'm preparing to do the blind bucket test
[18:24] <diver_> and then will check out the LVM-ssd-inodes backed configuration
[18:24] <diver_> looks promising
[18:24] * derjohn_mob (~aj@46.189.28.56) Quit (Ping timeout: 480 seconds)
[18:24] * diver (~diver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[18:24] <diver_> I have the same target - write intensive cluster with ~26M objects per day
[18:24] <diver_> and an average file size of ~140KB
[18:25] <Guest269> diver_: really - via rgw ?
[18:25] <diver_> yep
[18:25] <diver_> via rgw
[18:25] <Guest269> diver_: we'll have to compare notes if I get this approved then :-)
[18:25] <diver_> sure. check the config I posted
[18:25] <Guest269> diver_: I have to provide a cost analysis to see if we even pursue ceph
[18:26] <Guest269> diver_: which sucks as all my ceph has been rdb
[18:26] <diver_> ha, same. S3 replacement
[18:26] <diver_> AWS S3 I mean
[18:26] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[18:30] <diver_> in that slideshare/mail Mark talks a lot about the BlusStore
[18:30] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Ping timeout: 480 seconds)
[18:30] <diver_> BlueStore*
[18:30] <diver_> but it's very unstable
[18:30] <Guest269> diver_: oh - I'm still reading the email thread...
[18:30] <diver_> OSD's start crashing under a small load of 200 uploads/s already
[18:31] <Guest269> diver_: I might have to look at scaleIO + some S3 gateway, and skylable as well depending on what hte boss wants
[18:31] <diver_> I mean it looks really promising, but I'm afraid it will be production ready in a few years only
[18:32] * davidzlap (~Adium@2605:e000:1313:8003:9989:1512:5027:5bb3) has joined #ceph
[18:32] * davidzlap1 (~Adium@2605:e000:1313:8003:b5cc:bb28:b9eb:a69b) has joined #ceph
[18:32] * davidzlap1 (~Adium@2605:e000:1313:8003:b5cc:bb28:b9eb:a69b) Quit ()
[18:34] <Guest269> diver_: has btrfs stabilized enough for production use with ceph ? last I looked ( a year ago?) ceph was recommending xfs still ?
[18:35] <diver_> o! then please share your experience. I read some thread on reddit about the list of s3 gateways and only skylable looked good (with ceph rgw), but I didn't have enough time to test it
[18:35] <diver_> btrfs doesn't have fsck as far as I know, it's a dead end.
[18:35] <diver_> and in my tests it didn't give any benefits
[18:35] <diver_> just the same performance
[18:36] <diver_> so I decided to stay on the xfs
[18:37] * QuantumBeep (~Scaevolus@5AEAABNRL.tor-irc.dnsbl.oftc.net) Quit ()
[18:37] <Guest269> diver_: will do if we test skylable - min.io popped up too - but it has a different scaling model?
[18:37] <Guest269> diver_: leofs was mentioned too
[18:38] <Guest269> diver_: heh - before ceph, I was running MooseFS - we've come a long way! :-)
[18:39] <diver_> minio.io is not scalable right now... only one root point per instance
[18:39] <diver_> so scaling will have to be handled by the application
[18:39] <Guest269> diver_: oh? I thought you federated their 'instances' for scaling... but I only looked at it for 30 seconds
[18:40] <Guest269> thats a non-starter then
[18:40] <Guest269> diver_: have you tried scaleIO ?
[18:40] <diver_> but it supports FreeBSD (which has stable ZFS), so for scales up to one server (<45TB) it should be good imho
[18:40] * davidzlap (~Adium@2605:e000:1313:8003:9989:1512:5027:5bb3) Quit (Ping timeout: 480 seconds)
[18:40] <diver_> nope
[18:40] * F|1nt (~F|1nt@85-170-91-210.rev.numericable.fr) has joined #ceph
[18:40] <Guest269> diver_: I'd like to eval that for VM block storage when i get a chance
[18:42] <diver_> check this: https://www.youtube.com/watch?v=OopRMUYiY5E
[18:42] <diver_> CERN Ceph experience
[18:43] <diver_> on hammer / RHEL 6, but most of the load is VMs
[18:43] <diver_> quote: 'up to 3PB it just works'
[18:43] <Guest269> diver_: LOL - that pretty much sums up our experience 'it just works' :-)
[18:44] <diver_> until you try to put 1B objects :D
[18:44] <Guest269> diver_: I was even running CephFS in a read heavy wordpress cluster - never a problem :-)
[18:44] <Guest269> diver_: at that point, equipment budget will be my biggest problem :-P
[18:45] <diver_> yeah... small-scale clusters usually don't have problems and work fine on defaults
[18:46] * squizzi_ (~squizzi@nat-pool-rdu-t.redhat.com) has joined #ceph
[18:46] * praveen__ (~praveen@171.61.126.101) Quit (Ping timeout: 480 seconds)
[18:48] <Guest269> yep - I've only done 10-20 nodes/50-75 OSDs - and never had to worry about them
[18:48] <Guest269> this new cluster is 25 nodes and 650 OSDs - so its a huge jump up
[18:55] <diver_> how much memory do you have?
[18:56] <diver_> Mark in http://www.slideshare.net/Red_Hat_Storage/ceph-performance-projects-leading-up-to-jewel-61050682 talked about syncfs and the inode cache. so there are two choices - either increase the cache and get hit by syncfs, or decrease it and avoid syncfs
[18:56] <Guest269> diver_: not sure - I think the boxes are spec'd with 64G(128G) per machine
[18:57] <Guest269> diver_: and at least 4x 10GbE - maybe 8x
[18:57] <diver_> ah, you don't have it yet?
[18:57] <diver_> meaning you're talking about the future production spec?
[18:57] * bara (~bara@121.244.54.198) Quit (Ping timeout: 480 seconds)
[18:58] * cholcombe (~chris@97.93.161.13) has joined #ceph
[18:58] <Guest269> diver_: yeah - assuming I can show a 'reasonable expectation' that it'll work cost effectively vs other solutions
[18:58] * jclm (~jclm@92.66.244.229) has joined #ceph
[18:58] * jclm (~jclm@92.66.244.229) Quit (Remote host closed the connection)
[18:58] <Guest269> (ie, they're trying to get me to do 'rough' performance guides for equipment I don't have/haven't seen! )
[18:59] <Guest269> diver_: hmm - looks like i need to switch to jemalloc for this stuff (looking at the slides you linked..)
[18:59] <diver_> yes
[18:59] <diver_> forgot to mention it
[19:00] <diver_> on rhel7 - install the jemalloc library, uncomment the line in /etc/sysconfig/ceph
[19:00] <diver_> and restart the OSD's
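A rough sketch of that switch on RHEL/CentOS 7; the exact preload line in /etc/sysconfig/ceph varies between builds, so the package name and LD_PRELOAD path below are assumptions - check the comments shipped in the file itself:

    # install the jemalloc library (on RHEL/CentOS 7 the 'jemalloc' package from EPEL is one option)
    yum install -y jemalloc
    # in /etc/sysconfig/ceph, uncomment/adjust the preload line, e.g.:
    #   LD_PRELOAD=/usr/lib64/libjemalloc.so.1
    # then restart the OSDs on that node
    systemctl restart ceph-osd.target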
[19:01] <diver_> btw. how do you install the cluster?
[19:01] <diver_> puppet, chef? ansible?
[19:03] <Guest269> diver_: mix of ansible for system prep and ceph-deploy atm
[19:03] <Guest269> diver_: assuming I get the go-ahead, it will probably be installed via salt (or maybe ansible)
[19:03] <diver_> ceph-ansible playbook?
[19:04] <diver_> I suggest trying https://github.com/ceph/ceph-ansible
[19:04] <diver_> I really like it. re-installing the cluster takes 15 mins
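A rough sketch of how a ceph-ansible run is typically driven (the inventory file name and group_vars layout here are assumptions, not taken from this log):

    git clone https://github.com/ceph/ceph-ansible.git && cd ceph-ansible
    cp site.yml.sample site.yml
    # describe mons/osds, devices, journals and networks in an inventory and group_vars/
    ansible-playbook -i hosts site.yml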
[19:04] <Guest269> diver_: probably - I looked at it, and it didn't seem to want to support my SSD journal being on my root drive, so I just went with ceph-deploy
[19:04] <diver_> with 56 osd's
[19:04] * theancient (~jasonj@173-165-224-105-minnesota.hfc.comcastbusiness.net) has joined #ceph
[19:04] <Guest269> diver_: yeah - its a nice set of playbooks
[19:04] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Ex-Chat)
[19:04] <Guest269> diver_: but my test (home) cluster only has 1 ssd per node...
[19:05] <diver_> SSD on the root? mmm. it should work if you have GPT on the root drive
[19:05] <diver_> then it will just add new partition
[19:05] <Guest269> diver_: yeah - had to get a GPT partition table - so I went with ubuntu as I couldn't figure out how to kickstart a GPT
[19:05] <diver_> it's not a problem for that playbook. just make sure that the SSD has the same dev name on each node
[19:06] <Guest269> diver_: I had to sgdisk -t 4:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 /dev/sda to get ceph to pickup the journal parts too
[19:06] <Guest269> diver_: oh - so the journal partition should NOT exist to begin with? that was probably my mistake
[19:07] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[19:07] * xarses (~xarses@172.56.39.158) has joined #ceph
[19:07] <diver_> no
[19:07] <diver_> ceph-deploy will create it automatically
[19:07] <diver_> and ceph-ansible uses ceph-deploy
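For reference, a minimal sketch of the ceph-deploy form being described, with hypothetical host and device names (data disk /dev/sdb, SSD /dev/sda holding the root filesystem plus journals, GPT with free space):

    # ceph-deploy/ceph-disk carves the journal partition out of the SSD itself,
    # so point it at the whole device rather than pre-creating the partition
    ceph-deploy osd prepare ceph-node1:/dev/sdb:/dev/sda

Activation then happens via ceph-deploy osd activate (or udev) once the partitions exist.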
[19:08] <Guest269> ahh ... it'll be easier when I have 'real' gear with dedicated drives...
[19:08] <diver_> http://pastebin.com/ijfPEB7B
[19:08] <diver_> here's how my lsblk looks
[19:08] <diver_> sde - journals
[19:08] <johnavp1989> Hi all I'm having an issue with Jewel on 16.04 that I can't figure out. Just did a complete rebuild and I have the same behavior
[19:08] <diver_> sdf - SSD-backed bucket indexes
[19:08] * xarses_ (~xarses@64.124.158.3) Quit (Ping timeout: 480 seconds)
[19:08] <diver_> but I will get rid of it if blind buckets help...
[19:09] <johnavp1989> The cluster health stays on HEALTH_WARN and 75 of my PGs always stay active+remapped
[19:10] <Guest269> diver_: do you use CBT for your benchmarking ?
[19:10] <Guest269> diver_: or the intel java thing ?
[19:11] <diver_> johnavp1989, make sure that you have enough OSD's to match the placement policy. i.e. if you have 3x replica then you should have 3 OSD's available. and if you have 'host' failure domain then those 3 OSD's should be on the 3 different hosts
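A few read-only commands that make that check concrete (a sketch; the pool name 'rbd' is an assumption):

    ceph osd tree                      # how many OSDs, spread over how many hosts
    ceph osd pool get rbd size         # replica count the pool wants
    ceph osd crush rule dump           # failure domain (host vs osd) used by the rules
    ceph pg dump_stuck unclean         # which PGs are stuck and where they map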
[19:11] <diver_> cosbench - yeah, tested that
[19:11] <diver_> it only shows the current performance
[19:12] <diver_> but not the overall picture when you fill up the cluster
[19:12] <diver_> I mean that when I recreate the cluster it gives 1400 uploads/s with 100 write and 120KB objects
[19:12] <diver_> from 9 clients with 40 threads each
[19:12] <theancient> i have 4 OSD nodes, 4 drives/daemons each. i am only using radosgw with s3 api user, uploading lots of buckets, 5-40 items per bucket. i do not understand why it is not filling the backend osd disks more evenly http://pastebin.com/cyGdf34h
[19:12] <diver_> helps during the initial tuning
[19:12] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Remote host closed the connection)
[19:13] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:13] <diver_> but for the upload test I use a list of 1000 objects with s3cmd sync
[19:13] <diver_> in 40 threads from the mon servers. runs in docker.
[19:14] <diver_> files are randomly generated. each upload goes to a newly calculated path
[19:14] <diver_> i.e.
[19:14] <diver_> Sep 13 07:53:12 localhost haproxy[3524]: 172.17.0.3:58242 [13/Sep/2016:07:53:12.124] s3~ rados-rgw/ceph-mon01 10/0/1/43/54 200 206 - - ---- 30/30/27/9/0 0/0 "PUT /2016-09-13-11h/a7d27f6631ad175e80c95fdfe2de4598/82317e2aa9a86a3b409c77aa06475588 HTTP/1.1"
[19:14] <diver_> Sep 13 07:53:12 localhost haproxy[3524]: 172.17.0.4:52216 [13/Sep/2016:07:53:12.122] s3~ rados-rgw/ceph-mon01 16/0/0/39/55 200 206 - - ---- 30/30/27/9/0 0/0 "PUT /2016-09-13-11h/782184ef69179f6719d8169cfcd90f3b/3567b0d4657ec8faf8cf4abe23cfb016 HTTP/1.1"
[19:14] <diver_> Sep 13 07:53:12 localhost haproxy[3524]: 172.17.0.5:47868 [13/Sep/2016:07:53:12.136] s3~ rados-rgw/ceph-mon03 15/0/1/27/43 200 206 - - ---- 30/30/26/10/0 0/0 "PUT /2016-09-13-11h/523c94157a88432fefff6a887b1d9bd1/0dd132d1fde6727ba1d7e1a8e67d3a99 HTTP/1.1"
[19:14] <diver_> and it creates a bucket each hour. I can share a simple script if you want
[19:17] * KindOne (kindone@h83.224.28.71.dynamic.ip.windstream.net) has joined #ceph
[19:17] <diver_> just started blind bucket upload test...let's see how it goes
[19:19] <diver_> theancient: can you show the ceph osd tree?
[19:19] * Brochacho (~alberto@97.93.161.13) has joined #ceph
[19:20] <theancient> http://pastebin.com/YR4w4UwC
[19:21] * sleinen2 (~Adium@2001:620:0:82::107) Quit (Ping timeout: 480 seconds)
[19:22] <diver_> and how many PG's do you have for buckets pool?
[19:24] <theancient> im using the default default.rgw.buckets.data that defaulted to pg_num 8. i dont fully understand the buckets -> pool -> pg thing yet, though im trying to. i increased that number to 128 just within the last hour thinking that might help, but it was a blind/stupid change
[19:25] <diver_> yes
[19:25] <diver_> 8 is too low
[19:26] <diver_> I would suggest 512
[19:26] <diver_> for 16 OSD's
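A sketch of that change; pgp_num has to follow pg_num before the data actually rebalances:

    ceph osd pool set default.rgw.buckets.data pg_num 512
    ceph osd pool set default.rgw.buckets.data pgp_num 512
    ceph -s    # watch the remapping/backfill progress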
[19:26] <theancient> i get an error doing that
[19:26] <diver_> yeah. if it doesn't have production data
[19:26] <theancient> ohhh it worked this time
[19:26] <diver_> a, then fine
[19:27] <theancient> this is a test cluster, it will eventually go in to production. does changing that number break things?
[19:27] <diver_> no, but it will rebalance the data
[19:27] <diver_> and cause high load
[19:27] <johnavp1989> diver_: I think you're right. I setup my ceph.conf before running ceph-deploy but it looks like it's been replaced by a default config
[19:27] <diver_> and perf degradation
[19:27] <theancient> so that should spread the existing data more evenly across the available OSDs?
[19:28] <diver_> yes. that's
[19:29] <diver_> the point
[19:29] <johnavp1989> diver_: on hammer i thought i had just placed my ceph.conf in my home dir and ran ceph-deploy. but maybe i'm remembering wrong... do i need to put it in /etc/ceph?
[19:30] <theancient> diver_: thanks! i can already see a pretty large change to the distribution of data.
[19:30] * kefu|afk (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[19:30] <wes_dillingham> does cephfs require a replicated cache tier if you intend to use erasure coded pool or can you directly use an EC pool?
[19:33] <diver_> johnavp1989: the /etc/ceph folder should have the current ceph.conf. if you want to distribute the ceph.conf then you can use one of the nodes as a source (cd /etc/ceph/; ceph-deploy --overwrite-conf config push ceph-node{1..100})
[19:33] <diver_> wes_dillingham: a replicated pool is required only for RBD
[19:33] <diver_> for the object (RGW) you can use EC
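A minimal sketch of what an EC data pool for RGW can look like on Jewel (the profile name and PG count are illustrative, not from this log):

    # a 4+2 profile with host-level failure domain, then an EC data pool using it
    ceph osd erasure-code-profile set ec-4-2 k=4 m=2 ruleset-failure-domain=host
    ceph osd pool create default.rgw.buckets.data 512 512 erasure ec-4-2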
[19:34] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[19:34] <wes_dillingham> thanks diver_
[19:35] <loicd> georgem: ack
[19:36] <diver_> Guest269: blind buckets didn't help much, still see a lot of reads on the spinning disks
[19:36] <diver_> but now rgw bucket index ssd pool is not used
[19:36] <diver_> so I can free up my ssds from that role
[19:37] * xarses (~xarses@172.56.39.158) Quit (Remote host closed the connection)
[19:37] * xarses (~xarses@172.56.39.158) has joined #ceph
[19:37] <loicd> georgem: although I filed http://tracker.ceph.com/issues/15896, I'm not the best person to ask for advice. The people who worked on https://github.com/ceph/ceph/pull/7712 are more likely to be knowledgeable.
[19:38] * sankarshan (~sankarsha@106.216.186.249) has joined #ceph
[19:39] <johnavp1989> diver_: anything I need to do for the changes to take effect?
[19:42] <diver_> em. restart the ceph services. systemctl restart ceph.target on systemd
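On a Jewel/systemd node the restart can also be scoped more narrowly; a short sketch, with the OSD id as a placeholder:

    systemctl restart ceph.target        # everything ceph on this node
    systemctl restart ceph-osd.target    # only the OSDs on this node
    systemctl restart ceph-osd@12        # a single OSD (id 12 is an example)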
[19:43] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[19:44] * doppelgrau1 (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[19:44] * xarses_ (~xarses@64.124.158.3) has joined #ceph
[19:47] * sankarshan (~sankarsha@106.216.186.249) Quit (Quit: Leaving...)
[19:50] * cholcombe (~chris@97.93.161.13) Quit (Ping timeout: 480 seconds)
[19:50] * cholcombe (~chris@97.93.161.2) has joined #ceph
[19:50] * mykola (~Mikolaj@91.245.74.120) Quit (Ping timeout: 480 seconds)
[19:51] * Lokta (~Lokta@carbon.coe.int) Quit (Ping timeout: 480 seconds)
[19:51] * xarses (~xarses@172.56.39.158) Quit (Ping timeout: 480 seconds)
[19:54] * garphy is now known as garphy`aw
[19:58] <Guest269> diver_: thats too bad... maybe the ssd/inode thing will help
[19:59] <johnavp1989> diver_: That worked. I don't think it liked me moving down from 3 replicas to 2, but i deleted and recreated the rbd pool and now it's healthy. thanks for your help
[19:59] <diver_> yes... I will try to disable selinux to see if that affects the performance
[20:00] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[20:00] <diver_> johnavp1989: np
[20:02] * mykola (~Mikolaj@91.245.74.120) has joined #ceph
[20:06] * srk (~Siva@32.97.110.50) Quit (Ping timeout: 480 seconds)
[20:07] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[20:10] * Nephyrin (~utugi____@185.65.134.81) has joined #ceph
[20:13] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:14] <georgem> loicd: thanks, I'll ask my colleague to comment on https://github.com/ceph/ceph/pull/7712
[20:15] * Skaag (~lunix@65.200.54.234) has joined #ceph
[20:19] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[20:21] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[20:23] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:6004:a51a:3c47:611d) Quit (Ping timeout: 480 seconds)
[20:26] <grauzikas> hello again :), i have an openstack + ceph question:
[20:26] <grauzikas> i have installed ceph admin on the openstack controller node, and the controller node has only two interfaces (management for openstack and provider for neutron)
[20:26] <grauzikas> all my compute servers have additional 10GbE interfaces for ceph (i'm planning to install only one osd on each compute node)
[20:26] <grauzikas> my ceph server has a 40GbE interface in the same network as the compute servers' 10GbE interfaces (i'm planning to install up to 24 ssd drives in the ceph server)
[20:26] <grauzikas> so my question: how do i tell ceph admin to use those 10/40GbE interfaces for communication between the compute servers and the ceph servers (for example 10.0.0.0/24 is for ceph and 10.0.1.0/24 is for management)?
[20:26] <grauzikas> when i was deploying ceph from ceph admin it detected only the controller's ips and networks.
[20:26] <grauzikas> should i connect the openstack management switch (1gbps) to the ceph switch (10/40GbE)?
[20:27] <grauzikas> or is there any other way to tell ceph what interface or network it must use
[20:27] <doppelgrau> grauzikas: ceph will use the network defined in the config
[20:27] <doppelgrau> grauzikas: the routing depends on your host-configuration
[20:28] <grauzikas> public network = ?
[20:28] <grauzikas> i'm checking the ceph manual and i can see only public network and cluster network
[20:31] <wes_dillingham> you would put cluster_network = 10.0.0.0/24 in your config
[20:31] <wes_dillingham> cluster network is for OSD to OSD communication
[20:31] <grauzikas> wes_dillingham thank you
[20:31] <wes_dillingham> public_network = 10.0.1.0/24
[20:32] <wes_dillingham> public network is how mons, clients, mds etc communicate with OSDs
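Putting those two lines together, a minimal ceph.conf sketch using the subnets from this discussion:

    [global]
    public_network  = 10.0.1.0/24    # mons, mds and clients talk to OSDs here
    cluster_network = 10.0.0.0/24    # OSD-to-OSD replication and recovery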
[20:32] * srk (~Siva@32.97.110.50) has joined #ceph
[20:35] <SamYaple> grauzikas: of note, the cluster network is not a requirement, but is a recommendation if you want to keep the performance impact to a minimum when replicating/recovering
[20:36] <grauzikas> i have only 1gbps in the public network and 10/40 in the cluster, so i need it :)
[20:36] <SamYaple> grauzikas: ok, but keep in mind the 1gb is going to be a huge limitation for you since thats the network the vms will talk over
[20:37] <grauzikas> one moment
[20:38] <grauzikas> https://snag.gy/rDI15j.jpg
[20:38] <grauzikas> this is my network
[20:38] <imcsk8> after an un grade to 10.2.2 i get this messga: https://paste.fedoraproject.org/427701/91873147/ when i run: cephfs-journal-tool journal reset --force can somebody give me a hand?
[20:39] <grauzikas> there are some badly named things :) where the object and block storages are
[20:40] * Nephyrin (~utugi____@9J5AAARCL.tor-irc.dnsbl.oftc.net) Quit ()
[20:40] <grauzikas> also the subnets are incorrect in the management and ceph networks :)
[20:41] <SamYaple> grauzikas: blue should be your cluster_network and green should be your public_network
[20:42] <doppelgrau> grauzikas: public_network 10.0.0.1, no private network
[20:43] <grauzikas> at this moment i'm thinking of using the same network for both (the green network); only if there are problems with network capacity am i planning to add the blue one
[20:43] <doppelgrau> grauzikas: the cluster network is only needed if you would otherwise run out of bandwidth
[20:44] <SamYaple> i wouldnt say it like that doppelgrau
[20:44] <SamYaple> recovery can normally consume as much bandwidth as you give it
[20:44] <SamYaple> cluster network allows recovery to go much faster in most cases
[20:44] <SamYaple> and (most importantly) impacts the users the least
[20:45] <doppelgrau> SamYaple: replication also runs over the cluster network => writes are affected if the cluster_network has bandwidth problems
[20:46] <doppelgrau> SamYaple: it only improves the reads, but you need twice the number of ports and you can get some "new and funny" error modes if one of the two networks fails
[20:46] * Brochacho (~alberto@97.93.161.13) Quit (Quit: Brochacho)
[20:47] <SamYaple> doppelgrau: you seem to be leaving out the whole "security" part of it too. cluster networks are always recommended
[20:47] <grauzikas> ok, say there is no blue network and i'm using only green and black (black is for openstack communication), and i install ceph admin on controller0
[20:47] <grauzikas> am i going to get load on the black network?
[20:48] <grauzikas> i mean, the green network would be used for the osd's and black for the mon and admin
[20:49] <doppelgrau> SamYaple: which security part? The clients can always connect to the osds; only part of the traffic (osd - osd) is routed over different infrastructure
[20:49] <SamYaple> grauzikas: if you set it up that way, black becomes your public network and green becomes your cluster network, black will be how all the vms connect to ceph (it will get saturated)
[20:49] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Read error: Connection reset by peer)
[20:49] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[20:49] <SamYaple> doppelgrau: all the replication traffic is unencrypted, hence why a segregated cluster network is recommended
[20:50] <doppelgrau> SamYaple: all traffic is unencrypted, only authenticated
[20:51] <doppelgrau> SamYaple: in the diagram from grauzikas qemu (kvm/xen) will connect over the green network
[20:51] * oms101_ (~oms101@p20030057EA033B00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[20:51] <SamYaple> doppelgrau: not with the way he is talking about it, having the mons on the black network
[20:51] <doppelgrau> SamYaple: ok, mons on the black doesn't make sense
[20:52] <SamYaple> doppelgrau: we should stop talking at and past each other, i think we're both on the same page (minus the 'no security benefit for a cluster network' part - you can't DoS the osds with a cluster network, but you can with a public-only setup)
[20:54] <doppelgrau> SamYaple: how do you define a "public only" network? for me that is not connected to the internet, only to the osds, mons and the clients (I guess qemu emulating disks for xen/kvm) => no third-party-controlled component
[20:55] <SamYaple> doppelgrau: sure, if you have a 100% controlled environment and the clients are all controlled as well, then yea you can trust them all
[20:55] <SamYaple> if the client isnt 100% managed and trusted, with no cluster network that client could disrupt the heartbeats between the osds and BAM
[20:57] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[20:59] <doppelgrau> SamYaple: on the other hand, if the attacker had that much bandwidth, simply kill the mons, or with less bandwidth use STP/ARP/NDP to "kill" the mons
[21:02] <SamYaple> its unlikely one client could kill the bandwidth to two mons. that would mean the client would have twice the bandwidth of the mons. DDoS is always a thing, but thats not as likely. the other stuff can be mitigated
[21:02] <SamYaple> but arguing that the cluster network has no benefit, security or otherwise, but its just for if you run out of bandwidth is silly
[21:03] <doppelgrau> SamYaple: but what about more sophisticated attacks - arp-spoofing, changing topology with stp?
[21:03] <SamYaple> those aren't sophisticated, they are old school and easily prevented
[21:03] * valeech (~valeech@97.93.161.13) Quit (Quit: valeech)
[21:04] <doppelgrau> SamYaple: and if you launch the DoS against the mon itself, not stupid UDP, I would nearly bet that 10 GBit/s is enough to take out two mons
[21:05] <doppelgrau> checking auth takes resources
[21:06] <SamYaple> alright, well I'm done here. I've pointed out how there is both a security and a performance benefit to having a cluster network even if you are _not_ running out of bandwidth.
[21:06] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[21:07] <doppelgrau> I would not deny that, but I think the gain is so small that it does not make sense to spend two additional 40GBit ports on it (and in addition get some new failure modes)
[21:07] <doppelgrau> especially not in such a controlled environment
[21:09] * [USS]Lupo (~USSLupo@68-202-189-32.res.bhn.net) has joined #ceph
[21:10] <Guest269> SamYaple: oh hey - I worked with some of your ceph stuff for Kolla - nice to 'meet' you !
[21:10] <Guest269> SamYaple: your code/blog posts were really helpful!
[21:11] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[21:13] <SamYaple> Guest269: awe thanks. nice to hear that. the ceph stuff in kolla turned out alright. in the end there was always a few things that bugged me
[21:13] <grauzikas> also, which servers require root login? the controller to all servers (because ceph admin lives on it), is that it? i'm asking because i'm using ubuntu, where root login is disabled by default
[21:14] <SamYaple> grauzikas: root login is disabled without a password by default, but ceph doesnt need or want root access anymore. just a ceph user with a properly setup sudoers file
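A sketch of that setup on each node, with 'cephdeploy' as a placeholder username:

    useradd -m -s /bin/bash cephdeploy
    passwd cephdeploy
    # passwordless sudo for the deploy user
    echo "cephdeploy ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/cephdeploy
    chmod 0440 /etc/sudoers.d/cephdeploy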
[21:14] <grauzikas> today when i was trying to deploy, it tried to log in as root. of course maybe there is a way to tell ceph-deploy what user it should use
[21:15] * F|1nt (~F|1nt@85-170-91-210.rev.numericable.fr) Quit (Quit: Oups, just gone away...)
[21:15] <doppelgrau> grauzikas: I do not use ceph-deploy, but with ansible and other config management-tools, you can use sudo
[21:16] * vend3r (~Cue@dd.85.7a9f.ip4.static.sl-reverse.com) has joined #ceph
[21:16] <grauzikas> ok, thank you
[21:18] * blairo (blairo@open.source.rocks.my.socks.firrre.com) has joined #ceph
[21:18] <SamYaple> grauzikas: that is all configured by your .ssh config file
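e.g. a minimal ~/.ssh/config entry on the admin node so ceph-deploy logs in as that user (the hostname and username are placeholders):

    Host ceph-node1
        Hostname ceph-node1
        User cephdeploy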
[21:19] * CephFan1 (~textual@68-233-224-176.static.hvvc.us) has joined #ceph
[21:21] * diver (~diver@216.85.162.34) has joined #ceph
[21:23] * cathode (~cathode@50.232.215.114) has joined #ceph
[21:24] * huats (~quassel@stuart.objectif-libre.com) Quit (Ping timeout: 480 seconds)
[21:27] * garphy`aw is now known as garphy
[21:28] * diver_ (~diver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[21:29] * diver (~diver@216.85.162.34) Quit (Ping timeout: 480 seconds)
[21:32] <imcsk8> hello, i have a problem: after an upgrade to 10.2.2 ceph-osd is running on my node, but when i do a ceph osd tree i get this: 0 1.67000 osd.0 down 0 1.00000 - can somebody give me a hand??
[21:38] * [USS]Lupo (~USSLupo@68-202-189-32.res.bhn.net) has left #ceph
[21:39] * huats (~quassel@stuart.objectif-libre.com) has joined #ceph
[21:43] * diver (~diver@cpe-98-26-71-226.nc.res.rr.com) has joined #ceph
[21:43] * diver (~diver@cpe-98-26-71-226.nc.res.rr.com) Quit (Remote host closed the connection)
[21:43] * diver (~diver@cpe-2606-A000-111B-C12B-2C6B-337F-50B1-4EEC.dyn6.twc.com) has joined #ceph
[21:43] * diver (~diver@cpe-2606-A000-111B-C12B-2C6B-337F-50B1-4EEC.dyn6.twc.com) Quit (Remote host closed the connection)
[21:44] * diver (~diver@cpe-2606-A000-111B-C12B-2C6B-337F-50B1-4EEC.dyn6.twc.com) has joined #ceph
[21:44] * diver (~diver@cpe-2606-A000-111B-C12B-2C6B-337F-50B1-4EEC.dyn6.twc.com) Quit (Remote host closed the connection)
[21:44] * diver (~diver@cpe-2606-A000-111B-C12B-2C6B-337F-50B1-4EEC.dyn6.twc.com) has joined #ceph
[21:45] * diver_ (~diver@cpe-2606-A000-111B-C12B-C9D2-AF36-6219-A917.dyn6.twc.com) has joined #ceph
[21:45] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Quit: Leaving.)
[21:45] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[21:45] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[21:46] <johnavp1989> Hello, does anyone have up-to-date information for ceph-fuse 10.2.2?
[21:46] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:46] * vend3r (~Cue@dd.85.7a9f.ip4.static.sl-reverse.com) Quit ()
[21:47] <johnavp1989> The man page appears to be outdated. If i run ceph-fuse --help the options don't match up with what's in the man page and the help output is not very.... well helpful
[21:47] <johnavp1989> Docs online that I found show the old options too
[21:52] * diver (~diver@cpe-2606-A000-111B-C12B-2C6B-337F-50B1-4EEC.dyn6.twc.com) Quit (Ping timeout: 480 seconds)
[21:53] * diver_ (~diver@cpe-2606-A000-111B-C12B-C9D2-AF36-6219-A917.dyn6.twc.com) Quit (Ping timeout: 480 seconds)
[21:56] * northrup (~northrup@173.14.101.193) Quit (Ping timeout: 480 seconds)
[22:03] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[22:07] * garphy is now known as garphy`aw
[22:12] * thansen (~thansen@17.253.sfcn.org) has joined #ceph
[22:13] * mykola (~Mikolaj@91.245.74.120) Quit (Quit: away)
[22:17] * vbellur (~vijay@71.234.224.255) has joined #ceph
[22:20] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:21] * TMM (~hp@dhcp-077-248-009-229.chello.nl) has joined #ceph
[22:21] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[22:25] * Discovery (~Discovery@109.235.52.6) has joined #ceph
[22:27] <imcsk8> johnavp1989: this should be the latest docs: http://docs.ceph.com/docs/jewel/cephfs/fuse/
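The basic invocation from those docs looks roughly like this (the monitor address and mount point are placeholders, and a client keyring is assumed to already be in /etc/ceph):

    sudo mkdir -p /mnt/cephfs
    sudo ceph-fuse -m mon1.example.com:6789 /mnt/cephfs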
[22:29] <johnavp1989> imcsk8: strange... this is what I see in the help http://paste.openstack.org/show/574808/
[22:36] <imcsk8> which version are you using?
[22:36] * diver (~diver@cpe-2606-A000-111B-C12B-888-45C1-C22A-59AD.dyn6.twc.com) has joined #ceph
[22:36] * dmick (~dmick@206.169.83.146) has joined #ceph
[22:37] * dmick (~dmick@206.169.83.146) has left #ceph
[22:38] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[22:45] * diver (~diver@cpe-2606-A000-111B-C12B-888-45C1-C22A-59AD.dyn6.twc.com) Quit (Remote host closed the connection)
[22:45] * diver (~diver@cpe-98-26-71-226.nc.res.rr.com) has joined #ceph
[22:46] * aNuposic (~aNuposic@192.55.54.38) has joined #ceph
[22:50] <aNuposic> Hi Folks, I am trying to use radosgw with Swift, but when Keystone creates an object on radosgw it is unable to get the temp_url_key. I followed https://github.com/openstack/ironic/blob/8e81b964a5fc815a7d42be749331571f4bb56be1/doc/source/deploy/radosgw.rst - do i need to do any extra setup?
[22:51] * davidz (~davidz@2605:e000:1313:8003:4c6:c0a8:5969:efdb) has joined #ceph
[22:51] * northrup (~northrup@50-249-151-243-static.hfc.comcastbusiness.net) has joined #ceph
[22:53] * diver (~diver@cpe-98-26-71-226.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[22:55] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) has joined #ceph
[22:56] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[23:00] * mattbenjamin (~mbenjamin@121.244.54.198) Quit (Ping timeout: 480 seconds)
[23:02] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[23:03] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[23:03] * kristenc (~kristen@jfdmzpr06-ext.jf.intel.com) Quit (Remote host closed the connection)
[23:03] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[23:11] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Ping timeout: 480 seconds)
[23:12] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Ping timeout: 480 seconds)
[23:12] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Ping timeout: 480 seconds)
[23:15] * diver (~diver@cpe-98-26-71-226.nc.res.rr.com) has joined #ceph
[23:16] * srk (~Siva@32.97.110.50) Quit (Ping timeout: 480 seconds)
[23:16] * diver_ (~diver@cpe-2606-A000-111B-C12B-710B-A724-C591-36D1.dyn6.twc.com) has joined #ceph
[23:17] * squizzi_ (~squizzi@nat-pool-rdu-t.redhat.com) Quit (Ping timeout: 480 seconds)
[23:18] * kristenc (~kristen@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[23:19] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[23:22] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[23:23] * diver (~diver@cpe-98-26-71-226.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:24] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[23:25] * srk (~Siva@32.97.110.50) has joined #ceph
[23:26] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[23:29] * huats (~quassel@stuart.objectif-libre.com) Quit (Ping timeout: 480 seconds)
[23:32] * CephFan1 (~textual@68-233-224-176.static.hvvc.us) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz???)
[23:37] * salwasser (~Adium@2601:197:101:5cc1:34a9:b556:f3be:51e6) has joined #ceph
[23:38] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) has joined #ceph
[23:42] * wjw-freebsd2 (~wjw@smtp.digiware.nl) has joined #ceph
[23:42] * salwasser (~Adium@2601:197:101:5cc1:34a9:b556:f3be:51e6) Quit (Read error: Connection reset by peer)
[23:43] * salwasser (~Adium@2601:197:101:5cc1:34a9:b556:f3be:51e6) has joined #ceph
[23:45] * kristenc (~kristen@jfdmzpr01-ext.jf.intel.com) Quit (Quit: Leaving)
[23:47] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[23:49] * Whiskey (~whiskey@ip72-196-222-75.dc.dc.cox.net) has joined #ceph
[23:50] * Whiskey (~whiskey@ip72-196-222-75.dc.dc.cox.net) has left #ceph
[23:54] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[23:54] * jarrpa (~jarrpa@67-4-148-200.mpls.qwest.net) has joined #ceph
[23:56] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[23:57] * srk (~Siva@32.97.110.50) Quit (Ping timeout: 480 seconds)
[23:58] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.