#ceph IRC Log


IRC Log for 2015-06-01

Timestamps are in GMT/BST.

[0:10] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[0:12] * yanzheng (~zhyan@125.71.109.76) has joined #ceph
[0:27] * johanni (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[0:28] * Jourei (~PappI@8Q4AAA6SN.tor-irc.dnsbl.oftc.net) Quit ()
[0:31] * yanzheng (~zhyan@125.71.109.76) Quit (Quit: This computer has gone to sleep)
[0:38] * nectro (~chatzilla@cpe-65-30-51-41.wi.res.rr.com) has joined #ceph
[0:39] * phantomcircuit (~phantomci@smartcontracts.us) has joined #ceph
[0:48] * johanni (~johanni@24.4.41.97) has joined #ceph
[0:57] * LeaChim (~LeaChim@host86-163-124-72.range86-163.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:00] * flisky (~Thunderbi@118.186.147.10) has joined #ceph
[1:05] * macjack (~macjack@61.57.127.209) Quit (Remote host closed the connection)
[1:08] * flisky (~Thunderbi@118.186.147.10) Quit (Ping timeout: 480 seconds)
[1:14] * oms101_ (~oms101@p20030057EA35BC00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:15] * oro (~oro@84-72-20-79.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:16] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:23] * oms101_ (~oms101@p20030057EA701300C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:25] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) has joined #ceph
[1:26] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Connection reset by peer)
[1:26] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[1:28] * Vale (~Spessu@tor-exit1.arbitrary.ch) has joined #ceph
[1:33] * mtanski (~mtanski@65.244.82.98) Quit (Read error: Connection reset by peer)
[1:34] * johanni (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[1:34] * mtanski (~mtanski@65.244.82.98) has joined #ceph
[1:58] * Vale (~Spessu@3DDAAAHDG.tor-irc.dnsbl.oftc.net) Quit ()
[2:02] * segutier (~segutier@209.156.240.2) has joined #ceph
[2:16] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) Quit (Quit: It's just that easy)
[2:20] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[2:28] * johanni (~johanni@24.4.41.97) has joined #ceph
[2:28] * oracular (~sese_@aurora.enn.lu) has joined #ceph
[2:33] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[2:37] * segutier (~segutier@209.156.240.2) Quit (Quit: segutier)
[2:42] * nsoffer (~nsoffer@bzq-79-177-255-248.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[2:47] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[2:48] * markl (~mark@knm.org) Quit (Ping timeout: 480 seconds)
[2:55] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:58] * oracular (~sese_@9S0AAADCW.tor-irc.dnsbl.oftc.net) Quit ()
[2:59] * johanni_ (~johanni@24.4.41.97) has joined #ceph
[3:00] * Bwana (~Jourei@tor-exit-node-2.cs.usu.edu) has joined #ceph
[3:00] * johanni (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[3:01] * fam_away is now known as fam
[3:15] * johanni_ (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[3:17] * segutier (~segutier@209.156.240.2) has joined #ceph
[3:27] * sankarshan (~sankarsha@183.87.39.242) has joined #ceph
[3:28] * Bwana (~Jourei@3DDAAAHGH.tor-irc.dnsbl.oftc.net) Quit ()
[3:28] * airsoftglock (~zc00gii@198.23.202.71) has joined #ceph
[3:29] * evanjfraser (~quassel@122.252.188.1) has joined #ceph
[3:36] * nhm_ (~nhm@184-97-242-33.mpls.qwest.net) has joined #ceph
[3:37] * nhm (~nhm@184-97-242-33.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[3:38] * segutier (~segutier@209.156.240.2) Quit (Quit: segutier)
[3:40] * nectro (~chatzilla@cpe-65-30-51-41.wi.res.rr.com) Quit (Remote host closed the connection)
[3:45] * kefu (~kefu@114.86.210.96) has joined #ceph
[3:54] * segutier (~segutier@209.156.240.2) has joined #ceph
[3:58] * airsoftglock (~zc00gii@5NZAAC1JH.tor-irc.dnsbl.oftc.net) Quit ()
[3:58] * cyphase1 (~andrew_m@edwardsnowden1.torservers.net) has joined #ceph
[4:07] * segutier (~segutier@209.156.240.2) Quit (Quit: segutier)
[4:08] * georgem (~Adium@24.140.226.3) Quit (Quit: Leaving.)
[4:09] * segutier (~segutier@209.156.240.2) has joined #ceph
[4:16] * johanni (~johanni@24.4.41.97) has joined #ceph
[4:18] * kefu (~kefu@114.86.210.96) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:20] * kefu (~kefu@114.86.210.96) has joined #ceph
[4:27] * yanzheng (~zhyan@125.71.109.76) has joined #ceph
[4:28] * cyphase1 (~andrew_m@9S0AAADG1.tor-irc.dnsbl.oftc.net) Quit ()
[4:28] * lobstar (~starcoder@tor-exit.eecs.umich.edu) has joined #ceph
[4:31] * zhaochao (~zhaochao@124.202.190.2) has joined #ceph
[4:41] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[4:42] * segutier (~segutier@209.156.240.2) Quit (Quit: segutier)
[4:42] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[4:42] * segutier (~segutier@209.156.240.2) has joined #ceph
[4:44] * lobstar (~starcoder@9S0AAADII.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[4:45] * biGGer (~dontron@spftor1e1.privacyfoundation.ch) has joined #ceph
[4:46] * ketor (~ketor@182.48.117.114) has joined #ceph
[4:51] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:53] * segutier (~segutier@209.156.240.2) Quit (Quit: segutier)
[4:55] * segutier (~segutier@209.156.240.2) has joined #ceph
[5:01] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[5:07] * deepsa (~Deependra@00013525.user.oftc.net) has joined #ceph
[5:07] * segutier (~segutier@209.156.240.2) Quit (Quit: segutier)
[5:08] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) has joined #ceph
[5:13] * segutier (~segutier@209.156.240.2) has joined #ceph
[5:13] * segutier (~segutier@209.156.240.2) Quit ()
[5:15] * biGGer (~dontron@9S0AAADJE.tor-irc.dnsbl.oftc.net) Quit ()
[5:15] * demonspork (~Guest1390@109.163.235.246) has joined #ceph
[5:21] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[5:24] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) Quit (Quit: Leaving)
[5:29] * Vacuum__ (~Vacuum@i59F79AB5.versanet.de) has joined #ceph
[5:36] * Vacuum_ (~Vacuum@89.247.158.243) Quit (Ping timeout: 480 seconds)
[5:45] * demonspork (~Guest1390@9S0AAADKU.tor-irc.dnsbl.oftc.net) Quit ()
[5:46] * phyphor (~smf68@tor-exit4-readme.dfri.se) has joined #ceph
[5:46] * haomaiwang (~haomaiwan@114.111.166.250) Quit (Read error: Connection reset by peer)
[5:47] * haomaiwang (~haomaiwan@114.111.166.250) has joined #ceph
[5:48] * squ (~Thunderbi@46.109.36.167) has joined #ceph
[5:49] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[5:53] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:59] * haomaiwang (~haomaiwan@114.111.166.250) Quit (Quit: Leaving...)
[6:00] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[6:02] * haomaiwang (~haomaiwan@114.111.166.250) has joined #ceph
[6:15] * phyphor (~smf68@7R2AABFXR.tor-irc.dnsbl.oftc.net) Quit ()
[6:15] * dug (~PeterRabb@176.10.104.240) has joined #ceph
[6:19] * ketor (~ketor@182.48.117.114) has joined #ceph
[6:25] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[6:25] * ketor (~ketor@182.48.117.114) has joined #ceph
[6:28] * aarontc (~aarontc@2001:470:e893::1:1) Quit (Ping timeout: 480 seconds)
[6:36] * fam is now known as fam_away
[6:36] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[6:38] * fam_away is now known as fam
[6:45] * dug (~PeterRabb@3DDAAAHM3.tor-irc.dnsbl.oftc.net) Quit ()
[6:46] * Mattress (~xENO_@lumumba.torservers.net) has joined #ceph
[6:47] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:48] * rdas (~rdas@121.244.87.116) Quit ()
[6:49] * amote (~amote@121.244.87.116) has joined #ceph
[6:55] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[6:55] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[6:57] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:04] * aarontc (~aarontc@2001:470:e893::1:1) has joined #ceph
[7:06] * ketor (~ketor@182.48.117.114) has joined #ceph
[7:12] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[7:15] * Mattress (~xENO_@9S0AAADOY.tor-irc.dnsbl.oftc.net) Quit ()
[7:16] * AotC (~MJXII@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[7:21] * rotbeard (~redbeard@x5f74c8b8.dyn.telefonica.de) has joined #ceph
[7:25] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:27] * treenerd (~treenerd@2001:4dd0:ff00:809d:76e5:bff:feb7:bcfa) has joined #ceph
[7:27] * treenerd (~treenerd@2001:4dd0:ff00:809d:76e5:bff:feb7:bcfa) Quit ()
[7:36] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:41] * aj__ (~aj@88.128.80.157) has joined #ceph
[7:44] * kefu (~kefu@114.86.210.96) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:45] * AotC (~MJXII@9S0AAADQN.tor-irc.dnsbl.oftc.net) Quit ()
[7:45] * sixofour (~Xa@37.187.129.166) has joined #ceph
[7:46] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:46] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[7:48] * flisky (~Thunderbi@101.36.77.56) has joined #ceph
[7:52] * shohn (~shohn@dslb-094-223-167-060.094.223.pools.vodafone-ip.de) has joined #ceph
[7:59] * flisky (~Thunderbi@101.36.77.56) Quit (Quit: flisky)
[7:59] * shang (~ShangWu@175.41.48.77) has joined #ceph
[8:04] * nsoffer (~nsoffer@bzq-84-111-112-230.cablep.bezeqint.net) has joined #ceph
[8:05] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:07] * bobrik (~bobrik@83.243.64.45) has joined #ceph
[8:07] * overclk (~overclk@121.244.87.117) has joined #ceph
[8:10] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit (Quit: We be chillin - IceChat style)
[8:10] * overclk (~overclk@121.244.87.117) Quit ()
[8:10] * overclk (~overclk@121.244.87.117) has joined #ceph
[8:12] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:14] * dis (~dis@109.110.66.238) Quit (Ping timeout: 480 seconds)
[8:15] * sixofour (~Xa@3DDAAAHQC.tor-irc.dnsbl.oftc.net) Quit ()
[8:16] * Bwana (~tunaaja@nx-01.tor-exit.network) has joined #ceph
[8:19] * kefu (~kefu@114.86.210.96) has joined #ceph
[8:20] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Quit: Miouge)
[8:22] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[8:27] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) has joined #ceph
[8:29] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:30] * kawa2014 (~kawa@2-229-47-79.ip195.fastwebnet.it) has joined #ceph
[8:31] * cok (~chk@2a02:2350:18:1010:44b4:2857:e4bd:f152) has joined #ceph
[8:35] * Miouge (~Miouge@h-72-233.a163.priv.bahnhof.se) Quit (Ping timeout: 480 seconds)
[8:36] * aj__ (~aj@88.128.80.157) Quit (Ping timeout: 480 seconds)
[8:37] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[8:37] * ketor (~ketor@182.48.117.114) has joined #ceph
[8:38] * shang (~ShangWu@175.41.48.77) has joined #ceph
[8:41] * shang_ (~ShangWu@175.41.48.77) has joined #ceph
[8:42] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[8:43] * ketor (~ketor@182.48.117.114) has joined #ceph
[8:45] * Bwana (~tunaaja@5NZAAC1V2.tor-irc.dnsbl.oftc.net) Quit ()
[8:45] * spidu_ (~Vidi@manning1.torservers.net) has joined #ceph
[8:46] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[8:49] * dis (~dis@109.110.66.238) has joined #ceph
[8:53] * frednass (~fred@dn-infra-12.lionnois.univ-lorraine.fr) has left #ceph
[8:55] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:57] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[8:59] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[9:00] * overclk (~overclk@121.244.87.117) has joined #ceph
[9:02] * mykola (~Mikolaj@91.225.201.137) has joined #ceph
[9:03] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[9:03] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[9:04] * dgurtner (~dgurtner@178.197.231.65) has joined #ceph
[9:05] * frednass (~fred@dn-infra-12.lionnois.univ-lorraine.fr) has joined #ceph
[9:06] * johanni (~johanni@24.4.41.97) Quit (Ping timeout: 480 seconds)
[9:06] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[9:07] * ketor (~ketor@182.48.117.114) has joined #ceph
[9:07] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[9:08] * thomnico (~thomnico@2a01:e35:8b41:120:8c43:c1e2:ddca:295c) has joined #ceph
[9:10] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[9:12] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:13] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[9:13] <Be-El> hi
[9:13] <SamYaple> hello Be-El
[9:13] * ketor (~ketor@182.48.117.114) has joined #ceph
[9:13] * overclk (~overclk@121.244.87.117) has joined #ceph
[9:15] * treenerd (~treenerd@2001:4dd0:ff00:809d:76e5:bff:feb7:bcfa) has joined #ceph
[9:15] * spidu_ (~Vidi@9S0AAADVG.tor-irc.dnsbl.oftc.net) Quit ()
[9:16] * PeterRabbit (~Hidendra@lumumba.torservers.net) has joined #ceph
[9:18] <Be-El> i'm looking for a way to list files that are stored in a certain pool in cephfs. i cleaned up some directories, but there are still some files left, and i need to find out what these files are
[9:21] * aj__ (~aj@fw.gkh-setu.de) has joined #ceph
[9:21] * ketor (~ketor@182.48.117.114) Quit (Ping timeout: 480 seconds)
[9:22] * analbeard (~shw@support.memset.com) has joined #ceph
[9:24] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[9:25] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) has joined #ceph
[9:26] * Hemanth (~Hemanth@121.244.87.117) Quit (Quit: Leaving)
[9:27] * ketor (~ketor@182.48.117.114) has joined #ceph
[9:34] * morse_ (~morse@supercomputing.univpm.it) has joined #ceph
[9:34] * goberle (~goberle@195.154.71.151) Quit (Read error: Connection reset by peer)
[9:34] * huats_ (~quassel@stuart.objectif-libre.com) Quit (Read error: Connection reset by peer)
[9:34] * morse (~morse@supercomputing.univpm.it) Quit (Read error: Connection reset by peer)
[9:35] * huats (~quassel@stuart.objectif-libre.com) has joined #ceph
[9:35] * goberle (~goberle@mid.ygg.tf) has joined #ceph
[9:35] * oro (~oro@84-72-20-79.dclient.hispeed.ch) has joined #ceph
[9:40] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:42] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[9:42] * analbeard (~shw@support.memset.com) has left #ceph
[9:43] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:43] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:45] * PeterRabbit (~Hidendra@9S0AAADXT.tor-irc.dnsbl.oftc.net) Quit ()
[9:45] * pepzi (~Guest1390@tor-exit-readme.hands.com) has joined #ceph
[9:46] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:47] * fmanana (~fdmanana@bl13-144-52.dsl.telepac.pt) has joined #ceph
[9:48] * oro (~oro@84-72-20-79.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:53] * treenerd (~treenerd@2001:4dd0:ff00:809d:76e5:bff:feb7:bcfa) Quit (Ping timeout: 480 seconds)
[9:59] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[10:01] * treenerd (~treenerd@2001:4dd0:ff00:809d:76e5:bff:feb7:bcfa) has joined #ceph
[10:01] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[10:01] <anorak> Be-El: the closest way I know is "ceph osd map Pool_name object_name". It is not exactly what you are looking for ....
[10:01] * sleinen (~Adium@130.59.94.127) has joined #ceph
[10:02] <anorak> this will only tell you in which PG your data is stored
[10:02] <Be-El> anorak: does cephfs support inode lookup? rados ls lists the objects in the pool, and afaik the first part of the objects' name is the inode id
[10:02] <anorak> perhaps a bit of scripting by replacing "Pool_name" could potentially give you better results
[10:03] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[10:03] <anorak> Be-El: I am not sure about cephfs supporting inode. Like i said...that is the closest thing i know. :)
[10:05] <yanzheng> cephfs supports looking up an inode by inode number
[10:06] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[10:06] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[10:06] <Be-El> yanzheng: what's the command to do a lookup?
[10:09] * ketor (~ketor@182.48.117.114) has joined #ceph
[10:09] <yanzheng> no command, there is an API in libcephfs
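A rough sketch of the approach discussed above, assuming the object names in the data pool are prefixed with the file's inode number in hex (as Be-El notes) and that the filesystem is mounted at /mnt/cephfs (a placeholder path):

    # list the unique inode prefixes of the objects still left in the pool
    rados -p <pool> ls | cut -d. -f1 | sort -u
    # convert one hex inode to decimal and look up its path in the mounted filesystem
    printf '%d\n' 0x10000000005
    find /mnt/cephfs -inum $(printf '%d' 0x10000000005)

The inode 0x10000000005 is only an example value; the libcephfs lookup yanzheng mentions would avoid the find scan, but it needs a small program linked against the library rather than a ready-made command.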
[10:09] * sleinen (~Adium@130.59.94.127) Quit (Ping timeout: 480 seconds)
[10:10] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[10:10] * shang_ (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[10:12] * mykola (~Mikolaj@91.225.201.137) Quit (Remote host closed the connection)
[10:14] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[10:15] * pepzi (~Guest1390@5NZAAC11V.tor-irc.dnsbl.oftc.net) Quit ()
[10:18] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[10:19] * sleinen (~Adium@130.59.94.127) has joined #ceph
[10:20] * CorneliousJD|AtWork (~JWilbur@tor.metaether.net) has joined #ceph
[10:21] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[10:22] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[10:24] * rotbeard (~redbeard@x5f74c8b8.dyn.telefonica.de) Quit (Quit: Leaving)
[10:27] * sleinen (~Adium@130.59.94.127) Quit (Ping timeout: 480 seconds)
[10:28] * ketor (~ketor@182.48.117.114) has joined #ceph
[10:29] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[10:31] * shohn (~shohn@dslb-094-223-167-060.094.223.pools.vodafone-ip.de) has left #ceph
[10:31] * shohn (~shohn@dslb-094-223-167-060.094.223.pools.vodafone-ip.de) has joined #ceph
[10:33] * rotbeard (~redbeard@x5f74c8b8.dyn.telefonica.de) has joined #ceph
[10:33] * ismell (~ismell@host-24-52-35-110.beyondbb.com) Quit (Ping timeout: 480 seconds)
[10:39] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[10:39] * ketor (~ketor@182.48.117.114) has joined #ceph
[10:48] * shang_ (~ShangWu@42-72-168-227.EMOME-IP.hinet.net) has joined #ceph
[10:50] * CorneliousJD|AtWork (~JWilbur@8Q4AAA6ZI.tor-irc.dnsbl.oftc.net) Quit ()
[10:50] * tuhnis (~dicko@6.tor.exit.babylon.network) has joined #ceph
[10:55] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:57] * cloud_vision (~cloud_vis@bzq-79-182-50-62.red.bezeqint.net) has joined #ceph
[10:58] * oro (~oro@2001:620:20:16:2d79:5b85:5de5:96c6) has joined #ceph
[10:59] * reistlin (52768069@107.161.19.109) has joined #ceph
[10:59] <reistlin> Hi all!
[10:59] <reistlin> Could someone explain to me how snapshots work?
[11:00] <cloud_vision> Hi, I'm having an issue unmapping an rbd; basically nothing I tried helped to remove the mapped image (it's also causing load). I'm not sure if it happened due to an XFS issue or a bug, and I just try to unmap the device with no success
[11:00] <cloud_vision> rbd unmap /dev/rbd1 --> rbd: unmap failed: (16) Device or resource busy
[11:00] <reistlin> If we have an rbd image with object size 32 MB, what happens if we write only 4 KB into this object?
[11:01] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[11:01] * ketor (~ketor@182.48.117.114) has joined #ceph
[11:01] <cloud_vision> it has watchers but they are not cleared after reboot of the clients
[11:01] <cloud_vision> watcher=192.168.0.115:0/4088400941 client.2565727 cookie=1
[11:01] <cloud_vision> watcher=192.168.0.125:0/4117320146 client.9657480 cookie=1
[11:01] <cloud_vision> watcher=192.168.0.125:0/4117320146 client.9657480 cookie=2
[11:01] <reistlin> will it read 32 MB, write 32 MB and then write the 4 KB (COW)
[11:01] <reistlin> or just write 4kb?
[11:02] * shang_ (~ShangWu@42-72-168-227.EMOME-IP.hinet.net) Quit (Read error: Connection reset by peer)
[11:03] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[11:07] <cloud_vision> is there any way to force close watchers or unmap --force?
[11:08] <cloud_vision> cant get rid of the mapped rbd device :)
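A hedged sketch of how the stale watchers above might be cleared, reusing the watcher address cloud_vision pasted; the image name and header-object id are placeholders, and blacklisting only makes the watch expire, it does not free the device instantly:

    rbd info <pool>/<image>                        # note the image format / block_name_prefix
    rados -p <pool> listwatchers <image>.rbd       # header object for format 1 images
    rados -p <pool> listwatchers rbd_header.<id>   # header object for format 2 images
    # if the watching clients really are gone, blacklist their address so the watch times out
    ceph osd blacklist add 192.168.0.115:0/4088400941
    rbd unmap /dev/rbd1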
[11:10] * sleinen1 (~Adium@2001:620:0:82::102) Quit (Read error: Connection reset by peer)
[11:11] * NotExist (~notexist@kvps-180-235-255-92.secure.ne.jp) Quit (Remote host closed the connection)
[11:12] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[11:13] * vbellur (~vijay@121.244.87.124) has joined #ceph
[11:13] * ketor (~ketor@182.48.117.114) has joined #ceph
[11:13] * madkiss2 (~madkiss@2001:6f8:12c3:f00f:5c92:22c6:c9aa:839f) has joined #ceph
[11:15] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[11:17] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:39af:b4c8:6d9b:324d) Quit (Ping timeout: 480 seconds)
[11:20] * thomnico (~thomnico@2a01:e35:8b41:120:8c43:c1e2:ddca:295c) Quit (Quit: Ex-Chat)
[11:20] * reistlin (52768069@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[11:20] * tuhnis (~dicko@9S0AAAD4J.tor-irc.dnsbl.oftc.net) Quit ()
[11:20] * VampiricPadraig (~Guest1390@9S0AAAD6A.tor-irc.dnsbl.oftc.net) has joined #ceph
[11:32] * Kingrat (~shiny@2605:a000:1607:4000:420:2dcf:ad70:6a96) Quit (Ping timeout: 480 seconds)
[11:33] * treenerd (~treenerd@2001:4dd0:ff00:809d:76e5:bff:feb7:bcfa) Quit (Ping timeout: 480 seconds)
[11:35] <cloud_vision> looks like a bug in kernel 3.9
[11:35] <cloud_vision> i mean, this is with the rbd module on kernel 3.9
[11:36] * cok (~chk@2a02:2350:18:1010:44b4:2857:e4bd:f152) Quit (Quit: Leaving.)
[11:38] * cloud_vision (~cloud_vis@bzq-79-182-50-62.red.bezeqint.net) Quit (Quit: Leaving)
[11:38] * linjan (~linjan@195.110.41.9) has joined #ceph
[11:39] * sleinen (~Adium@130.59.94.127) has joined #ceph
[11:41] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[11:41] * treenerd (~treenerd@2001:4dd0:ff00:809d:221:ccff:feb9:4549) has joined #ceph
[11:46] * ketor (~ketor@182.48.117.114) has joined #ceph
[11:47] * nsoffer (~nsoffer@bzq-84-111-112-230.cablep.bezeqint.net) Quit (Ping timeout: 480 seconds)
[11:47] * sleinen (~Adium@130.59.94.127) Quit (Ping timeout: 480 seconds)
[11:48] * trawler (~oftc-webi@195.234.136.12) has joined #ceph
[11:50] * VampiricPadraig (~Guest1390@9S0AAAD6A.tor-irc.dnsbl.oftc.net) Quit ()
[11:52] <trawler> Hey guys. I'm trying to re-format a ceph partition, after adding disks to a physical raid - but getting this warning from "ceph-deploy disk zap": but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
[11:53] <trawler> i've tried rebooting the server, but getting the same error. any ideas? (the partition is not mounted)
[11:55] * Deiz (~K3NT1S_aw@176.10.99.201) has joined #ceph
[11:56] <vikhyat> trawler: you can delete the partition manually from fdisk and then partprobe and try
[11:57] <trawler> fdisk doesn't see the partition, because it's GPT, so i tried doing the same with parted - still the same result
[11:58] <vikhyat> trawler: you can check dmesg or /var/log/messages and see what error you are getting
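A sketch of the sequence vikhyat describes, with /dev/sdX as a placeholder; the holders check helps spot device-mapper or md users that keep the old partition table pinned even when nothing is mounted:

    lsblk /dev/sdX                # confirm nothing is mounted from the disk
    ls /sys/block/sdX/holders/    # dm/md holders keep the partitions "in use"
    sgdisk --zap-all /dev/sdX     # wipe the GPT and MBR structures
    partprobe /dev/sdX            # ask the kernel to re-read the partition table
    dmesg | tail                  # see why the re-read failed, if it still does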
[12:00] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[12:00] * rendar (~I@host82-178-dynamic.36-79-r.retail.telecomitalia.it) has joined #ceph
[12:08] * Concubidated (~Adium@gw.sepia.ceph.com) has joined #ceph
[12:09] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:13] * ismell (~ismell@host-24-52-35-110.beyondbb.com) has joined #ceph
[12:15] * Concubidated (~Adium@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[12:15] * kefu (~kefu@114.86.210.96) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:16] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:17] * kefu (~kefu@114.86.210.96) has joined #ceph
[12:21] * Concubidated (~Adium@gw.sepia.ceph.com) has joined #ceph
[12:24] * squ (~Thunderbi@46.109.36.167) Quit (Ping timeout: 480 seconds)
[12:24] * Deiz (~K3NT1S_aw@7R2AABF2E.tor-irc.dnsbl.oftc.net) Quit ()
[12:26] * hgjhgjh (~homosaur@UtopianNoise.tor-exit.sec.gd) has joined #ceph
[12:27] * kefu (~kefu@114.86.210.96) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:30] * kefu (~kefu@114.86.210.96) has joined #ceph
[12:34] * kefu (~kefu@114.86.210.96) Quit ()
[12:34] * OnTheRock (~overonthe@199.68.193.54) Quit (Read error: Connection reset by peer)
[12:41] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) has joined #ceph
[12:45] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) Quit (Read error: Connection reset by peer)
[12:45] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) has joined #ceph
[12:46] * Rickus (~Rickus@office.protected.ca) Quit (Ping timeout: 480 seconds)
[12:49] * trawler (~oftc-webi@195.234.136.12) Quit (Remote host closed the connection)
[12:55] * hgjhgjh (~homosaur@9S0AAAD98.tor-irc.dnsbl.oftc.net) Quit ()
[12:55] * eXeler0n (~skney@bolobolo2.torservers.net) has joined #ceph
[12:56] * treenerd (~treenerd@2001:4dd0:ff00:809d:221:ccff:feb9:4549) Quit (Ping timeout: 480 seconds)
[13:04] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[13:18] * vbellur1 (~vijay@121.244.87.117) has joined #ceph
[13:19] * thomnico (~thomnico@2a01:e35:8b41:120:8c43:c1e2:ddca:295c) has joined #ceph
[13:22] * sleinen (~Adium@macsl.switch.ch) has joined #ceph
[13:24] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[13:25] * eXeler0n (~skney@5NZAAC2B8.tor-irc.dnsbl.oftc.net) Quit ()
[13:28] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[13:29] * TGF (~ItsCrimin@176.10.99.205) has joined #ceph
[13:33] * oblu- (~o@62.109.134.112) has joined #ceph
[13:33] * madkiss2 (~madkiss@2001:6f8:12c3:f00f:5c92:22c6:c9aa:839f) Quit (Quit: Leaving.)
[13:34] * vbellur1 (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:36] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: Goodbye)
[13:36] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[13:36] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[13:36] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[13:36] * ketor (~ketor@182.48.117.114) has joined #ceph
[13:36] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[13:37] * oblu| (~o@62.109.134.112) has joined #ceph
[13:37] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[13:40] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit ()
[13:42] * oblu- (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[13:43] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[13:45] * DW-10297 (~Teduardo@57.0.be.static.xlhost.com) has joined #ceph
[13:46] <DW-10297> has anyone had much luck using i40e driver in ubuntu with ceph?
[13:46] * DW-10297 is now known as Teduardo
[13:48] * ketor (~ketor@182.48.117.114) Quit (Remote host closed the connection)
[13:59] * TGF (~ItsCrimin@5NZAAC2D3.tor-irc.dnsbl.oftc.net) Quit ()
[13:59] * CoMa (~qable@tor-exit-4.all.de) has joined #ceph
[14:03] * treenerd (~treenerd@2001:4dd0:ff00:809d:221:ccff:feb9:4549) has joined #ceph
[14:10] * Concubidated (~Adium@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[14:12] * Kingrat (~shiny@2605:a000:161a:c022:41c7:7bda:46d4:67ed) has joined #ceph
[14:13] * imjpr (~imjpr@dsl017-120-156.bhm1.dsl.speakeasy.net) has joined #ceph
[14:13] * zhaochao (~zhaochao@124.202.190.2) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.0.1/20150519050911])
[14:24] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[14:28] * shang (~ShangWu@175.41.48.77) has joined #ceph
[14:29] * CoMa (~qable@8Q4AAA61M.tor-irc.dnsbl.oftc.net) Quit ()
[14:29] * PeterRabbit (~Bwana@2.tor.exit.babylon.network) has joined #ceph
[14:30] * imjpr (~imjpr@dsl017-120-156.bhm1.dsl.speakeasy.net) Quit (Ping timeout: 480 seconds)
[14:35] * mattronix (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[14:39] * sleinen1 (~Adium@2001:620:0:82::10a) has joined #ceph
[14:40] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[14:43] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[14:44] * kefu (~kefu@114.86.210.96) has joined #ceph
[14:44] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:44] * thomnico (~thomnico@2a01:e35:8b41:120:8c43:c1e2:ddca:295c) Quit (Ping timeout: 480 seconds)
[14:45] * sleinen (~Adium@macsl.switch.ch) Quit (Ping timeout: 480 seconds)
[14:49] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[14:51] * kefu (~kefu@114.86.210.96) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:53] * kefu (~kefu@114.86.210.96) has joined #ceph
[14:53] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[14:53] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[14:54] * kefu (~kefu@114.86.210.96) Quit ()
[14:55] * thomnico (~thomnico@2a01:e35:8b41:120:8c43:c1e2:ddca:295c) has joined #ceph
[14:56] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[14:56] <doppelgrau> Hi
[14:56] * kefu (~kefu@114.86.210.96) has joined #ceph
[14:59] * PeterRabbit (~Bwana@5NZAAC2HQ.tor-irc.dnsbl.oftc.net) Quit ()
[15:00] * kefu (~kefu@114.86.210.96) Quit ()
[15:00] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[15:01] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[15:01] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[15:01] * kefu (~kefu@114.86.210.96) has joined #ceph
[15:01] <doppelgrau> I could use a short hint: I have a ceph cluster where a server failed. The four osds are now marked down+out. The failure domain is the rack.
[15:01] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[15:01] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[15:02] <doppelgrau> The problem is, after 12 hours I can see no recovery going on anymore, I have 303 PG active+remapped and 676 PG active+undersized+degraded
[15:02] * karimb (~kboumedhe@87.pool85-52-16.dynamic.orange.es) has joined #ceph
[15:02] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[15:03] <Be-El> doppelgrau: do you have some flags like noout or nodown set?
[15:03] <karimb> hi buddies, any reason why keystone wont work for radosgw within horizon, but does work fine for standard swift client ?
[15:04] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[15:05] <doppelgrau> Be-El: no (even made sure by running ceph osd unset noout; ceph osd unset nodown)
[15:05] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[15:06] <Be-El> doppelgrau: can you paste the output of ceph -s somewhere?
[15:06] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:07] <doppelgrau> Be-El: http://pastebin.com/mJPf6udu Thanks for taking a look
[15:08] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:09] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:09] <Be-El> doppelgrau: you mentioned using racks as failure domain. is the cluster currently able to fulfill the requirements of the corresponding crush rules (e.g. is it able to find enough osds distributed over the racks)?
[15:11] <doppelgrau> I'm quite sure (is there a way to check this?). There are currently only three racks in use: one rack of SSDs (primary copy) and two copies on platters. In that rack there is another server with more than enough free space (three osds), according to ceph osd tree
[15:13] <doppelgrau> Just checked the disk utilization on the server in the same rack (and verified in the osd tree that the mapping is right): only 10% of the disk space is used => no nearfull possible
[15:13] <Kruge_> hi
[15:13] <Be-El> doppelgrau: find out the id of one of the problematic pgs, and check the output of "ceph pg <id> query"
[15:14] <Be-El> doppelgrau: or upload it to some paste site
[15:14] <Be-El> doppelgrau: it should list the acting osds for the pg
[15:14] <Be-El> doppelgrau: full or near-full osd usually show up in ceph -s
[15:15] <Be-El> doppelgrau: and the pgs would have a different state
[15:15] <doppelgrau> Be-El: ok, the output is quite long, I'll take a look and see if I understand it on my own, otherwise I'll paste it. Thanks for your help so far
[15:15] <Be-El> don't mention it ;-)
[15:18] * yanzheng (~zhyan@125.71.109.76) Quit (Quit: This computer has gone to sleep)
[15:18] <Kruge_> I've got a single rbd mapped onto a client which is presenting it via nfs. I'm seeing a slow but constant amount of write activity to it, even when there is nothing using the nfs export. Anyone got any ideas why that might be?
[15:18] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:19] * yanzheng (~zhyan@125.71.109.76) has joined #ceph
[15:19] * yanzheng (~zhyan@125.71.109.76) Quit ()
[15:20] <Kruge_> It's only eating up a megabyte every couple of minutes, but it'd be nice to know why it was happening
[15:20] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:21] <doppelgrau> Be-El: ok, I don't get the problem on my own. http://pastebin.com/JiZ8bKed I put an osd-tree before the query to help map osd IDs to nodes/racks. The query is from one PG in "active+undersized+degraded"
[15:21] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[15:22] <doppelgrau> Kruge_: something touching files resulting in updates on "atime"?
[15:22] <Be-El> doppelgrau: can you also paste the crush rule for the affected pool?
[15:24] <doppelgrau> sure: http://pastebin.com/UeJLrcsB
[15:26] * segutier (~segutier@50.153.131.5) has joined #ceph
[15:26] <Be-El> doppelgrau: are all pgs of a pool affected, or just a part of it?
[15:27] <Be-El> doppelgrau: and do you use a size of 3 and min_size of 3?
[15:29] <doppelgrau> Be-El: only parts of the pool(s), size=3, min_size=2
[15:29] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[15:30] <Be-El> doppelgrau: do you use any special crush tuneables or the optimal/legacy setting?
[15:30] * mattronix (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[15:31] * kefu (~kefu@114.86.210.96) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:31] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[15:31] * mattronix (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[15:32] <doppelgrau> no special tunables, but not the latest (without straw2), since the kernel rbd driver in the 4.0 kernel in use does not support it
[15:32] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[15:33] * vbellur (~vijay@122.171.94.84) has joined #ceph
[15:35] * kefu (~kefu@114.86.210.96) has joined #ceph
[15:36] <Be-El> doppelgrau: i think the problem is indeed related to crush. crush probably has a hard time finding a second rack within the platter root. the weights define how data should be distributed, and crush traverses the tree in a top-down manner
[15:36] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[15:37] <Be-El> doppelgrau: since more than half of your first rack is offline, crush might not be able to find an osd for all the pgs
[15:37] <Be-El> doppelgrau: do you plan to get the missing box up and running soon again?
[15:37] * alram (~alram@64.134.221.151) has joined #ceph
[15:38] * jashank42 (~jashan42@202.164.53.117) has joined #ceph
[15:39] <Be-El> doppelgrau: a temporary solution is setting the weights for the down osds to 0.0
[15:39] <doppelgrau> Be-El: not soon enough that I'm comfortable waiting until then (probably at the weekend)
[15:39] <Be-El> doppelgrau: but this involves data movements, since all weights in the tree will be modified
[15:39] <Be-El> doppelgrau: but the pgs should get healthy afterwards
[15:40] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[15:40] <doppelgrau> Be-El: I'll test it. Better some load than the risk of stalling clients if another osd fails
[15:41] <Be-El> doppelgrau: and everything i said is without warranty ;-)
[15:44] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:46] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:48] * segutier_ (~segutier@50.153.129.18) has joined #ceph
[15:49] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) has joined #ceph
[15:49] <doppelgrau> Be-El: too bad, after setting the weight to zero data movement starts (ToDo: update the kernel to 4.1 as soon as it's stable, I want straw2); no more PGs are reported as "active+undersized+degraded" and I see recovery IO again. If you ever visit the Ruhr area in Germany, I owe you a pint :)
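For reference, a sketch of the workaround Be-El suggested, with the osd id as a placeholder; crush-reweighting a down+out OSD to 0 removes its weight from the rack bucket so CRUSH can place the missing replicas elsewhere, at the cost of the data movement mentioned above:

    ceph osd tree                      # identify the down+out OSDs on the failed host
    ceph osd crush reweight osd.12 0   # repeat for each OSD of that host (id is a placeholder)
    ceph -s                            # watch the undersized/degraded PGs recover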
[15:50] * segutier (~segutier@50.153.131.5) Quit (Ping timeout: 480 seconds)
[15:50] * segutier_ is now known as segutier
[15:50] * trawler (~oftc-webi@195.234.136.12) has joined #ceph
[15:52] <Be-El> Kruge_: does the write traffic also happen for other rbds (read: is it a generic problem), or is it specific for the nfs rbd? maybe knfs is doing some operations on it
[15:55] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[15:55] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) Quit (Quit: leaving)
[15:55] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) has joined #ceph
[15:57] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[15:57] * deepsa (~Deependra@00013525.user.oftc.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[15:57] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) Quit ()
[15:58] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) has joined #ceph
[15:58] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[16:00] * segutier (~segutier@50.153.129.18) Quit (Quit: segutier)
[16:01] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:02] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit ()
[16:03] <Kruge_> Be-El: I think it is an nfs-related symptom. When I stopped the nfs service, the write traffic stopped
[16:03] <Kruge_> I'm going to blame atime
[16:04] <Kruge_> (as doppelgrau suggested)
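A small sketch of how the atime theory could be checked, assuming the rbd is mounted at /srv/export (a placeholder) before being exported over NFS:

    mount | grep /srv/export                # check the current atime options
    mount -o remount,noatime /srv/export    # stop access-time updates
    # to make it permanent, add noatime to the fstab entry, e.g.:
    # /dev/rbd0  /srv/export  xfs  noatime  0 0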
[16:06] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) Quit (Quit: leaving)
[16:07] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) has joined #ceph
[16:08] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[16:11] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[16:17] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:17] * mattronix (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[16:17] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[16:18] <tw0fish> I see an initd style script available to map/unmap an RBD device. I am wondering if anyone has a systemctl script like this for RHEL/CentOS 7?
[16:18] <tw0fish> s/systemctl/systemd/
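Nobody answered in this log, but a minimal oneshot unit along these lines should do the job on CentOS 7; the unit name and mypool/myimage are placeholders, and /dev/rbd/<pool>/<image> is the symlink the ceph udev rules normally create for a mapped image:

    cat > /etc/systemd/system/rbd-myimage.service <<'EOF'
    [Unit]
    Description=Map rbd image mypool/myimage
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/rbd map mypool/myimage
    ExecStop=/usr/bin/rbd unmap /dev/rbd/mypool/myimage

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable rbd-myimage.service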
[16:25] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:29] * kefu (~kefu@114.86.210.96) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:30] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:36] * kefu (~kefu@114.86.210.96) has joined #ceph
[16:36] * alram (~alram@64.134.221.151) Quit (Quit: leaving)
[16:42] * karimb (~kboumedhe@87.pool85-52-16.dynamic.orange.es) Quit (Remote host closed the connection)
[16:43] * wschulze1 (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[16:43] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[16:44] * linuxkidd (~linuxkidd@63.79.89.16) has joined #ceph
[16:45] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[16:45] * trawler (~oftc-webi@195.234.136.12) Quit (Remote host closed the connection)
[16:46] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[16:46] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[16:48] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[16:50] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Remote host closed the connection)
[16:51] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[16:52] * wschulze1 (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[16:52] * wschulze1 (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[16:52] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[16:55] * wschulze2 (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[16:55] * wenjunhuang__ (~wenjunhua@61.135.172.68) Quit (Ping timeout: 480 seconds)
[16:58] * davidz (~davidz@2605:e000:1313:8003:109:95b9:e1d1:ba04) has joined #ceph
[16:59] * ircolle (~ircolle@2601:1:a580:1735:ea2a:eaff:fe91:b49b) has joined #ceph
[16:59] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) Quit (Quit: leaving)
[16:59] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) has joined #ceph
[16:59] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[16:59] * Coestar (~xul@89.105.194.78) has joined #ceph
[17:00] * wenjunhuang__ (~wenjunhua@61.135.172.68) has joined #ceph
[17:01] * valeech (~valeech@pool-72-86-37-215.clppva.fios.verizon.net) has joined #ceph
[17:01] * wschulze1 (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:01] * valeech (~valeech@pool-72-86-37-215.clppva.fios.verizon.net) Quit ()
[17:02] * rlrevell1 (~leer@vbo1.inmotionhosting.com) has joined #ceph
[17:03] * valeech (~valeech@pool-72-86-37-215.clppva.fios.verizon.net) has joined #ceph
[17:04] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[17:05] * gaveen (~gaveen@175.157.62.85) has joined #ceph
[17:05] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) has joined #ceph
[17:06] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[17:06] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[17:08] * nhm (~nhm@184-97-242-33.mpls.qwest.net) has joined #ceph
[17:08] * ChanServ sets mode +o nhm
[17:09] * nhm_ (~nhm@184-97-242-33.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[17:11] * treenerd (~treenerd@2001:4dd0:ff00:809d:221:ccff:feb9:4549) Quit (Ping timeout: 480 seconds)
[17:12] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) Quit (Quit: bye!)
[17:12] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) has joined #ceph
[17:24] * gaveen (~gaveen@175.157.62.85) Quit (Read error: Connection reset by peer)
[17:24] * SkyEye (~gaveen@175.157.13.63) has joined #ceph
[17:24] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[17:24] * loris (~loris@62-193-45-2.as16211.net) has joined #ceph
[17:26] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[17:27] <loris> Hello! The boss wants to put all Ceph nodes' root filesystems on the network, or in a ramdisk, what do you think?
[17:27] * wushudoin_ (~wushudoin@209.132.181.86) has joined #ceph
[17:29] * loris is now known as harlequin
[17:29] * Coestar (~xul@9S0AAAESQ.tor-irc.dnsbl.oftc.net) Quit ()
[17:29] * Keiya (~raindog@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[17:30] <doppelgrau> loris: Network sounds dangerously like a SPOF to me. A ramdisk could work with external logging, at least for OSDs; monitors without local storage seem hard to me
[17:30] <doppelgrau> but don't take my advice too seriously, I had a problem today where I needed some advice myself :)
[17:31] * joshd (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:32] * thomnico (~thomnico@2a01:e35:8b41:120:8c43:c1e2:ddca:295c) Quit (Quit: Ex-Chat)
[17:32] <harlequin> Any advice is always useful, thank you doppelgrau :)
[17:33] <Be-El> harlequin: it's a bad idea. if you lose the storage for the mons, you lose all data in your cluster
[17:33] <Be-El> harlequin: and yes, there've been people who trashed all their mons at once
[17:35] <harlequin> doppelgrau, Be-El: I've already told him that, to which he replied "let's only put OSDs on the network, then!"
[17:35] * wushudoin (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[17:35] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:36] * wushudoin_ (~wushudoin@209.132.181.86) Quit (Ping timeout: 480 seconds)
[17:36] * alram (~alram@192.41.52.12) has joined #ceph
[17:36] <Be-El> harlequin: the root fs of osds is kind of disposable, since all relevant information is stored in the osd partition and the journal
[17:37] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[17:37] <harlequin> PXE boot, then network rootfs or rootfs in ramdisk, and logs on network too, via syslog
[17:38] <doppelgrau> harlequin: really dirty idea: one or two OSDs on each monitor, create a pool that contains only the OSDs on the monitors, and rbd images in that pool for the OSD nodes
[17:38] <doppelgrau> but I'd prefer the RAMDISK :)
[17:39] <Be-El> harlequin: for a dedicated storage box that setup should be ok. but i'm not sure whether ceph uses syslog, so you may need some way to write the ceph logs
[17:40] <harlequin> Be-El: I really think Ceph doesn't use syslog, I've already told him that :P
[17:41] <harlequin> So -> writable rootfs is really needed
[17:41] <Be-El> harlequin: and a reliable writable fs is needed
[17:41] <doppelgrau> (and even more, some local storage; a USB thumb drive would be better than some network storage. Murphy makes sure that the PXE subtly fails after a large power loss or something like that)
[17:42] * bjornar (~bjornar@109.247.131.38) Quit (Ping timeout: 480 seconds)
[17:43] * SkyEye (~gaveen@175.157.13.63) Quit (Remote host closed the connection)
[17:44] * vata (~vata@207.96.182.162) has joined #ceph
[17:44] * imjpr (~imjpr@138.26.125.8) has joined #ceph
[17:45] <harlequin> Be-El, doppelgrau: is there a way to 1) lose the rootfs of an OSD node 2) re-use the OSD disks from the lost node in another OSD node installation?
[17:45] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[17:45] * wushudoin_ (~wushudoin@38.140.108.2) has joined #ceph
[17:46] <Be-El> harlequin: the default way to start osds is a udev-rule that acts on a certain partition guid
[17:46] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Quit: ChatZilla 0.9.91.1 [Firefox 38.0.1/20150513174244])
[17:46] <Be-El> harlequin: if you do not use an external journal you should be able to move a disk between hosts
[17:46] <doppelgrau> harlequin: If you use the udev-rules approach to start the osds, just put them in another node
[17:47] * wschulze2 (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:47] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[17:47] * joshd (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[17:48] <harlequin> Be-El: yes, but is there a way to re-adopt the OSD disks in another node? Because the CRUSH map, the OSD map, ... they know the OSD.NNN to be attached to the now lost machine.
[17:48] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[17:49] <Be-El> harlequin: the osd reports all pgs upon start. the location in the crush tree is usually updated upon start, too
[17:49] <m0zes> harlequin: by default the osd location will be updated in the crush map on start.
[17:50] <harlequin> Be-El, doppelgrau, m0zes: is that true?! :O If so, that's magic, isn't it? :D Thank you very much for this information :)
[17:51] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[17:51] <harlequin> So... the udev rule will re-create the needed directories too?
[17:51] <Be-El> harlequin: the default simply sets root=default host=$(hostname -s)
[17:52] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[17:52] <Be-El> harlequin: no, it will mount the partition which contains the directories
[17:52] <Be-El> harlequin: the osd is self-contained in a single partition (if you do not use external journals)
[17:52] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Read error: Connection reset by peer)
[17:52] <doppelgrau> harlequin: "Any sufficiently advanced technology is indistinguishable from magic." ;)
[17:52] <harlequin> I'm sorry, I meant: will the udev rule create the /var/lib/ceph/osd/ceph-NNN directory?
[17:52] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[17:52] * rlrevell1 (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[17:53] <Be-El> harlequin: it should, yes
[17:53] <harlequin> doppelgrau: One of my preferred quotes :)
[17:53] * itsjpr (~imjpr@138.26.125.8) has joined #ceph
[17:54] <harlequin> Be-El: thank you very much :) I'll try this ASAP.
[17:55] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[17:56] <Be-El> harlequin: /lib/udev/rules.d/95-ceph-osd.rules invokes ceph-disk activate for all osd partitions
[17:56] * johanni (~johanni@129.210.115.36) has joined #ceph
[17:56] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[17:56] <harlequin> Be-El: Just perfect
[17:57] <Be-El> harlequin: have a look at the source code of ceph-disk and especially the documentation block about the activate command
[17:58] <harlequin> Be-El: I will
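A sketch of what re-adopting the disks could look like on the replacement node, assuming the default ceph-disk layout discussed above (self-contained OSD partitions, journal on the same disk) and /dev/sdb1 as a placeholder data partition:

    # after moving the disks, either reboot (udev fires 95-ceph-osd.rules) or activate by hand:
    ceph-disk activate /dev/sdb1
    ceph-disk activate-all
    ceph osd tree    # the OSDs should reappear under the new host bucket
    # the automatic crush move on startup is governed by the config option
    #   osd crush update on start = true   (the default)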
[17:59] * Keiya (~raindog@7R2AABF8D.tor-irc.dnsbl.oftc.net) Quit ()
[18:00] * Rickus (~Rickus@office.protected.ca) has joined #ceph
[18:05] * bkopilov (~bkopilov@bzq-79-182-6-22.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[18:06] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[18:06] * oro (~oro@2001:620:20:16:2d79:5b85:5de5:96c6) Quit (Remote host closed the connection)
[18:06] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[18:08] * bkopilov (~bkopilov@bzq-79-178-52-86.red.bezeqint.net) has joined #ceph
[18:10] * nhm_ (~nhm@184-97-242-33.mpls.qwest.net) has joined #ceph
[18:12] * wenjunhuang__ (~wenjunhua@61.135.172.68) Quit (Ping timeout: 480 seconds)
[18:12] * nhm (~nhm@184-97-242-33.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[18:12] * bkopilov (~bkopilov@bzq-79-178-52-86.red.bezeqint.net) Quit (Read error: Connection reset by peer)
[18:14] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[18:17] * c4tech (~c4tech@199.91.185.156) Quit ()
[18:20] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[18:21] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[18:25] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[18:26] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[18:27] * reed (~reed@198.8.80.61) has joined #ceph
[18:29] * Heliwr (~elt@50.7.159.196) has joined #ceph
[18:30] * naga (~oftc-webi@idp01webcache6-z.apj.hpecore.net) has joined #ceph
[18:31] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[18:31] * sleinen1 (~Adium@2001:620:0:82::10a) Quit (Ping timeout: 480 seconds)
[18:31] * rlrevell1 (~leer@vbo1.inmotionhosting.com) has joined #ceph
[18:31] <naga> ceph rbd commands are not responding, hanging for a long time
[18:32] * bkopilov (~bkopilov@bzq-79-178-52-86.red.bezeqint.net) has joined #ceph
[18:32] * lovejoy (~lovejoy@213.83.69.6) has joined #ceph
[18:32] * lovejoy (~lovejoy@213.83.69.6) Quit ()
[18:34] * Hemanth (~Hemanth@117.221.99.112) has joined #ceph
[18:36] * oro (~oro@84-72-20-79.dclient.hispeed.ch) has joined #ceph
[18:36] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[18:38] * naga1 (~oftc-webi@idp01webcache1-z.apj.hpecore.net) has joined #ceph
[18:38] * dgurtner (~dgurtner@178.197.231.65) Quit (Ping timeout: 480 seconds)
[18:39] * naga (~oftc-webi@idp01webcache6-z.apj.hpecore.net) Quit (Quit: Page closed)
[18:40] <naga1> ceph rbd commands are not responding, hanging for a long time
[18:40] <naga1> can anybody help me out
[18:42] * alram (~alram@192.41.52.12) Quit (Ping timeout: 480 seconds)
[18:44] * kefu is now known as kefu|afk
[18:45] * harlequin (~loris@62-193-45-2.as16211.net) Quit (Quit: leaving)
[18:49] * shaileshd (~shaileshd@vpngac.ccur.com) Quit (Read error: Connection reset by peer)
[18:50] * alram (~alram@192.41.52.12) has joined #ceph
[18:50] * ganders (~root@190.2.42.21) has joined #ceph
[18:50] <ganders> CentOS 6.2, ceph install ok v0.87.2
[18:51] <ganders> then i try to run modprobe rbd and a "FATAL: Module rbd not found" shows up
[18:51] <ganders> is there any special dep that i need to install??
[18:55] * kefu|afk (~kefu@114.86.210.96) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:55] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[18:56] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[18:57] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[18:59] * Heliwr (~elt@3DDAAAIN4.tor-irc.dnsbl.oftc.net) Quit ()
[18:59] * offender (~Ian2128@216.218.134.12) has joined #ceph
[19:00] * rlrevell1 (~leer@vbo1.inmotionhosting.com) Quit (Remote host closed the connection)
[19:00] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[19:01] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[19:02] * bitserker (~toni@188.87.126.203) has joined #ceph
[19:03] * johanni_ (~johanni@129.210.115.36) has joined #ceph
[19:05] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:07] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[19:09] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Quit: Leaving.)
[19:09] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Read error: Connection reset by peer)
[19:09] * peppe (~peppe@129.192.170.252) has joined #ceph
[19:09] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[19:10] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) has joined #ceph
[19:11] <peppe> hello, im trying to install a small ceph cluster (following guidelines @ ceph.com) though came across an issue.
[19:11] <peppe> [admin-node][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[19:11] <peppe> [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
[19:11] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[19:12] <med> peppe, not that familiar with centos/rhel but suspect you need to provide that repo.
[19:13] <med> ceph-deploy has an option to add a repo.
[19:14] <peppe> med, thanks. I manually configured it (/etc/yum.repos.d/ceph.repo)
[19:14] <peppe> name=Ceph noarch packages
[19:14] <peppe> baseurl=http://ceph.com/rpm-hammer/el7/noarch
[19:14] <peppe> el7 - flag for Centos7
[19:14] <alfredodeza> peppe: you did that by hand?
[19:14] <johanni_> did you add this line to the top: [ceph]
[19:14] <alfredodeza> you are not supposed to do that
[19:15] <alfredodeza> that is probably the issue
[19:15] <alfredodeza> that error means that the repo file you have in there doesn't have a '[ceph]' section
[19:15] * kawa2014 (~kawa@2-229-47-79.ip195.fastwebnet.it) Quit (Quit: Leaving)
[19:15] <alfredodeza> which makes me assume ceph-deploy didn't create that file
[19:16] <peppe> Add the package to the yum repository by creating a new file at /etc/yum.repos.d/ceph.repo with the following content:
[19:16] <peppe> this is from wiki.ceph.com
[19:16] <peppe> the step above
[19:16] <doppelgrau> naga1: is ceph -s ok?
[19:16] <alfredodeza> I had no idea there was a wiki.ceph.com with install instructions
[19:16] <peppe> so the file created had same contents as published @ wiki
[19:16] <alfredodeza> peppe: mind pasting the whole link
[19:16] <alfredodeza> so I can see what you actually read?
[19:16] <peppe> http://wiki.ceph.com/Guides/How_To/Create_Versionable_and_Fault-Tolerant_Storage_Devices_with_Ceph_and_VirtualBox
[19:17] <peppe> only difference is that I am using ESXi5.5 and not vBox
[19:17] <peppe> 5 VMs with Centos7
[19:17] <daviddcc> gand
[19:17] <alfredodeza> peppe: what step/section
[19:17] * tomc (~tomc@192.41.52.12) Quit (Ping timeout: 480 seconds)
[19:17] <daviddcc> ganders, i think rbd is not in 2.6.32 kernel
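For context: the rbd block driver only landed in the mainline kernel around 2.6.37, so the stock RHEL/CentOS 6 kernel (2.6.32) ships no rbd.ko and modprobe can only fail. A quick check and the usual alternatives, sketched as a suggestion rather than a recipe:

    uname -r                                         # 2.6.32-... on CentOS 6
    modinfo rbd                                      # "ERROR: Module rbd not found" on the stock kernel
    find /lib/modules/$(uname -r) -name 'rbd.ko*'    # confirms the module is absent
    # alternatives: run a newer kernel (e.g. an elrepo mainline build), or skip the
    # kernel client and use rbd from userspace (librbd via qemu, or rbd-fuse)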
[19:17] <peppe> step#2 is where the repo file gets created
[19:17] <alfredodeza> ok so Step2
[19:17] <alfredodeza> yeah
[19:17] <peppe> but
[19:17] <alfredodeza> that is not an official guide
[19:17] <alfredodeza> sorry about that
[19:18] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[19:18] <alfredodeza> no idea what that is doing there, should probably get removed
[19:18] <peppe> im stuck at 4.5
[19:18] <peppe> ok
[19:18] <alfredodeza> peppe: the problem is that repo file is wrong
[19:18] <peppe> what is the official guide that I can follow?
[19:18] <alfredodeza> peppe: http://ceph.com/docs/master/start/
[19:18] <peppe> want create a small cluster to play with and learn more on ceph
[19:18] <alfredodeza> yep
[19:18] * midnightrunner (~midnightr@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[19:19] <alfredodeza> IMO: throw those hosts away and start from the official guide
[19:19] <alfredodeza> sorry :((
[19:19] <peppe> okay. so I will start over.
[19:19] <naga1> cluster 514bb61e-80e1-11e4-9461-000c2966c4ff
[19:19] <naga1>  health HEALTH_WARN 89 pgs degraded; 67 pgs incomplete; 67 pgs stuck inactive; 192 pgs stuck unclean; 3 requests are blocked > 32 sec
[19:19] <naga1>  monmap e1: 1 mons at {naga1=10.1.195.54:6789/0}, election epoch 2, quorum 0 naga1
[19:19] <naga1>  osdmap e18: 3 osds: 3 up, 3 in
[19:19] <naga1>  pgmap v36: 192 pgs, 3 pools, 0 bytes data, 0 objects
[19:19] <naga1>   3173 MB used, 58236 MB / 61410 MB avail
[19:19] <naga1>   36 active
[19:21] <peppe> @alfredoeza: minda that the same step exists
[19:21] <cephalobot`> peppe: Error: "alfredoeza:" is not a valid command.
[19:21] <peppe> alfredodeza: same step exists in the official procedure
[19:21] <peppe> http://ceph.com/docs/master/start/quick-start-preflight/
[19:21] <peppe> step#2 under Red Hat PM
[19:22] * midnight_ (~midnightr@216.113.160.71) has joined #ceph
[19:22] <peppe> if it is wrong then both the wiki and the official sites are wrong
[19:22] <peppe> my current ceph.repo is exactly the same as the one in the official procedure
[19:23] <doppelgrau> naga: I guess the problem is that your cluster is very unhealthy => the basis for the rbd-commands is missing
[19:23] <doppelgrau> naga1: check your crush-rules to get the PGs in a clean state
[19:23] <alfredodeza> peppe: different
[19:23] * Concubidated (~Adium@gw.sepia.ceph.com) has joined #ceph
[19:23] <alfredodeza> that file is for installing ceph-deploy
[19:23] <alfredodeza> not ceph
[19:23] <alfredodeza> :)
[19:24] <peppe> agreed. but i installed ceph-deploy
[19:24] <peppe> that was not the problem
[19:24] <peppe> even in the previous procedure
[19:24] <alfredodeza> we are talking slightly different things :) let me explain
[19:25] <naga1> can i know how to check it, i am new to ceph
[19:25] <alfredodeza> peppe: on the 'admin' node, where you install ceph-deploy, you need that ceph-noarch repo file so you can install ceph-deploy there. That is OK.
[19:25] <doppelgrau> naga1: are you running a 1 server-Installation?
[19:25] <naga1> yes
[19:25] <alfredodeza> peppe: on the rest of the nodes, you don't add that repo file
[19:25] <naga1> 1 mon and 3 osd's
[19:25] <peppe> agreed
[19:25] <peppe> I only have that in admin node
[19:25] <peppe> :)
[19:25] <alfredodeza> peppe: you use ceph-deploy from the 'admin' node to install ceph everywhere else, that should pull in the right repo file
[19:26] <peppe> thats the problem
[19:26] <peppe> it does not
[19:26] <alfredodeza> aha
[19:26] <alfredodeza> ok
[19:26] <peppe> you fixed a bug (very similar signature) 8 months ago
[19:26] <peppe> 8980
[19:26] <peppe> if I am not mistaken
[19:26] <doppelgrau> naga1: the default rules want to distribute the data to two or three different servers, so with only one server everything comes to a halt.
[19:27] <peppe> [ceph@admin-node ~]$ ceph-deploy install admin-node node1 node2 node3 node4
[19:27] <alfredodeza> that is your problem peppe
[19:27] <alfredodeza> you are installing 'ceph' on the admin-node
[19:27] <alfredodeza> you manually added a repo file on the admin-node
[19:27] <alfredodeza> and that repo file only has the ceph-noarch section
[19:28] <doppelgrau> naga1: Informations: http://ceph.com/docs/master/rados/operations/crush-map/ <- for your tests you might want to change in the rule the host to osd
[19:28] * midnightrunner (~midnightr@c-67-174-241-112.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:28] <peppe> what do you suggest?
[19:28] <naga1> if i use 3 servers for each osd, is it fine?
[19:28] <peppe> the repo file was needed to get ceph-deploy installed
[19:28] * xarses (~andreww@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:29] * offender (~Ian2128@8Q4AAA662.tor-irc.dnsbl.oftc.net) Quit ()
[19:29] * Kyso (~totalworm@8Q4AAA67K.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:29] <doppelgrau> naga1: the crush rules define your failure domains, i.e. the largest "item" that might fail without (too) bad an impact
[19:30] <alfredodeza> peppe: a couple of different things here: you may want to install ceph on the admin node; if so, you need to get rid of that repo file or make it look like a regular ceph repo file
[19:30] <doppelgrau> naga1: e.g. I have an installation where the failure domain is set to racks, but servers are usually a good start
[19:30] <alfredodeza> peppe: either install it there, but use the right repo file or just don't install it on the admin-node
[19:30] <doppelgrau> naga1: but you can set the size for the pools to 1; then a single failing disk/host can cause data loss, but it works with a single host
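A sketch of doppelgrau's two suggestions for a one-host test cluster; the pool name rbd is a placeholder (check ceph osd lspools), and the CRUSH edit assumes the stock replicated ruleset:

    # option 1: keep only one copy (a single failing disk then loses data)
    ceph osd pool set rbd size 1
    ceph osd pool set rbd min_size 1
    # option 2: keep replication but let CRUSH pick OSDs instead of hosts
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt: change "step chooseleaf firstn 0 type host" to "... type osd"
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new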
[19:30] <peppe> okay. what does the right repo file look like? I'd like to get ceph installed on the admin node as well
[19:31] <naga1> ok
[19:31] <alfredodeza> peppe: install on node1 and then look at it?
[19:31] <peppe> ok
[19:31] <alfredodeza> I am not sure what version/arch/distro you are installing on
[19:31] <alfredodeza> so I can't tell you for sure
[19:31] <alfredodeza> easier is to install on one and see
[19:31] <alfredodeza> *or*
[19:31] <alfredodeza> remove the one you added and let ceph-deploy install the right one
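A sketch of that second option on the admin-node; the path is the one peppe created by hand, and --release is an assumption about the ceph-deploy version in use:

    sudo rm /etc/yum.repos.d/ceph.repo
    ceph-deploy install --release hammer admin-node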
[19:32] <peppe> ok
[19:32] <peppe> installing on nodes 1-4 now...it seems to be working :)
[19:33] <peppe> the awkward part is that even the official procedure requires the repo to be manually added on the admin-node
[19:33] <alfredodeza> peppe: you usually don't install ceph on the admin node
[19:34] <peppe> why is that? any drawbacks?
[19:34] <alfredodeza> no, I don't think so. It is just a matter of 'my admin node can be my computer, while my cluster is somewhere else'
[19:34] <peppe> oki.
[19:34] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[19:35] <peppe> in my case I have 5 VMs (1 admin, 1 monitor, 2 osds and 1 client)
[19:35] <naga1> thanks for the information
[19:35] <peppe> all running on a powerful server
[19:36] <peppe> is there a way to contribute or improve the procedure so it clarifies that if the user wants ceph on the admin-node, the previously created repo file should be removed?
[19:42] * jashank42 (~jashan42@202.164.53.117) Quit (Ping timeout: 480 seconds)
[19:44] * cube (~cube@66.87.130.21) has joined #ceph
[19:47] * mykola (~Mikolaj@91.225.200.88) has joined #ceph
[19:49] * cube-phone (~cube@66.87.64.9) has joined #ceph
[19:52] * jashank42 (~jashan42@202.164.53.117) has joined #ceph
[19:53] * cube (~cube@66.87.130.21) Quit (Read error: Connection reset by peer)
[19:57] * cube-phone (~cube@66.87.64.9) Quit (Ping timeout: 480 seconds)
[19:59] * Kyso (~totalworm@8Q4AAA67K.tor-irc.dnsbl.oftc.net) Quit ()
[19:59] * AG_Clinton (~Swompie`@exit2.fr33tux.org) has joined #ceph
[20:01] * xarses (~andreww@12.164.168.117) has joined #ceph
[20:04] * gaveen (~gaveen@175.157.34.175) has joined #ceph
[20:05] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[20:05] * valeech (~valeech@pool-72-86-37-215.clppva.fios.verizon.net) Quit (Quit: valeech)
[20:05] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[20:06] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[20:07] <peppe> when activating OSD1 (using filesystem (/var/local/osd @ node2) not whole disk), keep getting - 2015-06-01 11:06:04.962369 7f57bc16e700 0 -- :/1002465 >> 10.125.146.54:6789/0 pipe(0x7f57b8027050 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f57b8023c90).fault
[20:08] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[20:08] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[20:08] <peppe> prep OSD step worked well
[20:08] <johanni_> I believe that means ceph failed to communicate with a monitor node
[20:08] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[20:09] <peppe> ok
[20:09] <peppe> let me check
[20:09] * lpabon (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[20:09] <peppe> i can ping monitor node from node2
[20:10] <johanni_> what does ceph mon stat or ceph -s say
[20:11] <peppe> 2015-06-01 11:11:29.348901 7f1989516700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[20:11] <peppe> 2015-06-01 11:11:29.348904 7f1989516700 0 librados: client.admin initialization error (2) No such file or directory
[20:11] <peppe> Error connecting to cluster: ObjectNotFound
[20:12] <peppe> I do have 5 *.keyring
[20:12] <peppe> ceph.mon, ceph.client, ceph.bootstrap-osd
[20:12] <peppe> -mds
[20:12] <peppe> -rgw
[20:13] * mattronix (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[20:14] * naga1 (~oftc-webi@idp01webcache1-z.apj.hpecore.net) Quit (Quit: Page closed)
[20:15] * TheSov (~TheSov@cip-248.trustwave.com) has joined #ceph
[20:15] <peppe> the keyrings are under /my-cluster
[20:15] <peppe> though /var/lib/ceph/bootstrap-osd/ is empty :(
[20:15] <peppe> I wonder why?
[20:16] <TheSov> Hello all, im trying to install the latest version of ceph on my lab, im at the point of setting up the repo. echo deb http://ceph.com/debian-hammer/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list <--- is this correct?
[20:16] <TheSov> hammer is not listed in the instructions on ceph.com so im trying to figure it out
[20:17] <gleam> looks right
[20:17] <TheSov> thanks!
[20:17] <gleam> ubuntu p/q/r/t and debian squeeze/wheezy appear to have repos
[20:17] <TheSov> they do but it installs cuttlefish
[20:18] <TheSov> trying to get the latest ceph-deploy
[20:18] <TheSov> im on ubuntu server here
[20:18] <gleam> http://ceph.com/debian-hammer/pool/main/c/ceph-deploy/
[20:18] <johanni_> I believe that even if your OSDs are down, you should be able to run basic commands like "ceph -s" and "ceph mon stat" without failures. Try SSHing into the exact monitor node and verify that your keyring has an appropriate key and caps mon = "allow *"
[20:18] <gleam> you're not getting one of those versions?
[20:18] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[20:19] <TheSov> hmmm
[20:19] <TheSov> pardon me, I confuse easily, but according to the install instructions the first thing I do is add a key, then a repo, then install ceph-deploy
[20:20] <TheSov> should I just install ceph-deploy from that site and proceed?
[20:21] <peppe> where can caps mon = allow be found?
[20:21] <peppe> keys seem ok @ monitor /var/lib/ceph/bootstrap-osd
[20:22] <TheSov> the ceph-deploy currently on ubuntu is 1.4
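TheSov's one-liner expanded into the usual sequence; the release-key step is left as a comment rather than guessed at (see the preflight guide for the exact URL):

    # add the Ceph release key first (per the preflight guide), then:
    echo deb http://ceph.com/debian-hammer/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update
    sudo apt-get install ceph-deploy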
[20:22] * mattronix (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[20:23] <johanni_> peppe: the key should be in /var/lib/ceph/mon/{hostname}/keyring
[20:23] <johanni_> [mon.]
[20:23] <johanni_> key = AQBsFWZVoPLXAxAAzMJsyWs2G+Delyyp+vyrmg==
[20:23] <johanni_> caps mon = "allow *"
[20:23] * rkeene makes a note of it
[20:23] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[20:24] <peppe> thats correct
[20:24] <peppe> it has the correct key
[20:24] * visbits (~textual@8.29.138.28) has joined #ceph
[20:25] <peppe> cluster.conf has
[20:25] <peppe> auth_cluster_required = cephx
[20:25] <peppe> auth_service_required = cephx
[20:25] <peppe> auth_client_required = cephx
[20:25] <peppe> [ceph@admin-node my-cluster]$ ceph -s
[20:25] <peppe> 2015-06-01 11:25:40.565101 7f404ec87700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[20:25] <peppe> 2015-06-01 11:25:40.565104 7f404ec87700 0 librados: client.admin initialization error (2) No such file or directory
[20:25] <peppe> Error connecting to cluster: ObjectNotFound
[20:28] * madkiss (~madkiss@46.189.28.90) has joined #ceph
[20:28] <peppe> any ideas?
[20:28] <johanni_> peppe: try specifying in your cluster.conf the mon host and keyrings under a section such as [mon.<insert_id>]
[20:28] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[20:29] <peppe> mon_initial_members = node1
[20:29] <peppe> mon_host = 10.125.146.54
[20:29] <peppe> the above is already in the conf
[20:29] <peppe> will specify the keyrings
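Before hand-editing per-mon sections, a simpler sketch that often clears the 'missing keyring' error in the quick-start flow is to push the admin keyring with ceph-deploy and make it readable (run from the my-cluster directory; the node names are the ones peppe listed earlier):

    ceph-deploy admin admin-node node1 node2 node3 node4
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring
    ceph -s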
[20:29] * bandrus (~brian@129.210.115.37) has joined #ceph
[20:29] * AG_Clinton (~Swompie`@3DDAAAITL.tor-irc.dnsbl.oftc.net) Quit ()
[20:29] * Xa (~Rehevkor@kbtr2ce.tor-relay.me) has joined #ceph
[20:31] * peppe (~peppe@129.192.170.252) Quit (Remote host closed the connection)
[20:31] * linjan (~linjan@213.8.240.146) has joined #ceph
[20:34] * itsjpr (~imjpr@138.26.125.8) Quit (Ping timeout: 480 seconds)
[20:34] * alram (~alram@192.41.52.12) Quit (Quit: leaving)
[20:34] * oro (~oro@84-72-20-79.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:35] * imjpr (~imjpr@138.26.125.8) Quit (Ping timeout: 480 seconds)
[20:35] <debian112> what is suggested for disk controller: HBA or RAID?
[20:35] * rotbeard (~redbeard@x5f74c8b8.dyn.telefonica.de) Quit (Quit: Leaving)
[20:35] * alram (~alram@192.41.52.12) has joined #ceph
[20:39] <TheSov> going through the instructions step by step, created the monitors, created a cluster, went to "gatherkeys" and it's erroring that it could not find the keyring
[20:40] <TheSov> is that deprecated in hammer?
[20:44] * nsoffer (~nsoffer@bzq-79-177-255-248.red.bezeqint.net) has joined #ceph
[20:46] * vbellur (~vijay@122.171.94.84) Quit (Ping timeout: 480 seconds)
[20:51] <Anticimex> wasn't radosgw recently enhanced with "object versions" ?
[20:51] <Anticimex> or is that work-in-progress?
[20:51] <Anticimex> docs/master/radosgw says no to versions in both swift and s3 case
[20:55] <TheSov> can anyone tell me why when i create a cluster it adds keys to a keyring clustername.mon.keyring, but when i type gatherkeys it looks for /etc/ceph/ceph.client.admin.keyring?
[20:57] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[20:59] * Xa (~Rehevkor@3DDAAAIVO.tor-irc.dnsbl.oftc.net) Quit ()
[20:59] * arsenaali (~Popz@ncc-1701-a.tor-exit.network) has joined #ceph
[21:01] * johanni_ (~johanni@129.210.115.36) Quit (Remote host closed the connection)
[21:02] * jcsp1 (~Adium@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[21:02] * jcsp (~Adium@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[21:04] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) has joined #ceph
[21:08] * johanni (~johanni@129.210.115.36) Quit (Ping timeout: 480 seconds)
[21:10] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[21:14] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[21:20] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[21:23] * bitserker (~toni@188.87.126.203) Quit (Ping timeout: 480 seconds)
[21:29] * arsenaali (~Popz@3DDAAAIXB.tor-irc.dnsbl.oftc.net) Quit ()
[21:29] * Spikey (~Xerati@176.10.99.202) has joined #ceph
[21:31] <jiyer> <TheSov> when you run gatherkeys you need to pass "--cluster=<your_clustername>" as argument...
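A sketch of jiyer's suggestion; labcluster and mon1 are placeholders for TheSov's actual cluster and monitor names:

    ceph-deploy --cluster=labcluster gatherkeys mon1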
[21:33] * aj__ (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[21:34] * LeaChim (~LeaChim@host86-163-124-72.range86-163.btcentralplus.com) has joined #ceph
[21:40] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:43] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[21:51] * valeech (~valeech@pool-72-86-37-215.clppva.fios.verizon.net) has joined #ceph
[21:54] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[21:59] * Spikey (~Xerati@9S0AAAFDO.tor-irc.dnsbl.oftc.net) Quit ()
[21:59] * linjan (~linjan@213.8.240.146) Quit (Quit: ?????????? ?? ???? ??????)
[21:59] * Jourei (~kalmisto@politkovskaja.torservers.net) has joined #ceph
[22:06] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[22:07] * sleinen1 (~Adium@2001:620:0:82::103) has joined #ceph
[22:11] * mattronix (~quassel@mail.mattronix.nl) Quit (Remote host closed the connection)
[22:11] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[22:11] * mattronix (~quassel@mail.mattronix.nl) has joined #ceph
[22:14] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:23] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[22:25] <Anticimex> a ceph project milestone: http://virtualgeek.typepad.com/virtual_geek/2015/06/scaleio-free-and-frictionless-access-what-are-you-going-to-do.html
[22:26] <Anticimex> you made EMC release open source software
[22:29] * Jourei (~kalmisto@9S0AAAFFH.tor-irc.dnsbl.oftc.net) Quit ()
[22:29] * rendar (~I@host82-178-dynamic.36-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[22:32] * jcsp1 (~Adium@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[22:32] * rendar (~I@host82-178-dynamic.36-79-r.retail.telecomitalia.it) has joined #ceph
[22:33] * alram_ (~alram@192.41.52.12) has joined #ceph
[22:33] * erice_ (~eric@50.245.231.209) has joined #ceph
[22:33] * johanni (~johanni@129.210.115.36) has joined #ceph
[22:33] * midnightrunner (~midnightr@216.113.160.71) has joined #ceph
[22:33] * mgolub (~Mikolaj@91.225.200.88) has joined #ceph
[22:33] * georgem1 (~Adium@fwnat.oicr.on.ca) has joined #ceph
[22:34] * johanni_ (~johanni@129.210.115.36) has joined #ceph
[22:34] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[22:36] * kuroneko_ (~kuroneko@2600:3c01::f03c:91ff:fe96:1bfe) has joined #ceph
[22:36] * ZyTer_ (~ZyTer@ghostbusters.apinnet.fr) has joined #ceph
[22:37] * dosaboy_ (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[22:37] * mjevans_ (~mjevans@li984-246.members.linode.com) has joined #ceph
[22:37] * frickler_ (~jens@v1.jayr.de) has joined #ceph
[22:37] * dostrow_ (~dostrow@bunker.bloodmagic.com) has joined #ceph
[22:37] * dlan (~dennis@116.228.88.131) has joined #ceph
[22:37] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * LeaChim (~LeaChim@host86-163-124-72.range86-163.btcentralplus.com) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * alram (~alram@192.41.52.12) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * mykola (~Mikolaj@91.225.200.88) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * midnight_ (~midnightr@216.113.160.71) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * georgem (~Adium@fwnat.oicr.on.ca) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * brad_mssw (~brad@66.129.88.50) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * erice (~eric@50.245.231.209) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * mjevans (~mjevans@li984-246.members.linode.com) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * kuroneko (~kuroneko@yayoi.sysadninjas.net) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * eternaleye (~eternaley@50.245.141.73) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * Psi-Jack (~psi-jack@lhmon.linux-help.org) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * dlan_ (~dennis@116.228.88.131) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * Vivek_ (~vivek@96.126.115.102) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * dostrow (~dostrow@bunker.bloodmagic.com) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * MaZ- (~maz@00016955.user.oftc.net) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * beardo (~beardo__@beardo.cc.lehigh.edu) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * ccourtaut (~ccourtaut@178.62.125.124) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * ZyTer (~ZyTer@ghostbusters.apinnet.fr) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * acaos_ (~zac@209.99.103.42) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * raso (~raso@deb-multimedia.org) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * frickler (~jens@v1.jayr.de) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * steveeJ (~steveeJ@virthost3.stefanjunker.de) Quit (magnet.oftc.net helix.oftc.net)
[22:37] * Vivek (~vivek@96.126.115.102) has joined #ceph
[22:37] * acaos (~zac@209.99.103.42) has joined #ceph
[22:38] * lpabon (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[22:38] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[22:38] * LeaChim (~LeaChim@host86-163-124-72.range86-163.btcentralplus.com) has joined #ceph
[22:38] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[22:38] * alram (~alram@192.41.52.12) has joined #ceph
[22:38] * mykola (~Mikolaj@91.225.200.88) has joined #ceph
[22:38] * midnight_ (~midnightr@216.113.160.71) has joined #ceph
[22:38] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[22:38] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[22:38] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[22:38] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[22:38] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[22:38] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[22:38] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[22:38] * MaZ- (~maz@00016955.user.oftc.net) has joined #ceph
[22:38] * beardo (~beardo__@beardo.cc.lehigh.edu) has joined #ceph
[22:38] * acaos_ (~zac@209.99.103.42) has joined #ceph
[22:38] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[22:38] * raso (~raso@deb-multimedia.org) has joined #ceph
[22:38] * frickler (~jens@v1.jayr.de) has joined #ceph
[22:38] * steveeJ (~steveeJ@virthost3.stefanjunker.de) has joined #ceph
[22:38] * markl (~mark@knm.org) has joined #ceph
[22:38] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Quit: Leaving)
[22:38] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Max SendQ exceeded)
[22:39] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Ping timeout: 480 seconds)
[22:39] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:39] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) has joined #ceph
[22:39] * alram (~alram@192.41.52.12) Quit (Ping timeout: 480 seconds)
[22:39] * dlan_ (~dennis@116.228.88.131) Quit (Read error: No route to host)
[22:39] * Psi-Jack (~psi-jack@lhmon.linux-help.org) has joined #ceph
[22:39] * midnight_ (~midnightr@216.113.160.71) Quit (Ping timeout: 480 seconds)
[22:39] * oro (~oro@84-72-20-79.dclient.hispeed.ch) has joined #ceph
[22:39] * acaos_ (~zac@209.99.103.42) Quit (Read error: No route to host)
[22:39] * frickler (~jens@v1.jayr.de) Quit (Ping timeout: 480 seconds)
[22:39] * steveeJ (~steveeJ@virthost3.stefanjunker.de) Quit (Ping timeout: 480 seconds)
[22:39] * mykola (~Mikolaj@91.225.200.88) Quit (Ping timeout: 480 seconds)
[22:39] * eternaleye (~eternaley@50.245.141.73) Quit (Ping timeout: 480 seconds)
[22:39] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[22:39] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[22:39] * steveeJ (~steveeJ@virthost3.stefanjunker.de) has joined #ceph
[22:39] * ccourtaut (~ccourtaut@178.62.125.124) has joined #ceph
[22:39] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[22:40] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[22:42] * peppe1 (~peppe1@129.192.170.252) has joined #ceph
[22:43] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[22:43] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[22:44] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[22:45] * wschulze (~wschulze@cpe-69-206-242-231.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[22:46] * mgolub (~Mikolaj@91.225.200.88) Quit (Quit: away)
[22:46] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[22:47] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) Quit (Quit: leaving)
[22:49] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[22:50] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[22:59] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[23:00] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[23:03] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[23:04] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Ping timeout: 480 seconds)
[23:04] * Borf (~Borf@3DDAAAI41.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:04] * c4tech (~c4tech@199.91.185.156) has joined #ceph
[23:09] * bene2 (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[23:13] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[23:14] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[23:23] * xarses (~andreww@12.164.168.117) Quit (Remote host closed the connection)
[23:24] * Hemanth (~Hemanth@117.221.99.112) Quit (Ping timeout: 480 seconds)
[23:25] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[23:26] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[23:26] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit ()
[23:28] * tupper (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[23:29] <TheSov> jiyer, thanks!
[23:29] <TheSov> the online docs don't say that
[23:29] <TheSov> someone really needs to update those
[23:30] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[23:32] * kuroneko_ is now known as kuroneko
[23:33] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[23:33] * Hemanth (~Hemanth@117.192.229.218) has joined #ceph
[23:34] * Borf (~Borf@3DDAAAI41.tor-irc.dnsbl.oftc.net) Quit ()
[23:34] <johanni> peppe: So I thought about it and realized: either run sudo ceph -s or check the permissions of your keyring
[23:37] <johanni> peppe: Also check that there is only one keyring. If there are multiple, you can get issues. Make sure you use the full path to the /etc/ceph keyring
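johanni's checks written out as commands, using the standard admin keyring path discussed above:

    sudo ceph -s
    ls -l /etc/ceph/ceph.client.admin.keyring
    ceph -s --keyring /etc/ceph/ceph.client.admin.keyring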
[23:37] * destrudo (~destrudo@64.142.74.180) Quit (Ping timeout: 480 seconds)
[23:37] * alram_ (~alram@192.41.52.12) Quit (Quit: Lost terminal)
[23:37] <aarontc> hey cephers! After an extended power outage, I have three OSDs that can't boot, saying: journal FileJournal::wrap_read_bl: safe_read_exact 2195581344~4196411 returned -5
[23:38] * alram (~alram@192.41.52.12) has joined #ceph
[23:38] <aarontc> has anyone experienced something similar, and if so, how did you resolve it?
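One approach sometimes used when only the journal is damaged and the affected PGs have healthy replicas elsewhere is to recreate the journal, sketched below; this is risky and not specific advice for a -5 (EIO) read error, which may indicate failing media. The OSD id 12 is a placeholder:

    sudo service ceph stop osd.12        # assumes sysvinit-style service management
    ceph-osd -i 12 --flush-journal       # will fail if the journal is unreadable
    ceph-osd -i 12 --mkjournal           # creates a fresh journal, discarding pending writes
    sudo service ceph start osd.12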
[23:38] * TGF (~Roy@hessel0.torservers.net) has joined #ceph
[23:39] * madkiss (~madkiss@46.189.28.90) Quit (Quit: Leaving.)
[23:41] * destrudo (~destrudo@64.142.74.180) has joined #ceph
[23:41] * xarses (~andreww@12.164.168.117) has joined #ceph
[23:42] * valeech (~valeech@pool-72-86-37-215.clppva.fios.verizon.net) Quit (Quit: valeech)
[23:48] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:49] * jbautista- (~wushudoin@209.132.181.86) has joined #ceph
[23:49] * gregmark1 (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[23:50] * tupper (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[23:51] * tupper (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[23:55] * destrudo_ (~destrudo@64.142.74.180) has joined #ceph
[23:56] * destrudo (~destrudo@64.142.74.180) Quit (Read error: Connection reset by peer)
[23:56] * wushudoin_ (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[23:57] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:57] * jbautista- (~wushudoin@209.132.181.86) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.