#ceph IRC Log


IRC Log for 2016-06-01

Timestamps are in GMT/BST.

[0:01] * ntpttr_laptop (~ntpttr@192.55.54.44) has joined #ceph
[0:10] <MentalRay> oh I scared everyone :p
[0:12] * mattbenjamin1 (~mbenjamin@12.118.3.106) Quit (Ping timeout: 480 seconds)
[0:14] * fsimonce (~simon@host128-29-dynamic.250-95-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:17] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:8490:50ec:39e9:deca) Quit (Ping timeout: 480 seconds)
[0:26] * spate (~Mraedis@7V7AAFISY.tor-irc.dnsbl.oftc.net) Quit ()
[0:33] * al (d@niel.cx) Quit (Remote host closed the connection)
[0:34] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[0:35] * jamesw_u3d (~textual@108-248-86-161.lightspeed.austtx.sbcglobal.net) has joined #ceph
[0:36] * bene (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:36] * newbie46 (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[0:37] * al (quassel@niel.cx) has joined #ceph
[0:38] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[0:39] * jamesw_u3d (~textual@108-248-86-161.lightspeed.austtx.sbcglobal.net) has left #ceph
[0:39] * jamesw_u3d (~textual@108-248-86-161.lightspeed.austtx.sbcglobal.net) has joined #ceph
[0:46] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:47] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[0:50] * jamesw_u3d (~textual@108-248-86-161.lightspeed.austtx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[0:50] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[0:52] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[0:52] * sw3 (sweaung@2400:6180:0:d0::66:100f) Quit (Ping timeout: 480 seconds)
[0:52] * sw3 (sweaung@2400:6180:0:d0::66:100f) has joined #ceph
[0:56] * sbfox (~Adium@vancouver.xmatters.com) has joined #ceph
[0:56] * sbfox (~Adium@vancouver.xmatters.com) Quit ()
[0:58] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[1:01] * Dinnerbone (~QuantumBe@193.189.117.180) has joined #ceph
[1:01] * madkiss (~madkiss@2001:6f8:12c3:f00f:edb1:7084:2283:e848) has joined #ceph
[1:07] * madkiss1 (~madkiss@ip5b414c62.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[1:19] * onyb (~ani07nov@112.133.232.18) Quit (Quit: raise SystemExit())
[1:19] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[1:23] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Remote host closed the connection)
[1:25] * oms101 (~oms101@p20030057EA07D400C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:25] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:30] * Dinnerbone (~QuantumBe@4MJAAFSZM.tor-irc.dnsbl.oftc.net) Quit ()
[1:30] * MJXII (~FNugget@tor-exit-01.1d4.us) has joined #ceph
[1:33] * oms101 (~oms101@p20030057EA784E00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:33] <ronrib> hummmmm Performing full device TRIM (238.42TiB)
[1:34] * dgurtner (~dgurtner@178.197.239.57) has joined #ceph
[1:35] * vata (~vata@cable-192.222.249.207.electronicbox.net) has joined #ceph
[1:40] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[1:46] * PaulCuzner (~paul@115-188-69-199.jetstream.xtra.co.nz) has joined #ceph
[1:46] * badone (~badone@66.187.239.16) Quit (Ping timeout: 480 seconds)
[1:48] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[1:48] * abeck (~textual@109.202.107.10) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:52] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:53] * jamesw_u3d (~textual@108-248-86-161.lightspeed.austtx.sbcglobal.net) has joined #ceph
[2:00] * realitysandwich (~perry@b2b-78-94-59-114.unitymedia.biz) has joined #ceph
[2:00] * ntpttr_laptop (~ntpttr@192.55.54.44) Quit (Remote host closed the connection)
[2:00] * ntpttr_laptop (~ntpttr@192.55.54.44) has joined #ceph
[2:00] * MJXII (~FNugget@06SAADBAT.tor-irc.dnsbl.oftc.net) Quit ()
[2:00] * Phase (~Mousey@37.220.35.36) has joined #ceph
[2:01] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) has joined #ceph
[2:01] * Phase is now known as Guest2683
[2:02] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[2:02] * gauravbafna (~gauravbaf@122.167.78.143) has joined #ceph
[2:03] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[2:04] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) Quit ()
[2:08] * raeven (~raeven@h89n10-oes-a31.ias.bredband.telia.com) Quit (Ping timeout: 480 seconds)
[2:08] * rendar (~I@host77-178-dynamic.7-87-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[2:10] * gauravbafna (~gauravbaf@122.167.78.143) Quit (Ping timeout: 480 seconds)
[2:11] * raeven (~raeven@h89n10-oes-a31.ias.bredband.telia.com) has joined #ceph
[2:11] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[2:13] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[2:18] * bjornar (~bjornar@109.247.131.38) Quit (Ping timeout: 480 seconds)
[2:22] * brad (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[2:26] * bjornar (~bjornar@109.247.131.38) has joined #ceph
[2:27] * wushudoin (~wushudoin@2601:646:9501:d2b2:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[2:27] * badone (~badone@113.29.24.218) has joined #ceph
[2:30] * Guest2683 (~Mousey@06SAADBB8.tor-irc.dnsbl.oftc.net) Quit ()
[2:32] * ntpttr_laptop (~ntpttr@192.55.54.44) Quit (Remote host closed the connection)
[2:35] * brad (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Quit: WeeChat 1.4)
[2:36] * aboyle (~aboyle__@2001:470:8b2d:1e1c:d267:e5ff:feec:c62c) Quit (Ping timeout: 480 seconds)
[2:40] * aboyle (~aboyle__@ardent.csail.mit.edu) has joined #ceph
[2:42] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[2:45] * dgurtner (~dgurtner@178.197.239.57) Quit (Ping timeout: 480 seconds)
[2:45] <ronrib> works much better: mkfs.btrfs /dev/rbd0 --nodiscard
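
(For reference: a minimal sketch of the workflow ronrib describes; the pool and image names are placeholders. mkfs.btrfs normally TRIMs the whole device first, which on a 238 TiB RBD image can run for a very long time; -K/--nodiscard skips that pass.)

    # map an existing RBD image and format it without the full-device TRIM
    sudo rbd map rbd/bigvol                 # exposes the image as e.g. /dev/rbd0
    sudo mkfs.btrfs --nodiscard /dev/rbd0   # skip the TRIM pass
    sudo mount /dev/rbd0 /mnt/bigvol
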
[2:47] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) has joined #ceph
[2:47] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) Quit ()
[2:47] * georgem (~Adium@206.108.127.16) has joined #ceph
[2:48] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:52] * georgem1 (~Adium@104-222-119-175.cpe.teksavvy.com) has joined #ceph
[2:52] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[2:53] * georgem1 (~Adium@104-222-119-175.cpe.teksavvy.com) Quit ()
[2:53] * georgem (~Adium@206.108.127.16) has joined #ceph
[2:54] * realitysandwich (~perry@b2b-78-94-59-114.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[3:11] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Quit: Leaving)
[3:13] * khyron (~khyron@200.77.224.239) Quit (Quit: The computer fell asleep)
[3:14] * gauravbafna (~gauravbaf@122.167.78.143) has joined #ceph
[3:22] * gauravbafna (~gauravbaf@122.167.78.143) Quit (Ping timeout: 480 seconds)
[3:28] * natarej (~natarej@101.188.54.14) has joined #ceph
[3:28] * natarej__ (~natarej@101.188.54.14) Quit (Read error: Connection reset by peer)
[3:29] * penguinRaider (~KiKo@182.18.155.15) Quit (Ping timeout: 480 seconds)
[3:33] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:39] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[3:39] * penguinRaider (~KiKo@146.185.31.226) has joined #ceph
[3:42] * kefu (~kefu@183.193.182.2) has joined #ceph
[3:57] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (Quit: WeeChat 1.5)
[4:01] * flisky (~Thunderbi@36.110.40.28) has joined #ceph
[4:02] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[4:03] * shyu (~shyu@218.241.172.114) has joined #ceph
[4:03] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[4:04] * joshd (~jdurgin@206.169.83.146) Quit (Quit: Leaving.)
[4:04] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[4:08] * kefu (~kefu@183.193.182.2) Quit (Ping timeout: 480 seconds)
[4:10] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[4:13] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[4:20] * jamesw_u3d (~textual@108-248-86-161.lightspeed.austtx.sbcglobal.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:25] * rburkholder (~overonthe@199.68.193.54) Quit (Ping timeout: 480 seconds)
[4:25] * boredatwork (~overonthe@199.68.193.54) Quit (Ping timeout: 480 seconds)
[4:32] * antongri_ (~antongrib@pool-173-66-18-82.washdc.fios.verizon.net) has joined #ceph
[4:33] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[4:35] * Bwana (~cryptk@7V7AAFI43.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:37] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[4:39] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:39] * antongribok (~antongrib@216.207.42.140) Quit (Ping timeout: 480 seconds)
[4:42] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[4:50] * lurbs (user@uber.geek.nz) Quit (Remote host closed the connection)
[4:51] * lurbs (user@uber.geek.nz) has joined #ceph
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[5:05] * Bwana (~cryptk@7V7AAFI43.tor-irc.dnsbl.oftc.net) Quit ()
[5:07] * Vacuum__ (~Vacuum@88.130.222.87) has joined #ceph
[5:09] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[5:09] * demonspork (~CydeWeys@turing.tor-exit.calyxinstitute.org) has joined #ceph
[5:10] * antongri_ (~antongrib@pool-173-66-18-82.washdc.fios.verizon.net) Quit (Quit: Leaving...)
[5:14] * Vacuum_ (~Vacuum@88.130.210.121) Quit (Ping timeout: 480 seconds)
[5:35] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[5:36] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Remote host closed the connection)
[5:39] * demonspork (~CydeWeys@4MJAAFS8E.tor-irc.dnsbl.oftc.net) Quit ()
[5:39] * pepzi (~darks@46.183.216.180) has joined #ceph
[5:40] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[5:46] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[5:46] <IvanJobs> hi, cephers, I'm wondering: can I use two different filesystems for different OSDs in a single ceph cluster?
[6:00] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[6:01] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[6:01] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[6:09] * deepthi (~deepthi@122.171.79.229) has joined #ceph
[6:09] * pepzi (~darks@7V7AAFI7E.tor-irc.dnsbl.oftc.net) Quit ()
[6:10] * Keiya1 (~Tenk@edwardsnowden0.torservers.net) has joined #ceph
[6:11] * antongribok (~antongrib@216.207.42.140) has joined #ceph
[6:20] * overclk (~quassel@117.202.96.214) has joined #ceph
[6:25] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[6:25] * natarej (~natarej@101.188.54.14) Quit (Read error: Connection reset by peer)
[6:25] * natarej (~natarej@101.188.54.14) has joined #ceph
[6:27] * natarej (~natarej@101.188.54.14) Quit (Read error: Connection reset by peer)
[6:28] * natarej (~natarej@101.188.54.14) has joined #ceph
[6:29] * natarej (~natarej@101.188.54.14) Quit (Read error: Connection reset by peer)
[6:29] * natarej (~natarej@101.188.54.14) has joined #ceph
[6:36] * krypto (~krypto@G68-90-105-201.sbcis.sbc.com) has joined #ceph
[6:37] * shyu (~shyu@218.241.172.114) Quit (Ping timeout: 480 seconds)
[6:39] * Keiya1 (~Tenk@4MJAAFTAG.tor-irc.dnsbl.oftc.net) Quit ()
[6:40] * TGF1 (~Grum@4MJAAFTBN.tor-irc.dnsbl.oftc.net) has joined #ceph
[6:50] <[arx]> yes
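
(For reference: the backing filesystem is chosen per OSD at creation time, so mixing filesystems in one cluster is possible. A hedged sketch using the Jewel-era ceph-disk tool; the device names are placeholders.)

    # prepare one OSD on xfs and another on btrfs in the same cluster
    sudo ceph-disk prepare --fs-type xfs   /dev/sdb
    sudo ceph-disk prepare --fs-type btrfs /dev/sdc
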
[6:52] * natarej_ (~natarej@149.56.5.89) has joined #ceph
[6:54] * matejz (~matejz@element.planetq.org) has joined #ceph
[6:58] * natarej (~natarej@101.188.54.14) Quit (Ping timeout: 480 seconds)
[7:00] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[7:02] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:07] * deepthi (~deepthi@122.171.79.229) Quit (Read error: Connection reset by peer)
[7:07] * prallab (~prallab@216.207.42.137) has joined #ceph
[7:07] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) Quit (Read error: Connection reset by peer)
[7:07] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:08] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) has joined #ceph
[7:09] * EinstCrazy (~EinstCraz@58.247.117.134) has joined #ceph
[7:09] * TGF1 (~Grum@4MJAAFTBN.tor-irc.dnsbl.oftc.net) Quit ()
[7:09] * KungFuHamster (~rikai@ns316491.ip-37-187-129.eu) has joined #ceph
[7:12] * vata (~vata@cable-192.222.249.207.electronicbox.net) Quit (Quit: Leaving.)
[7:13] * rakeshgm (~rakesh@106.51.26.213) Quit (Remote host closed the connection)
[7:13] * linjan_ (~linjan@176.195.175.108) has joined #ceph
[7:14] * matejz (~matejz@element.planetq.org) Quit (Quit: matejz)
[7:17] * shyu (~shyu@218.241.172.114) has joined #ceph
[7:21] * kefu_ is now known as kefu
[7:22] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:24] * deepthi (~deepthi@106.206.145.35) has joined #ceph
[7:38] * natarej__ (~natarej@101.188.54.14) has joined #ceph
[7:39] * KungFuHamster (~rikai@4MJAAFTCM.tor-irc.dnsbl.oftc.net) Quit ()
[7:40] * gauravbafna (~gauravbaf@49.32.0.124) has joined #ceph
[7:40] * Maza (~osuka_@91.219.236.136) has joined #ceph
[7:41] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[7:41] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[7:43] * IvanJobs_ (~ivanjobs@103.50.11.146) has joined #ceph
[7:46] * natarej_ (~natarej@149.56.5.89) Quit (Ping timeout: 480 seconds)
[7:46] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[7:46] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:46] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[7:50] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Ping timeout: 480 seconds)
[7:52] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[7:55] * linjan_ (~linjan@176.195.175.108) Quit (Ping timeout: 480 seconds)
[7:55] * PaulCuzner (~paul@115-188-69-199.jetstream.xtra.co.nz) has left #ceph
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[8:01] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[8:02] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[8:04] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) has joined #ceph
[8:06] * Lokta (~Lokta@carbon.coe.int) has joined #ceph
[8:09] * Maza (~osuka_@06SAADBPT.tor-irc.dnsbl.oftc.net) Quit ()
[8:10] * shylesh__ (~shylesh@121.244.87.118) has joined #ceph
[8:11] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[8:15] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[8:15] * Miouge (~Miouge@188.188.76.209) has joined #ceph
[8:16] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[8:19] * prallab (~prallab@216.207.42.137) Quit ()
[8:23] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:24] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[8:25] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) Quit (Ping timeout: 480 seconds)
[8:25] <NTTEC> Hello everyone
[8:25] * EinstCrazy (~EinstCraz@58.247.117.134) Quit (Remote host closed the connection)
[8:26] <NTTEC> I'd like to ask a question regarding osd
[8:28] <NTTEC> I recently installed ceph (it was my first time using and studying it), but upon reaching the osd prepare and activate steps, I found out that my osd is always down.
[8:28] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[8:28] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[8:29] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[8:29] * dvanders (~dvanders@dvanders-pro.cern.ch) has joined #ceph
[8:31] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[8:33] * northrup (~northrup@173.14.101.193) has joined #ceph
[8:35] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:35] * matejz (~matejz@141.255.254.208) has joined #ceph
[8:36] * matejz is now known as matej22211
[8:36] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[8:39] <northrup> I know it's a long shot, but is anyone awake in the ceph channel?
[8:40] <Be-El> sipping the first coffee....
[8:40] * DoDzy (~Sliker@06SAADBSO.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:40] <northrup> I've got a node that says it's time lagged, however I'm syncing it and its other two peers with NTP and it's NOT time lagged
[8:40] <BranchPredictor> ... so not exactly awake, but close.
[8:41] <Be-El> northrup: are virtual machines involved (either the nodes itself or the ntp server)?
[8:41] <northrup> Yes, they're in Azure's cloud (not by my choice)
[8:42] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[8:43] <Be-El> northrup: i had similar problems with our monitors running as kvm VMs. the time accuracy of VMs is not good
[8:44] <Be-El> northrup: if the deviation is not large (the warning should include it), you can adjust the warning threshold
[8:44] <northrup> Be-El I'm not sure what to do, we don't have physical hardware... and I'm leery of increasing the timing tolerance beyond the default .05 ms
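
(For reference: the threshold being discussed is mon_clock_drift_allowed, whose default is 0.05 seconds, not milliseconds. A hedged ceph.conf sketch for relaxing it slightly; the value is only an example.)

    [mon]
    # default is 0.05 (seconds); raise with care, real skew confuses the monitors
    mon clock drift allowed = 0.1
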
[8:45] <northrup> What I'm getting is this:
[8:45] <northrup> HEALTH_WARN mds ceph-mon1 is laggy
[8:45] <northrup> mds.ceph-mon1 at 10.1.0.53:6800/4984 is laggy/unresponsive
[8:45] <northrup> no actual time delta
[8:46] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[8:47] <Be-El> ah, it's the mds, not one of the mons
[8:47] <Be-El> the mds may be busy or overloaded
[8:49] * northrup head -> desk
[8:49] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:50] <northrup> ok - the other engineer who's working on this never wired the scripts to autostart... so the ceph MDS service was not started after a reboot
[8:50] * rotbeard (~redbeard@aftr-109-90-233-106.unity-media.net) has joined #ceph
[8:51] <Be-El> yes, that's also an explanation for 'unresponsive' ;-)
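
(For reference: on Ubuntu 14.04 with Jewel the ceph daemons run as upstart jobs, so the fix discussed here is just starting the MDS by hand; a hedged sketch, with 'ceph-mon1' being the mds id from the log.)

    sudo start ceph-mds id=ceph-mon1   # start the daemon now
    ceph mds stat                      # confirm it registers as active again
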
[8:54] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) has joined #ceph
[8:55] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:57] * dvanders (~dvanders@dvanders-pro.cern.ch) Quit (Ping timeout: 480 seconds)
[8:59] * yatin (~yatin@161.163.44.8) has joined #ceph
[8:59] * ram (~oftc-webi@static-202-65-140-146.pol.net.in) Quit (Ping timeout: 480 seconds)
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[9:01] * dvanders (~dvanders@2001:1458:202:ed::101:124a) has joined #ceph
[9:04] * pabluk_ is now known as pabluk
[9:09] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[9:09] * DoDzy (~Sliker@06SAADBSO.tor-irc.dnsbl.oftc.net) Quit ()
[9:10] * Borf (~cryptk@tor-exit-1.h4x0.red) has joined #ceph
[9:12] * analbeard (~shw@support.memset.com) has joined #ceph
[9:13] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) has joined #ceph
[9:16] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[9:18] <matej22211> hey guys
[9:20] * fsimonce (~simon@host128-29-dynamic.250-95-r.retail.telecomitalia.it) has joined #ceph
[9:20] <etienneme> Hi!
[9:23] * flisky (~Thunderbi@36.110.40.28) Quit (Ping timeout: 480 seconds)
[9:24] * northrup (~northrup@173.14.101.193) Quit (Quit: Textual IRC Client: www.textualapp.com)
[9:26] * nttec (~AndChat32@119.94.170.178) has joined #ceph
[9:27] <nttec> Anyone here?
[9:34] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Quit: Leaving)
[9:34] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[9:37] * antongri_ (~antongrib@216.207.42.140) has joined #ceph
[9:38] * AndChat|329121 (~AndChat32@122.53.162.158) has joined #ceph
[9:39] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:39] * Borf (~cryptk@06SAADBT9.tor-irc.dnsbl.oftc.net) Quit ()
[9:40] * Throlkim (~basicxman@politkovskaja.torservers.net) has joined #ceph
[9:42] * ade (~abradshaw@nat-pool-str-t.redhat.com) has joined #ceph
[9:43] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[9:43] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[9:43] * nttec (~AndChat32@119.94.170.178) Quit (Ping timeout: 480 seconds)
[9:44] * antongribok (~antongrib@216.207.42.140) Quit (Ping timeout: 480 seconds)
[9:47] * beck (~textual@213.152.161.45) has joined #ceph
[9:48] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[9:51] * beck is now known as abeck
[9:52] * linjan_ (~linjan@86.62.112.22) has joined #ceph
[9:59] * DanFoster (~Daniel@2a00:1ee0:3:1337:44a6:c3d5:84e2:371c) has joined #ceph
[10:00] * povian (~povian@211.189.163.250) has joined #ceph
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[10:01] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[10:08] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[10:09] * Throlkim (~basicxman@7V7AAFJGM.tor-irc.dnsbl.oftc.net) Quit ()
[10:09] * Pirate (~zviratko@strasbourg-tornode.eddai.su) has joined #ceph
[10:12] * yatin (~yatin@161.163.44.8) has joined #ceph
[10:17] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[10:20] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:21] * dgurtner (~dgurtner@178.197.232.251) has joined #ceph
[10:21] * yatin (~yatin@161.163.44.8) has joined #ceph
[10:24] * Mika_c (~Mika@122.146.93.152) has joined #ceph
[10:27] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[10:27] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit ()
[10:27] * hybrid512 (~walid@195.200.189.206) has joined #ceph
[10:27] * hybrid512 (~walid@195.200.189.206) Quit ()
[10:27] * hybrid512 (~walid@195.200.189.206) has joined #ceph
[10:27] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[10:28] * karnan (~karnan@121.244.87.117) has joined #ceph
[10:30] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[10:32] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[10:34] * nttec (~AndChat32@119.94.170.178) has joined #ceph
[10:36] * joao (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[10:36] * ChanServ sets mode +o joao
[10:37] * jluis (~joao@8.184.114.89.rev.vodafone.pt) Quit (Ping timeout: 480 seconds)
[10:37] * andrei__1 (~andrei@37.220.104.190) has joined #ceph
[10:37] <andrei__1> hello guys
[10:38] <andrei__1> having a bit of an issue with my cluster
[10:38] * AndChat|329121 (~AndChat32@122.53.162.158) Quit (Ping timeout: 480 seconds)
[10:38] <andrei__1> had about 66K slow requests today
[10:38] <andrei__1> they are intermittent
[10:38] <andrei__1> come and go
[10:38] <andrei__1> it seems that on one of the osd servers all ceph-osd processes start consuming 200%+ cpu
[10:39] <andrei__1> I am running Jewel on ubuntu 14.04
[10:39] <andrei__1> 30 osds in total between 3 osd servers
[10:39] * jole (~oftc-webi@x4e377ff3.dyn.telefonica.de) has joined #ceph
[10:39] <andrei__1> backed by 2 x 3710 ssds for each osd server
[10:39] * Pirate (~zviratko@06SAADBW5.tor-irc.dnsbl.oftc.net) Quit ()
[10:40] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:40] <andrei__1> all servers can ping / ssh each other
[10:40] <andrei__1> no errors on the network interfaces
[10:40] <andrei__1> pleasee someone help me with identifying the issues
[10:40] <andrei__1> i don't want to restart ceph-osd processes, but rather find out the cause and fix it
[10:41] <andrei__1> as I have had intermittent slow requests for the past couple of years
[10:41] <andrei__1> and it's driving me crazy
[10:42] <bloatyfloat> andrei__1: the logs will tell you why requests are slow
[10:43] <bloatyfloat> may be dependent on other osds, but it's probably some hardware issue that you need to deal with
[10:43] <andrei__1> yeah, i've got way too much data to go through. like since the morning (uk time) i've had over 65K slow requests
[10:43] * LeaChim (~LeaChim@host86-168-126-119.range86-168.btcentralplus.com) has joined #ceph
[10:43] <bloatyfloat> Check network cables, switch performance
[10:43] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:44] <andrei__1> bloatyfloat, network cables/switch performance issues typically show up on the interface
[10:44] <bloatyfloat> if it's all in one unit, and you've restarted without issue before, do it
[10:44] <andrei__1> like errors/drops/etc
[10:44] <andrei__1> nothing like that
[10:44] <andrei__1> bloatyfloat, if i do restart, the problem will likely go away until a few weeks later
[10:44] <bloatyfloat> poor quality cable has a connection but slow throughput if you're using copper
[10:45] <andrei__1> i want to identify and correct while it's happening now
[10:45] <bloatyfloat> full switch buffers still send traffic, but in a bursty manner
[10:45] <bloatyfloat> Unless you are changing the amount of reads/writes in your cluster your problem is always there
[10:45] <andrei__1> bloatyfloat, I am on infiniband, still copper, but the cables are legit and crazy expensive
[10:45] * povian (~povian@211.189.163.250) Quit (Ping timeout: 480 seconds)
[10:46] <bloatyfloat> don't forget that blocked ops are blocked for 32 seconds before they are logged
[10:46] <andrei__1> bloatyfloat, my cluster is pretty static, apart from several TB of snapshots over the weekend
[10:46] <andrei__1> IB interface has absolutely no errors
[10:46] * nttec (~AndChat32@119.94.170.178) Quit (Ping timeout: 480 seconds)
[10:46] <bloatyfloat> I would recommend verifying network performance on the infiniband, to be safe
[10:47] <andrei__1> bloatyfloat, i've done this at least a few dozen times over the course of two years.
[10:47] <bloatyfloat> But as I say, start by checking the logs of the osds that are warning as slow
[10:47] <andrei__1> including extensive tests that were running over months
[10:47] <andrei__1> not a single error
[10:47] <andrei__1> while lots of slow requests came in during the time that the network tests were running
[10:48] <andrei__1> everytime this happens, ppl recommend to look at the network
[10:48] <andrei__1> it doesn't seem like its the network issue to be honest
[10:48] <andrei__1> i will check the logs for the reason of slow reqs
[10:48] <bloatyfloat> Whats your ratio of SSD journals to OSDs?
[10:49] <jole> Hi guys! I really need help :/ Situation: Ceph 9.2.0, after running "reweight-by-utilization", cluster got stuck in HEALTH_WARN, with "4 pgs stuck unclean" (but active+remapped). Tried with restarting/taking "out" OSDs where problematic PGs are located, but no success.
[10:49] <jole> Any ideas what I could do? This seems like a bug to me.
[10:50] <andrei__1> bloatyfloat, it's 1/5 using intel 3710
[10:50] <andrei__1> majority of slow requests are either currently waiting for subops from <osd>
[10:50] <bloatyfloat> OK, that's fine
[10:50] <andrei__1> or currently waiting for rw locks
[10:50] <bloatyfloat> ok, so check the logs on <osd>
[10:50] <bloatyfloat> what sata disks are you using?
[10:51] <andrei__1> i am using sas 3TB
[10:51] <andrei__1> with lsi raid card
[10:53] <bloatyfloat> OK, it's not entirely dissimilar to the hardware here, only we use sata
[10:53] * krypto (~krypto@G68-90-105-201.sbcis.sbc.com) Quit (Remote host closed the connection)
[10:53] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[10:53] <andrei__1> bloatyfloat, looking at the logs, there are no other types of slow requests
[10:53] <andrei__1> just those two types
[10:54] * krypto (~krypto@106.51.28.254) has joined #ceph
[10:54] * rendar (~I@host148-178-dynamic.7-87-r.retail.telecomitalia.it) has joined #ceph
[10:54] <andrei__1> bloatyfloat, from what i recall earlier, these types of slow requests typically indicate a network issue, is this right?
[10:55] <andrei__1> bloatyfloat, are you also on infiniband or ethernet?
[10:55] <bloatyfloat> 10G, we also saw similar errors when using consumer ssd journals
[10:56] <bloatyfloat> so it may be disk level
[10:56] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[10:57] <bloatyfloat> but I also seem to remember that large s3 buckets with non-sharded indexes may cause this, so if you have object storage usage, this may be something to prod
[10:57] <andrei__1> bloatyfloat, we've recently replaced the consumer ssds with enterprise ones
[10:57] <andrei__1> and it didn't really make any difference in terms of the slow requests
[10:57] <andrei__1> they are intermittent
[10:57] <andrei__1> like i've not had any slow request for the last 5 days
[10:57] <andrei__1> and today i've had over 65K and they keep on coming
[10:58] * nttec (~AndChat32@119.94.170.178) has joined #ceph
[10:59] <andrei__1> bloatyfloat, could you tell me a bit more about the s3 issue, please? We do use s3, but not extensively, mainly to backup some vms.
[10:59] <andrei__1> i don't believe we have buckets over 600gb / bucket
[10:59] <andrei__1> with total s3 storage use of around 2TB
[10:59] <bloatyfloat> large numbers of files (10m+) in a bucket created before index sharding was enabled (rather than large amounts of data)
[11:00] <bloatyfloat> was written to very frequently as well, which didn't help
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[11:01] * penguinRaider (~KiKo@146.185.31.226) Quit (Ping timeout: 480 seconds)
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[11:01] <nttec> How do I bring up an osd? My osd is always down.
[11:01] <bloatyfloat> I think I tracked this back by identifying the pools that were involved in the osds with r/w lock
[11:04] <bloatyfloat> you can get that from the object prefixes in the osd (and possibly mon) logs off the top of my head
[11:04] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[11:04] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) Quit (Ping timeout: 483 seconds)
[11:06] <nttec> I followed a how-to setup video on youtube, by a user named Mohammed i think. But im stuck with this problem.
[11:07] <Be-El> nttec: check the network connectivity between the osd(s) and the monitors
[11:08] <nttec> Be-El: I did turn off iptables on my ubuntu machine
[11:08] <Be-El> jole: you might try to trigger the actual backfill by manually changing the weight of one involved osd a little bit
[11:09] <Be-El> jole: what's the content of the osd log?
[11:09] <Be-El> eh...nttec
[11:09] <Be-El> nttec: just upload the log or the relevant parts to a pastebin
[11:10] * Architect (~Mattress@06SAADBZN.tor-irc.dnsbl.oftc.net) has joined #ceph
[11:12] * AndChat|329121 (~AndChat32@122.53.162.158) has joined #ceph
[11:12] * NTTEC_ (~nttec@122.53.162.158) has joined #ceph
[11:12] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[11:13] <NTTEC_> test
[11:13] <jole> One second, let me find the related line from the logs
[11:13] * povian (~povian@211.189.163.250) has joined #ceph
[11:13] <Be-El> jole: that log part was for nttec
[11:14] <andrei__1> bloatyfloat, thanks for your help. i don't think s3 is the issue here. they have nowhere near that amount of files. I've not heard of index sharding before. need to do some reading
[11:14] <andrei__1> bloatyfloat, do you often see slow requests on your cluster?
[11:15] <NTTEC_> Be-El: http://pastebin.com/67sMWyQr
[11:15] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[11:16] <jole> Be-El: sorry then :)
[11:17] <Be-El> NTTEC_: there should be log files in /var/log/ceph on hosts ceph03/ceph04. they contain the log of the actual OSD process. please put one of them (or the relevant part) in a pastebin
[11:17] * Vacuum_ (~Vacuum@i59F796F6.versanet.de) has joined #ceph
[11:17] <jole> Be-El: so you're basically suggesting that I run "ceph osd reweight <problematic_OSD> <weight_value>", correct?
[11:17] * nttec (~AndChat32@119.94.170.178) Quit (Ping timeout: 480 seconds)
[11:17] * povian_ (~povian@pa3ae73.tokynt01.ap.so-net.ne.jp) has joined #ceph
[11:18] <Be-El> jole: yes. use a very small deviation from the current weight to reduce change-induced backfilling, e.g. +/- 0.1
[11:19] <jole> Be-El: Okay, thanks! I'll try that.
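
(For reference: a hedged sketch of the nudge Be-El describes; the OSD id and weight are placeholders, and the goal is only to retrigger peering/backfill for the stuck PGs.)

    ceph pg dump_stuck unclean     # list the stuck PGs and the OSDs they map to
    ceph osd reweight 12 0.95      # small deviation on one involved OSD
    ceph -w                        # watch the PGs go active+clean, then restore the weight
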
[11:20] <NTTEC_> Be-El: here's the log on one of my osd http://pastebin.com/Z7A5VByq
[11:21] * povian (~povian@211.189.163.250) Quit (Ping timeout: 480 seconds)
[11:22] * TMM (~hp@185.5.121.201) has joined #ceph
[11:23] <Be-El> NTTEC_: 2016-06-01 16:05:57.843024 7fd7232e88c0 -1 osd.0 0 (36) File name too long
[11:24] * Vacuum__ (~Vacuum@88.130.222.87) Quit (Ping timeout: 480 seconds)
[11:26] <Be-El> NTTEC_: i would propose to recreate these OSDs with a different filesystem. xfs or btrfs should be fine, ext4 is problematic due to the attr limits
[11:26] <Be-El> -> off for lunch
[11:29] * dgurtner_ (~dgurtner@194.230.155.137) has joined #ceph
[11:29] * marcan_ (marcan@marcansoft.com) has joined #ceph
[11:29] <NTTEC_> Be-El: this would mean reformatting the osd server?
[11:31] * dgurtner (~dgurtner@178.197.232.251) Quit (Ping timeout: 480 seconds)
[11:32] * krypto (~krypto@106.51.28.254) Quit (Quit: Leaving)
[11:33] * hellertime (~Adium@a72-246-0-10.deploy.akamaitechnologies.com) has joined #ceph
[11:34] <raeven> NTTEC_: Yes
[11:34] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[11:34] <raeven> or at least the osd drives
[11:34] <hellertime> I'm stuck with both mds0 and mds-1 being "Behind on trimming", but I have no active mds sessions. I tried running 'ceph daemon mds.0 flush journal' but it just hangs… should I expect that operation to take some time to complete?
[11:35] * marcan (marcan@marcansoft.com) Quit (Ping timeout: 480 seconds)
[11:35] * marcan_ is now known as marcan
[11:36] <NTTEC_> raeven: what I did was set up 2 machines and install ubuntu on them. so do I have to reinstall ubuntu to convert to xfs, or is there a way to convert to xfs without reinstalling?
[11:36] * yatin (~yatin@161.163.44.8) has joined #ceph
[11:37] <raeven> NTTEC_: You should only need to reformat the drives that have the osds on them.
[11:37] <raeven> There is no way to convert between filesystems
[11:37] <raeven> not that i am aware of
[11:37] <NTTEC_> I see. thx. will do this.
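
(For reference: a hedged sketch of the usual remove-and-recreate sequence for one OSD on Ubuntu 14.04; the OSD id and device are placeholders.)

    ceph osd out 0
    sudo stop ceph-osd id=0                         # upstart job on trusty
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0
    sudo ceph-disk prepare --fs-type xfs /dev/sdb   # recreate on xfs instead of ext4
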
[11:37] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:38] * kawa2014 (~kawa@5.90.37.80) has joined #ceph
[11:39] * Architect (~Mattress@06SAADBZN.tor-irc.dnsbl.oftc.net) Quit ()
[11:40] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[11:40] * AG_Clinton (~Linkshot@politkovskaja.torservers.net) has joined #ceph
[11:40] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) has joined #ceph
[11:46] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[11:47] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[11:48] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) Quit (Ping timeout: 480 seconds)
[11:50] * rraja (~rraja@121.244.87.117) has joined #ceph
[11:50] * geli12 (~geli@1.136.97.79) has joined #ceph
[11:52] <hellertime> ok. I just stopped all my mds, restarted one, and still I can't either flush the journal or complete the journal trimming… I've not seen this type of error persist so badly before. anyone else?
[11:54] * bl3d (~bl3d@62.116.219.97) has joined #ceph
[11:55] * povian_ (~povian@pa3ae73.tokynt01.ap.so-net.ne.jp) Quit (Remote host closed the connection)
[11:56] <hellertime> can there be clients attached to an mds that don't appear in 'session ls'?
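
(For reference: a hedged sketch of the admin-socket queries usually used to answer that, run on the MDS host; 'mds.0' stands for the local daemon's id.)

    ceph daemon mds.0 session ls           # clients the MDS believes are connected
    ceph daemon mds.0 dump_ops_in_flight   # operations the MDS is currently stuck on
    ceph daemon mds.0 perf dump            # the mds_log counters show journal trim progress
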
[11:57] * NTTEC_ (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[11:59] * dvanders (~dvanders@2001:1458:202:ed::101:124a) Quit (Ping timeout: 480 seconds)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[12:01] * MrBy is now known as MrBy2
[12:03] * dvanders (~dvanders@dvanders-pro.cern.ch) has joined #ceph
[12:03] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:09] * AG_Clinton (~Linkshot@7V7AAFJLI.tor-irc.dnsbl.oftc.net) Quit ()
[12:10] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[12:10] * mhuang (~mhuang@58.100.85.155) has joined #ceph
[12:14] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:15] * Miouge_ (~Miouge@188.189.76.220) has joined #ceph
[12:16] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[12:18] * andrei__1 (~andrei@37.220.104.190) Quit (Ping timeout: 480 seconds)
[12:19] * kawa2014 (~kawa@5.90.37.80) Quit (Ping timeout: 480 seconds)
[12:19] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[12:20] * Miouge (~Miouge@188.188.76.209) Quit (Ping timeout: 480 seconds)
[12:20] * Miouge_ is now known as Miouge
[12:24] * andrei__1 (~andrei@37.220.104.190) has joined #ceph
[12:25] * ram (~oftc-webi@static-202-65-140-146.pol.net.in) has joined #ceph
[12:25] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[12:30] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[12:30] <ram> Hi I am configuring Ceph Cluster. I was geeting the following issue.
[12:30] <ram> $ ceph -w
[12:30] <ram>     cluster e18eab6e-0d30-4c21-abfe-10d67b205eb7
[12:30] <ram>      health HEALTH_WARN 8 pgs stuck inactive; 8 pgs stuck unclean
[12:30] <ram>      monmap e1: 1 mons at {ceph=192.168.2.101:6789/0}, election epoch 1, quorum 0 ceph
[12:30] <ram>      osdmap e24: 2 osds: 2 up, 2 in
[12:30] <ram>       pgmap v71: 256 pgs, 10 pools, 1408 bytes data, 44 objects
[12:30] <ram>             13970 MB used, 75674 MB / 94489 MB avail
[12:30] <ram>                    8 creating
[12:30] <ram>                  248 active+clean
[12:31] <ram> How to resolve this issue
[12:32] * realitysandwich (~perry@b2b-78-94-59-114.unitymedia.biz) has joined #ceph
[12:32] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[12:33] <Mika_c> Hi ram, can you print the result of "$ceph osd crush rule dump"
[12:34] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[12:34] <Mika_c> ram, How many replica of a pool ?
[12:38] * branto (~branto@213.175.37.12) has joined #ceph
[12:38] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[12:40] * _s1gma (~FierceFor@hessel3.torservers.net) has joined #ceph
[12:42] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[12:44] <bloatyfloat> andrei__1: sorry for the slow reply, we've not had them since we replaced the consumer SSDs and enabled bucket index sharding
[12:45] <bloatyfloat> one note about sharding is that I don't think it can be applied to existing buckets (that may have changed)
[12:45] <andrei__1> bloatyfloat, no worries! do you have any links re index sharding?
[12:45] <bloatyfloat> andrei__1: http://ceph.com/planet/radosgw-big-index/
[12:46] <andrei__1> thanks!
[12:46] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[12:47] <andrei__1> bloatyfloat, so did you have to recreate your buckets?
[12:47] <andrei__1> and migrate the data?
[12:48] <bloatyfloat> We only recreated the problematic one, yes, but it resolved the issue, and also improved performance when listing the bucket etc
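
(For reference: a hedged sketch of enabling index sharding for newly created buckets; the client section name depends on how the gateway instance is named, and the shard count is only an example.)

    [client.radosgw.gateway]
    rgw override bucket index max shards = 16
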
[12:48] * abeck (~textual@213.152.161.45) Quit (Quit: Textual IRC Client: www.textualapp.com)
[12:48] * povian (~povian@pa3ae73.tokynt01.ap.so-net.ne.jp) has joined #ceph
[12:49] <ram> Mika_c: Hi.
[12:49] <ram> $ ceph osd crush rule dump
[12:49] <ram> [ { "rule_id": 0, "rule_name": "replicated_ruleset", "ruleset": 0, "type": 1, "min_size": 1, "max_size": 10,
[12:49] <ram>     "steps": [ { "op": "take", "item": -1, "item_name": "default"},
[12:49] <ram>                { "op": "chooseleaf_firstn", "num": 0, "type": "host"},
[12:49] <ram>                { "op": "emit"}]}]
[12:51] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[12:52] <ram> Mika_c: I have given osd_pool_default_size = 4 in ceph.conf
[12:52] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[12:53] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[12:54] <Mika_c> ram, But you have just 2 osds. What does "ceph osd tree" show?
[12:54] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit ()
[12:54] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[12:55] * branto (~branto@213.175.37.12) has left #ceph
[12:56] * jole (~oftc-webi@x4e377ff3.dyn.telefonica.de) Quit (Quit: Page closed)
[12:56] * ram_ (~oftc-webi@static-202-65-140-146.pol.net.in) has joined #ceph
[12:56] <Mika_c> ram, The setting "osd_pool_default_size = 4" means you need at least 4 hosts, each with 1 osd.
[12:58] <Mika_c> ram, You may need to add 2 more hosts and osds. Or... reduce the replica size.
[12:58] * Mika_c (~Mika@122.146.93.152) Quit (Quit: Leaving)
[12:58] <ram_> Mika_c: Oh. Thank you very much.
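
(For reference: the other option Mika_c mentions is reducing the replica count to match the 2 available OSDs; a hedged sketch, with 'rbd' standing in for each affected pool.)

    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1
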
[12:59] * mhuang (~mhuang@58.100.85.155) Quit (Quit: This computer has gone to sleep)
[12:59] * povian_ (~povian@pa3ae73.tokynt01.ap.so-net.ne.jp) has joined #ceph
[13:00] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) has joined #ceph
[13:00] * ram (~oftc-webi@static-202-65-140-146.pol.net.in) Quit (Ping timeout: 480 seconds)
[13:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[13:01] * povian__ (~povian@211.189.163.250) has joined #ceph
[13:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[13:01] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[13:01] * povian__ (~povian@211.189.163.250) Quit ()
[13:07] * povian (~povian@pa3ae73.tokynt01.ap.so-net.ne.jp) Quit (Ping timeout: 480 seconds)
[13:07] * povian_ (~povian@pa3ae73.tokynt01.ap.so-net.ne.jp) Quit (Ping timeout: 480 seconds)
[13:08] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:08] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:09] * _s1gma (~FierceFor@4MJAAFTPM.tor-irc.dnsbl.oftc.net) Quit ()
[13:10] * AndChat|329121 (~AndChat32@122.53.162.158) Quit (Ping timeout: 480 seconds)
[13:13] * andrei__1 (~andrei@37.220.104.190) Quit (Ping timeout: 480 seconds)
[13:13] * rotbeard (~redbeard@aftr-109-90-233-106.unity-media.net) Quit (Ping timeout: 480 seconds)
[13:15] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Remote host closed the connection)
[13:16] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[13:17] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[13:17] * andrei__1 (~andrei@37.220.104.190) has joined #ceph
[13:18] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[13:18] * ram_ (~oftc-webi@static-202-65-140-146.pol.net.in) Quit (Quit: Page closed)
[13:18] * krish (~oftc-webi@static-202-65-140-146.pol.net.in) has joined #ceph
[13:19] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[13:20] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[13:22] * vanham (~vanham@12.199.84.146) has joined #ceph
[13:23] * bene (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) has joined #ceph
[13:24] * dgurtner_ (~dgurtner@194.230.155.137) Quit (Ping timeout: 480 seconds)
[13:24] * Hemanth (~hkumar_@121.244.87.118) has joined #ceph
[13:24] <krish> Hi. I want to configure rados gateway using http://docs.ceph.com/docs/master/install/install-ceph-gateway/.
[13:25] <krish> "$ceph-deploy install --rgw ceph-mon"
[13:25] * shyu (~shyu@218.241.172.114) Quit (Ping timeout: 480 seconds)
[13:26] <krish> It was giving an error like: ceph-deploy: error: unrecognized arguments: --rgw
[13:27] * abbi (~oftc-webi@static-202-65-140-151.pol.net.in) has joined #ceph
[13:30] <krish> abbi: Hi. are you getting the same issue while configuring radosgw?
[13:30] <abbi> while configuring federated gateway, facing issues with radosgw-agent -c region-data-sync.conf it
[13:31] * yatin (~yatin@161.163.44.8) has joined #ceph
[13:32] * yk (~yatin@161.163.44.8) has joined #ceph
[13:32] * yatin (~yatin@161.163.44.8) Quit (Read error: Connection reset by peer)
[13:33] <abbi> <yatin> seeing the below error
[13:33] <abbi> <yatin> ERROR:root:Could not retrieve region map from destination
[13:39] <s3an2> anyone changed straw_calc_version from 0 to 1 - trying to get an idea of how much recovery I may see.
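
(For reference: a hedged sketch of the change s3an2 asks about; flipping the tunable itself moves no data, but the recalculated straw weights on subsequent bucket changes can cause a small to moderate amount of movement.)

    ceph osd crush set-tunable straw_calc_version 1
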
[13:40] * Dinnerbone (~CoZmicShR@chulak.enn.lu) has joined #ceph
[13:41] <krish> Hi. I am configuring the rados gateway using http://docs.ceph.com/docs/master/install/install-ceph-gateway/. "$ceph-deploy install --rgw ceph-mon" gives: ceph-deploy: error: unrecognized arguments: --rgw. Can anyone please tell me how to resolve this issue?
[13:42] <vikhyat> krish: upgrade your ceph-deploy version
[13:44] <abbi> while configuring federated gateway, facing issues with radosgw-agent -c region-data-sync.conf it, ERROR:root:Could not retrieve region map from destination
[13:47] * yk (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[13:47] * denaitre (~oftc-webi@squid1-loi.cpub.univ-nantes.fr) has joined #ceph
[13:48] <abbi> Hi, while configuring federated gateway, facing issues with radosgw-agent -c region-data-sync.conf it, ERROR:root:Could not retrieve region map from destination
[13:48] <krish> vikhyat: Thank you.
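
(For reference: a hedged sketch of upgrading ceph-deploy, since the --rgw flag only exists in newer releases; pick whichever matches how it was installed.)

    sudo pip install --upgrade ceph-deploy
    # or, from the ceph.com apt repository:
    sudo apt-get update && sudo apt-get install --only-upgrade ceph-deploy
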
[13:48] * huangjun (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[13:52] * shylesh__ (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[13:55] * b0e (~aledermue@213.95.25.82) has joined #ceph
[13:56] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[13:57] * inf_b (~o_O@dot1x-170-217.wlan.uni-giessen.de) has joined #ceph
[13:57] * nass5 (~fred@dn-infra-12.lionnois.site.univ-lorraine.fr) has joined #ceph
[13:59] * inf_b (~o_O@dot1x-170-217.wlan.uni-giessen.de) Quit (Remote host closed the connection)
[13:59] * rotbeard (~redbeard@aftr-109-90-233-106.unity-media.net) has joined #ceph
[14:00] * garphy`aw is now known as garphy
[14:00] * Hemanth (~hkumar_@121.244.87.118) Quit (Ping timeout: 480 seconds)
[14:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[14:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[14:02] * branto (~branto@213.175.37.12) has joined #ceph
[14:02] * dgurtner (~dgurtner@178.197.232.251) has joined #ceph
[14:05] <IvanJobs_> hi cephers, I ran into a concept in ceph that's new to me: the FD cache. what does it mean?
[14:06] * bene (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[14:09] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:09] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[14:09] * Dinnerbone (~CoZmicShR@7V7AAFJQF.tor-irc.dnsbl.oftc.net) Quit ()
[14:10] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[14:10] * W|ldCraze (~Arcturus@06SAADB67.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:10] * branto (~branto@213.175.37.12) Quit (Ping timeout: 480 seconds)
[14:10] <IvanJobs_> does anyone know about this? ceph FD cache? ceph FileStore with SSD cache?
[14:11] <denaitre> hi guys, i found this mail from dachary last year, saying erasure coding is not possible with cephfs, is it still the case? http://article.gmane.org/gmane.comp.file-systems.ceph.user/19358
[14:12] <BranchPredictor> IvanJobs_: where did you see it?
[14:13] <IvanJobs_> BranchPredictor: for example: http://tracker.ceph.com/issues/6629
[14:13] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[14:13] * IvanJobs_ (~ivanjobs@103.50.11.146) Quit ()
[14:18] <flaf> Hi. I have an automatic cephfs mount at boot in fstab via ceph-fuse with Infernalis. All was ok. I have upgraded the client to Jewel and now the mount at boot no longer works. After boot, cephfs is not mounted, but if I just launch 'mount /mnt' (/mnt is the mount point of my cephfs), all is OK. Do you have an idea? I'm on Ubuntu Trusty. Here is my line in fstab: http://paste.alacon.org/41379
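
(For reference, since the paste link has since expired: a hedged reconstruction of a Jewel-era ceph-fuse fstab entry of that shape; the id, conf path, and mount point are examples.)

    id=admin,conf=/etc/ceph/ceph.conf  /mnt  fuse.ceph  defaults,_netdev  0 0
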
[14:20] * karnan (~karnan@121.244.87.117) has joined #ceph
[14:20] * mattbenjamin1 (~mbenjamin@12.118.3.106) has joined #ceph
[14:21] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[14:21] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:24] * rakeshgm (~rakesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[14:26] * andrei__1 (~andrei@37.220.104.190) Quit (Ping timeout: 480 seconds)
[14:27] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[14:28] <Be-El> flaf: trusty uses upstart, so there should be some information in /var/log/upstart/mountall.log
[14:29] * branto (~branto@nat-pool-brq-t.redhat.com) has left #ceph
[14:29] <Be-El> denaitre: cephfs cannot use ec pool directly, that's correct afaik. ec pool do not support partial writes, e.g. overwrite a part of a file
[14:29] * scuttle|afk is now known as scuttlemonkey
[14:30] <Be-El> denaitre: but you can use a cache tier pool in front of the ec pool
[14:31] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[14:31] * andrei__1 (~andrei@37.220.104.190) has joined #ceph
[14:33] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[14:36] <The_Ball> I'm getting a few of these on my OSDs: sdb1: rw=0, want=11710557328, limit=11710557327
[14:36] <The_Ball> Is this a known issue? is osd-prepare creating incorrect partition tables?
[14:37] * gauravbafna (~gauravbaf@49.32.0.124) Quit (Remote host closed the connection)
[14:37] <Lokta> Hi everyone ! quick question : When you have two osd on the same server (and other osds on other servers) crush prevents data from being only on this server
[14:38] <Lokta> is there a way to have the same behaviour but with iscsi mounts ?
[14:38] <The_Ball> Parted reports /dev/sdb as 11721045168s and the partition start as 10487808s end as 11721045134s
[14:39] <The_Ball> So the limit is the partition size, does the OSD do any sort of probing, reading past the end?
[14:39] <Lokta> for example if i have 2 osd with a local osd and a iscsi osd is there a way to make sure that if the san dies i wont have stale pg ?
[14:39] <flaf> Be-El: thx for the info. Indeed, I have 'ceph mount failed with (1) Operation not permitted' in mountall.log but I don't understand this error because the manual command 'mount /mnt' works perfectly.
[14:39] <Be-El> flaf: maybe the network is not available at that time
[14:39] * W|ldCraze (~Arcturus@06SAADB67.tor-irc.dnsbl.oftc.net) Quit ()
[14:40] * MatthewH12 (~Zeis@7V7AAFJTL.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:41] <Be-El> Lokta: ceph does not known about the underlying block device (except for filesystem specific code in osd filestore)
[14:41] <flaf> I have put _netdev in the mount options. And you think this error is a consequence of the network not being up yet?
[14:42] <Be-El> Lokta: if you adjust your crush map and crush rulesets to reflect the hardware setup, you should be able to survive a SAN failure
[14:42] <Be-El> flaf: it is just one possible explanation
[14:42] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:43] <flaf> I will test a manual mount without network (it's just a testing VM)...
[14:43] <Be-El> The_Ball: i've seen similar message on centos 7.2 and ceph hammer release. but i don't know whether it affected the operation
[14:44] * rotbeard (~redbeard@aftr-109-90-233-106.unity-media.net) Quit (Ping timeout: 480 seconds)
[14:44] <The_Ball> Be-El, are you also seeing "libceph: osd8 ip:port socket closed (con state OPEN)" on clients?
[14:44] <Be-El> The_Ball: for cephfs clients, yes
[14:45] <The_Ball> I'm only using RBD, but I'm seeing that message
[14:45] <Lokta> will look into it, thx !
[14:45] <Be-El> kernel rbd or librbd (userspace)?
[14:46] <Be-El> Lokta: there's a blog entry by sebastien han describing how to setup two root hierarchies to support ssd and hdd based storage on the same cluster
[14:46] <Be-El> Lokta: a similar setup (with its own root for the san-osds) should work in your case
[14:46] <The_Ball> Be-El, rbd
[14:46] * madkiss (~madkiss@2001:6f8:12c3:f00f:edb1:7084:2283:e848) Quit (Quit: Leaving.)
[14:46] <Be-El> The_Ball: ok, since libceph is the kernel library for ceph related functions
[14:47] <Be-El> The_Ball: i assume that osds close connections after some time of inactivity to save resources
[14:48] <Lokta> got it ! thank you :)
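
(For reference: a hedged sketch of the two-root CRUSH layout Be-El refers to; bucket, rule, and pool names are placeholders.)

    ceph osd crush add-bucket san root                   # separate root for the iscsi OSDs
    ceph osd crush move san-host1 root=san               # host bucket holding those OSDs
    ceph osd crush rule create-simple san-rule san host
    ceph osd pool set mypool crush_ruleset 1             # ruleset id from 'ceph osd crush rule dump'
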
[14:48] <denaitre> Be-El: thanks for the reply, i'm looking into it
[14:48] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[14:49] <Be-El> so, odes anyone around has experience with centos 7.2, mellanox connect-x3 card in ethernet mode and performance tuning? our osd hosts have way too high latency / way too low throughput.....
[14:49] <Be-El> s/odes/does/
[14:51] * johnavp1989 (~jpetrini@pool-72-94-170-170.phlapa.fios.verizon.net) has joined #ceph
[14:51] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[14:52] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) Quit (Ping timeout: 480 seconds)
[14:53] * krish (~oftc-webi@static-202-65-140-146.pol.net.in) Quit (Ping timeout: 480 seconds)
[14:54] * ram (~oftc-webi@static-202-65-140-146.pol.net.in) has joined #ceph
[14:56] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:58] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[15:00] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[15:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[15:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[15:02] * neurodrone_ (~neurodron@NYUFWA-GUESTS-01.NATPOOL.NYU.EDU) has joined #ceph
[15:03] * bara_ (~bara@213.175.37.12) has joined #ceph
[15:05] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:05] * scg (~zscg@valis.gnu.org) has joined #ceph
[15:09] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:09] * MatthewH12 (~Zeis@7V7AAFJTL.tor-irc.dnsbl.oftc.net) Quit ()
[15:11] * wes_dillingham (~wes_dilli@65.112.8.203) has joined #ceph
[15:11] * neurodrone_ (~neurodron@NYUFWA-GUESTS-01.NATPOOL.NYU.EDU) Quit (Quit: neurodrone_)
[15:12] * karnan (~karnan@121.244.87.117) has joined #ceph
[15:15] <NTTEC> just wondering, can you guys recommend a how-to for setting up a ceph cluster?
[15:19] * wes_dillingham (~wes_dilli@65.112.8.203) Quit (Quit: wes_dillingham)
[15:20] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[15:21] <flaf> Be-El: I have checked: with the network down, the manual mount just freezes. Imho, there is a little regression Infernalis->Jewel on Ubuntu Trusty.
[15:22] <Be-El> and the ubuntu xenial backport kernel for trusty definitely has a number of problems with cephfs
[15:23] <flaf> Oh, with cephfs I have given up on using the kernel client for now. ;)
[15:23] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[15:25] * dscastro (~dscastro@181.166.94.84) has joined #ceph
[15:25] <Be-El> the pagecache support for ceph-fuse is not working completely yet, so i have to use the kernel client
[15:26] <Be-El> it is also faster with respect to latency (no kernel <-> userspace context switch)
[15:26] <flaf> Yes it's logical.
[15:26] <dscastro> Hello, does anybody know if it is safe to disable sortbitwise on a jewel cluster?
[15:28] <dscastro> just trying to understand why my cluster went unhealthy with lots of unfound objects; there are no node crashes or failing disks, and restarting the osd daemons makes it worse
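
(For reference: the flag can be cleared at runtime, and there were Jewel-era reports of unfound objects appearing after OSD restarts with sortbitwise set; a hedged sketch of the usual test.)

    ceph osd unset sortbitwise
    ceph health detail         # see whether the unfound objects recover
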
[15:28] * bara_ (~bara@213.175.37.12) Quit (Quit: Bye guys!)
[15:29] * m0zes__ (~mozes@n117m02.cis.ksu.edu) has joined #ceph
[15:29] * oliveiradan10 (~doliveira@67.214.238.80) Quit (Remote host closed the connection)
[15:30] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[15:32] <Be-El> dscastro: do you use a cache tier just by chance?
[15:33] <dscastro> no
[15:33] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[15:33] <ram> Hi. I am configuring radosgw using the following link: http://docs.ceph.com/docs/master/install/install-ceph-gateway/ . When I ran "ceph-deploy install --rgw ceph-mon", it gave a DPKG error.
[15:33] <ram> [ceph][WARNIN] E: Sub-process /usr/bin/dpkg returned an error code (1) [ceph][ERROR ] RuntimeError: command returned non-zero exit status: 100 [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[15:34] <ram> Please tell me how to resolve this
[15:34] <ram> I tried different ways to resolve it, but it shows the same error
[15:35] <Be-El> ram: what's the exact error message from apt-get?
[15:35] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[15:36] <ram> Be-El: Hi.
[15:36] <ram> Errors were encountered while processing: ceph-common ceph-base ceph-osd ceph-mon ceph-mds ceph radosgw E: Sub-process /usr/bin/dpkg returned an error code (1)
[15:37] * rmart04 (~rmart04@support.memset.com) has joined #ceph
[15:37] <Be-El> ram: that just indicates that something went wrong, but not the reason why the package installation failed
[15:37] <Be-El> ram: you can try to install the packages manually on the host by running apt-get install as root
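As a hedged sketch of that manual retry (standard apt/dpkg recovery steps, not commands quoted from the log; run as root on the failing host):

    dpkg --configure -a                       # finish any half-configured packages
    apt-get -f install                        # repair broken dependencies
    apt-get install ceph-common ceph radosgw  # retry and capture the full error output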
[15:39] <ram> Be-El: I tried manually, but it gave the same error.
[15:39] <Be-El> ram: but it should have printed a more verbose error message
[15:40] * adept256 (~Swompie`@4MJAAFTX2.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:41] <ram> Be-El: it is a big one. Can I send the verbose message directly?
[15:41] * flisky (~Thunderbi@124.207.50.249) has joined #ceph
[15:41] <Be-El> ram: you can upload it to a pastebin
[15:45] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[15:45] <ram> Be-El : http://paste.openstack.org/show/506947/
[15:45] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[15:47] * kefu (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[15:48] * kefu (~kefu@114.92.122.74) has joined #ceph
[15:48] * jamesw_u3d (~textual@108-248-86-161.lightspeed.austtx.sbcglobal.net) has joined #ceph
[15:49] <ram> Be-El : http://paste.openstack.org/show/506952/.
[15:50] * ntpttr_laptop (~ntpttr@192.55.54.45) has joined #ceph
[15:55] * smerz (~ircircirc@37.74.194.90) has joined #ceph
[15:55] * flisky (~Thunderbi@124.207.50.249) Quit (Quit: flisky)
[15:56] * dgurtner (~dgurtner@178.197.232.251) Quit (Read error: Connection reset by peer)
[15:56] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[15:58] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[16:01] * ira (~ira@nat-pool-bos-t.redhat.com) has joined #ceph
[16:04] * dgurtner (~dgurtner@194.230.155.137) has joined #ceph
[16:09] * adept256 (~Swompie`@4MJAAFTX2.tor-irc.dnsbl.oftc.net) Quit ()
[16:10] * blip2 (~Bwana@snowfall.relay.coldhak.com) has joined #ceph
[16:11] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:12] * dscastro (~dscastro@181.166.94.84) Quit (Ping timeout: 480 seconds)
[16:13] * matej22211 (~matejz@141.255.254.208) Quit (Quit: matej22211)
[16:14] * danieagle (~Daniel@177.94.29.76) has joined #ceph
[16:18] * andrei__1 (~andrei@37.220.104.190) Quit (Quit: Ex-Chat)
[16:21] <Be-El> ram: the problem seems to be related to the ceph-common package. i would propose to remove that package and reinstall it manually on the host itself
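A hedged sketch of that proposal, assuming the package names from the error above:

    apt-get purge ceph-common    # remove the package that failed to configure
    apt-get install ceph-common  # reinstall it cleanly, then re-run ceph-deploy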
[16:26] * itamarl (~itamarl@194.90.7.244) Quit (Quit: itamarl)
[16:28] * yatin (~yatin@203.212.245.90) has joined #ceph
[16:29] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[16:30] * yatin (~yatin@203.212.245.90) Quit (Remote host closed the connection)
[16:31] * bl3d (~bl3d@62.116.219.97) Quit (Quit: Leaving)
[16:32] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[16:32] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[16:32] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[16:33] * itamarl (~itamarl@bzq-79-176-66-28.red.bezeqint.net) has joined #ceph
[16:35] * huangjun (~kvirc@117.151.50.204) has joined #ceph
[16:37] * jamesw_u3d (~textual@108-248-86-161.lightspeed.austtx.sbcglobal.net) Quit (Read error: Connection reset by peer)
[16:39] * joshd (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) has joined #ceph
[16:39] * blip2 (~Bwana@7V7AAFJYW.tor-irc.dnsbl.oftc.net) Quit ()
[16:40] * Aethis (~oracular@ns330209.ip-5-196-66.eu) has joined #ceph
[16:41] * jamesw_u3d (~textual@108-248-86-161.lightspeed.austtx.sbcglobal.net) has joined #ceph
[16:43] * itamarl (~itamarl@bzq-79-176-66-28.red.bezeqint.net) Quit (Quit: itamarl)
[16:45] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:45] * Lokta (~Lokta@carbon.coe.int) Quit (Quit: Leaving)
[16:51] * vata (~vata@207.96.182.162) has joined #ceph
[16:53] * matejz (~matejz@element.planetq.org) has joined #ceph
[16:57] * ntpttr_laptop (~ntpttr@192.55.54.45) Quit (Remote host closed the connection)
[16:58] * bene (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[16:58] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[16:58] * kutija (~kutija@89.216.27.139) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:59] * Kurt (~Adium@2001:628:1:5:7d9d:babe:b49:c61e) Quit (Quit: Leaving.)
[17:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[17:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[17:02] <Be-El> flaf: if you are using ceph-fuse, do you use the --default-permissions parameter?
[17:03] * yatin (~yatin@203.212.245.90) has joined #ceph
[17:04] <flaf> Be-El: no, I use only this fstab line http://paste.alacon.org/41379 which works fine with Infernalis and works fine in Jewel _but_ only with a manual mount, not at reboot during the automatic mountall.
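The paste's exact contents aren't preserved here; purely as a hypothetical illustration, a cephfs kernel-client fstab entry generally has this shape (the address, user name, and paths are invented):

    # hypothetical example only, not the paste above
    10.0.0.1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0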
[17:05] * matejz (~matejz@element.planetq.org) Quit (Quit: matejz)
[17:05] * flaf is reading the meaning of --default-permissions ...
[17:08] * gregmark (~Adium@68.87.42.115) has joined #ceph
[17:09] * Aethis (~oracular@7V7AAFJ0T.tor-irc.dnsbl.oftc.net) Quit ()
[17:10] * xolotl (~spate@dsl-olubrasgw1-54fb5b-165.dhcp.inet.fi) has joined #ceph
[17:11] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[17:12] * antongribok (~antongrib@pool-173-66-18-82.washdc.fios.verizon.net) has joined #ceph
[17:13] * wushudoin (~wushudoin@38.99.12.237) has joined #ceph
[17:15] * xarses (~xarses@64.124.158.100) has joined #ceph
[17:18] * antongri_ (~antongrib@216.207.42.140) Quit (Ping timeout: 480 seconds)
[17:21] * yatin (~yatin@203.212.245.90) Quit (Remote host closed the connection)
[17:24] <m0zes__> mlovell, flaf: thanks for looking at things with me yesterday. I've now sent something to the mailing list http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/30016
[17:25] <flaf> Ah m0zes__, so the problem is still present.
[17:25] <m0zes__> yes it is.
[17:26] <flaf> I will follow this thread with interest. Very curious to know the reason for this problem.
[17:27] * denaitre (~oftc-webi@squid1-loi.cpub.univ-nantes.fr) Quit (Ping timeout: 480 seconds)
[17:28] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[17:28] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving.)
[17:30] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[17:30] * redf (~red@80-108-89-163.cable.dynamic.surfer.at) Quit (Ping timeout: 480 seconds)
[17:31] <Be-El> m0zes__: did you try moving the content of the failing ssd osds to other (hdd-based) osds to allow the cluster to become usable again?
[17:31] * yatin (~yatin@203.212.245.90) has joined #ceph
[17:33] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:33] <m0zes__> Be-El: yes, I tried moving the entire pool to the spinning disks. the spinning disks started hitting the suicide timeout.
[17:35] <m0zes__> I tried re-weighting the failing ssds to 0, too
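A hedged sketch of what re-weighting an OSD to 0 looks like (the osd id is a placeholder):

    ceph osd crush reweight osd.12 0  # drain placement groups off the OSD
    ceph osd reweight 12 0            # alternative: temporary override weight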
[17:35] * `10` (~10@69.169.91.14) Quit (Ping timeout: 480 seconds)
[17:37] <Be-El> we have a nearly identical setup. and if an ssd fails for some reason, it will tear down half of the hdd osds, too
[17:37] <Be-El> did you check for a firmware update for the p3700?
[17:37] <m0zes__> the ssds are functioning as far as I can tell… but I'll check for an update
[17:38] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[17:38] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:39] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:39] * xolotl (~spate@06SAADCIV.tor-irc.dnsbl.oftc.net) Quit ()
[17:42] * Brochacho (~alberto@2601:243:504:6aa:2530:579:659d:1ef5) has joined #ceph
[17:44] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has left #ceph
[17:45] * rakeshgm (~rakesh@121.244.87.117) Quit (Remote host closed the connection)
[17:45] * rotbeard (~redbeard@aftr-109-90-233-106.unity-media.net) has joined #ceph
[17:47] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) has joined #ceph
[17:47] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[17:48] * huangjun (~kvirc@117.151.50.204) Quit (Ping timeout: 480 seconds)
[17:48] <ska> Is there a way to insert a UUID into the Calamari database? I don't see a table for that.
[17:49] <MentalRay> Does anyone have nodes running their OS on an LSI 3Ware 9750
[17:50] <MentalRay> and experiencing random node reboots?
[17:51] <Heebie> 3ware? Those are old and slow. (We stopped using them; comparing the performance of a 3ware vs. a "proper" LSI was like night and day, about 100x faster on the "proper" LSI with all other elements being the same.)
[17:52] <MentalRay> https://bugs.centos.org/view.php?id=10073
[17:52] <MentalRay> I think we experienced this issue on a POC
[17:52] <MentalRay> just curious
[17:52] <MentalRay> Heebie we used those for OS in RAID1
[17:52] <MentalRay> but thinking of removing all of them
[17:52] * ntpttr_laptop (~ntpttr@134.134.139.83) has joined #ceph
[17:52] <MentalRay> but just curious to see if anyone else has experienced this
[17:53] <Heebie> I haven't seen random reboots with 3ware cards, but I've seen them randomly "throw out" disks that were fine and things like that.
[17:53] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:54] * ntpttr_laptop (~ntpttr@134.134.139.83) Quit ()
[17:55] * `10` (~10@69.169.91.14) has joined #ceph
[17:56] * garphy is now known as garphy`aw
[17:57] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[17:58] * redf (~red@80-108-89-163.cable.dynamic.surfer.at) has joined #ceph
[18:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[18:01] * antongribok (~antongrib@pool-173-66-18-82.washdc.fios.verizon.net) Quit (Remote host closed the connection)
[18:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[18:01] * antongribok (~antongrib@pool-173-66-18-82.washdc.fios.verizon.net) has joined #ceph
[18:05] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[18:09] * antongribok (~antongrib@pool-173-66-18-82.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[18:10] * Skyrider (~Revo84@7V7AAFJ55.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:10] <Brochacho> Is there a way to get a pg dump from the `ceph daemon` socket?
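The question goes unanswered in the log; as a hedged pointer, the admin socket will list what it supports, and cluster-wide pg state is normally served via the monitors:

    ceph daemon osd.0 help  # list operations this daemon's admin socket supports
    ceph pg dump            # cluster-wide pg dump, answered by the mons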
[18:12] * deepthi (~deepthi@106.206.145.35) Quit (Quit: Leaving)
[18:15] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[18:20] * redf (~red@80-108-89-163.cable.dynamic.surfer.at) Quit (Ping timeout: 480 seconds)
[18:21] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Remote host closed the connection)
[18:22] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[18:22] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[18:23] * gauravbafna (~gauravbaf@122.172.242.199) has joined #ceph
[18:24] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[18:27] * gauravbafna (~gauravbaf@122.172.242.199) Quit (Remote host closed the connection)
[18:28] <scuttlemonkey> Ceph Developer Monthly starting in ~2mins -- http://wiki.ceph.com/Planning
[18:29] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) Quit (Ping timeout: 480 seconds)
[18:30] * kefu is now known as kefu|afk
[18:30] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[18:30] * ade (~abradshaw@nat-pool-str-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:33] * antongribok (~antongrib@204.148.17.66) has joined #ceph
[18:34] * antongribok (~antongrib@204.148.17.66) Quit ()
[18:35] * kefu|afk (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:36] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[18:37] * kefu (~kefu@183.193.182.2) has joined #ceph
[18:37] * gauravbafna (~gauravbaf@122.172.242.199) has joined #ceph
[18:38] <PoRNo-MoRoZ> i got my cluster on a dedicated network switch; if i change the IP of that switch and it reboots - what will happen to ceph ?
[18:38] <PoRNo-MoRoZ> can it handle being temporarily off the network ?
[18:38] <PoRNo-MoRoZ> or should i completely stop all infrastructure to reboot the switch ?
[18:39] * mnathani (~mnathani_@192-0-149-228.cpe.teksavvy.com) Quit (Ping timeout: 480 seconds)
[18:39] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) has joined #ceph
[18:39] * Skyrider (~Revo84@7V7AAFJ55.tor-irc.dnsbl.oftc.net) Quit ()
[18:40] * Doodlepieguy (~shishi@179.43.146.230) has joined #ceph
[18:40] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:42] * gauravbafna (~gauravbaf@122.172.242.199) Quit (Remote host closed the connection)
[18:42] * northrup (~northrup@75-146-11-137-Nashville.hfc.comcastbusiness.net) has joined #ceph
[18:43] * gauravbafna (~gauravbaf@122.172.242.199) has joined #ceph
[18:43] <northrup> has anyone ever heard of a successful deployment of Ceph on Microsoft Azure?
[18:43] <northrup> ... one that is actually performant?
[18:43] * DanFoster (~Daniel@2a00:1ee0:3:1337:44a6:c3d5:84e2:371c) Quit (Quit: Leaving)
[18:43] * pabluk is now known as pabluk_
[18:44] <monsted> northrup: that sounds expensive
[18:44] <northrup> Yeah, well - it's damn sure not performant
[18:44] <northrup> :(
[18:45] <northrup> 342 Read / 115 Write IOPS at the moment
[18:45] <northrup> and that's after kernel tuning
[18:45] <northrup> for TCP
[18:45] <monsted> PoRNo-MoRoZ: does the switch really need to reboot to change IP? that's insane.
[18:45] * Miouge (~Miouge@188.189.76.220) Quit (Quit: Miouge)
[18:46] <northrup> just looking for anyone who's done Ceph in the cloud and how performant it is
[18:46] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[18:47] * Miouge (~Miouge@188.189.76.220) has joined #ceph
[18:49] <PoRNo-MoRoZ> monsted dunno actually
[18:49] <PoRNo-MoRoZ> don't want to try ))
[18:51] * dgurtner_ (~dgurtner@178.197.225.108) has joined #ceph
[18:52] * rmart04 (~rmart04@support.memset.com) Quit (Quit: rmart04)
[18:52] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:52] * dgurtner (~dgurtner@194.230.155.137) Quit (Ping timeout: 480 seconds)
[18:56] * gauravbafna (~gauravbaf@122.172.242.199) Quit (Remote host closed the connection)
[18:56] * gauravbafna (~gauravbaf@122.172.242.199) has joined #ceph
[19:00] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[19:02] <PoRNo-MoRoZ> monsted anyway
[19:02] <PoRNo-MoRoZ> can ceph handle 30 secs without a network ?
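No direct answer appears here; a common hedged approach to a short, planned network outage is to tell the cluster not to react to it:

    ceph osd set noout    # keep OSDs from being marked out during the blip
    # ... reboot the switch; OSDs may flap but data stays in place ...
    ceph osd unset noout  # restore normal failure handling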
[19:04] * gauravbafna (~gauravbaf@122.172.242.199) Quit (Ping timeout: 480 seconds)
[19:05] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[19:05] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) has joined #ceph
[19:06] * johnavp1989 (~jpetrini@pool-72-94-170-170.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[19:08] * dvanders_ (~dvanders@dvanders-pro.cern.ch) has joined #ceph
[19:09] * ngoswami (~ngoswami@1.39.14.239) has joined #ceph
[19:09] * ira (~ira@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[19:09] * Doodlepieguy (~shishi@7V7AAFJ7K.tor-irc.dnsbl.oftc.net) Quit ()
[19:10] * Morde (~Snowman@7V7AAFJ9A.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:10] * rotbeard (~redbeard@aftr-109-90-233-106.unity-media.net) Quit (Quit: Leaving)
[19:10] * ngoswami (~ngoswami@1.39.14.239) Quit ()
[19:10] * irq0 (~seri@amy.irq0.org) has joined #ceph
[19:11] * ira (~ira@nat-pool-bos-t.redhat.com) has joined #ceph
[19:11] * hybrid512 (~walid@195.200.189.206) Quit (Quit: Leaving.)
[19:12] * dvanders (~dvanders@dvanders-pro.cern.ch) Quit (Ping timeout: 480 seconds)
[19:14] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) Quit (Ping timeout: 480 seconds)
[19:14] <ram> Be-El: Thank you
[19:14] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[19:15] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) has joined #ceph
[19:17] * wgao (~wgao@106.120.101.38) Quit (Read error: Connection timed out)
[19:18] * chardan (~chardan@173.240.241.94) Quit (Ping timeout: 480 seconds)
[19:18] * wgao (~wgao@106.120.101.38) has joined #ceph
[19:19] * owlbot (~supybot@pct-empresas-50.uc3m.es) Quit (Remote host closed the connection)
[19:23] * owlbot (~supybot@pct-empresas-50.uc3m.es) has joined #ceph
[19:26] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) Quit (Ping timeout: 480 seconds)
[19:27] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) has joined #ceph
[19:32] * rakeshgm (~rakesh@106.51.26.213) has joined #ceph
[19:33] * yatin (~yatin@203.212.245.90) Quit (Remote host closed the connection)
[19:35] * linjan_ (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[19:36] * dgurtner_ (~dgurtner@178.197.225.108) Quit (Read error: Connection reset by peer)
[19:38] * ngoswami (~ngoswami@1.39.14.239) has joined #ceph
[19:38] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:39] * Morde (~Snowman@7V7AAFJ9A.tor-irc.dnsbl.oftc.net) Quit ()
[19:40] * Eric1 (~w0lfeh@7V7AAFKAZ.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:40] * mnathani (~mnathani_@192-0-149-228.cpe.teksavvy.com) has joined #ceph
[19:42] * ngoswami (~ngoswami@1.39.14.239) Quit ()
[19:43] * gauravbafna (~gauravbaf@122.172.242.199) has joined #ceph
[19:44] * ira (~ira@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:46] * penguinRaider (~KiKo@14.139.82.6) has joined #ceph
[19:49] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) Quit (Ping timeout: 480 seconds)
[19:51] * green (~oftc-webi@sky-78-19-113-132.bas512.cwt.btireland.net) has joined #ceph
[19:51] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[19:52] * gauravbafna (~gauravbaf@122.172.242.199) Quit (Ping timeout: 480 seconds)
[19:52] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:52] * green (~oftc-webi@sky-78-19-113-132.bas512.cwt.btireland.net) Quit ()
[19:52] * jsweeney (~oftc-webi@sky-78-19-113-132.bas512.cwt.btireland.net) has joined #ceph
[19:56] <jsweeney> We had an OpenStack Cloud with Cinder running on Ceph as a test environment. A not-for-profit organisation started using it. We did not know they were using it for live data. Last weekend we shut down the Cloud. Today they contacted us to say that they would really like to get the data from a volume.
[19:59] <jsweeney> We have Ceph/Cinder still running but the Nova servers have been moved. We can locate that volume in the Ceph cluster and can cd to the OSD and the folder where it is mounted. We can see lots of files there, each about 4MB, and they do not have an extension.
[19:59] <PoRNo-MoRoZ> that's objects
[20:00] <jsweeney> Is there a way we can download the data from that volume and recover a Microsoft Access .mdb file from it?
[20:00] <PoRNo-MoRoZ> placement groups are made from objects
[20:00] <PoRNo-MoRoZ> placement groups are spread across all osds
[20:00] <PoRNo-MoRoZ> can you get access to disk images, stored in ceph ?
[20:00] <PoRNo-MoRoZ> ceph running ?
[20:00] <jsweeney> it is spread across three osds
[20:01] <jsweeney> yes it is, and I can list the disk image which is under the pool Volumes
[20:01] <Walex> jsweeney: just copy the disk image somewhere else.
[20:02] <jsweeney> I can just list the disk image using ls
[20:02] <Walex> jsweeney: or mount it using RBD
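A hedged illustration of the RBD route (pool and image names invented; assumes the rbd kernel module is available and the image carries a partition table):

    rbd map volumes/volume-1234           # exposes the image as a block device, e.g. /dev/rbd0
    mount -o ro /dev/rbd0p1 /mnt/recover  # mount its first partition read-only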
[20:02] <jsweeney> what would be the actual location for the image
[20:02] <Walex> jsweeney: I am not sure what you mean by that.
[20:03] <PoRNo-MoRoZ> across all osds ))
[20:03] <PoRNo-MoRoZ> the image is not stored as a 'file'
[20:03] <PoRNo-MoRoZ> it's a bunch of objects, 4mb each
[20:04] <PoRNo-MoRoZ> jsweeney
[20:04] <PoRNo-MoRoZ> rbd ls
[20:04] <jsweeney> ok when I run rados ls -p volumes I can see my volume
[20:04] <Walex> jsweeney: you still have Ceph and Cinder still running, so what's the problem?
[20:05] <jsweeney> we do not have compute running and we cannot create another instance to attach the volume to
[20:05] <Walex> jsweeney: why is that relevant?
[20:05] <jsweeney> when I run rados ls -p volumes I can see my volume
[20:06] <Walex> jsweeney: you can create a new VM with 'libvirt' if you want. Or you can just use Ceph tools to copy the image somewhere else.
[20:06] <PoRNo-MoRoZ> u need to gain access to ceph storage via rgw or cephfs i think
[20:06] <PoRNo-MoRoZ> and move your data image
[20:06] <PoRNo-MoRoZ> to another place
[20:07] <PoRNo-MoRoZ> jsweeney
[20:07] <PoRNo-MoRoZ> rbd export
[20:07] <PoRNo-MoRoZ> somekinda
[20:07] <PoRNo-MoRoZ> rbd export pool/vm-image /tmp/vm-image.img
[20:08] <Walex> jsweeney: then you can mount the image using the loop device or one of very many tools that allow dealing with QEMU disk images.
[20:08] <Walex> jsweeney: again, what's the problem with doing any of that?
[20:09] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[20:09] <jsweeney> coz I can list content under the pool and it does not tell me if it's an image. how can I list the images under any pool?
[20:09] <PoRNo-MoRoZ> rbd ls pool
[20:09] * Eric1 (~w0lfeh@7V7AAFKAZ.tor-irc.dnsbl.oftc.net) Quit ()
[20:10] <jsweeney> that gives me the list of pools I have, for example volume
[20:10] * Vale (~nupanick@185.36.100.145) has joined #ceph
[20:10] <PoRNo-MoRoZ> nope
[20:11] <PoRNo-MoRoZ> what's your pool name ?
[20:11] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[20:11] <Walex> jsweeney: what PoRNo-MoRoZ means is 'rbd ls volume'
[20:11] <Walex> jsweeney: in his example "pool" is just a generic name
[20:11] * musca (musca@tyrael.eu) has left #ceph
[20:12] <PoRNo-MoRoZ> ceph osd lspools
[20:12] <PoRNo-MoRoZ> :D
[20:12] <PoRNo-MoRoZ> Walex thanks )
[20:12] <PoRNo-MoRoZ> jsweeney list your pools, find pool where your image belongs to
[20:13] <PoRNo-MoRoZ> then look for that image
[20:13] <PoRNo-MoRoZ> rbd ls POOLNAME
[20:13] <PoRNo-MoRoZ> then export it
[20:13] <PoRNo-MoRoZ> rbd export POOL/IMAGE /PATH/to/export.img
[20:15] <jsweeney> thanks I guess I can try this and see how it works
[20:15] <PoRNo-MoRoZ> )
[20:15] <PoRNo-MoRoZ> next tips: fdisk -l /path/to/export.img
[20:16] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[20:16] <PoRNo-MoRoZ> and mount using loop with offsets
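A hedged sketch of that loop-mount step; the offset is hypothetical and comes from the partition's start sector (from fdisk -l) times the sector size:

    fdisk -l /tmp/vm-image.img                                            # note the start sector, e.g. 2048
    mount -o loop,ro,offset=$((2048*512)) /tmp/vm-image.img /mnt/recover  # mount the partition read-only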
[20:16] <PoRNo-MoRoZ> alright gonna go home
[20:16] <jsweeney> sure thanks
[20:17] * matejz (~matejz@element.planetq.org) has joined #ceph
[20:19] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[20:19] <jsweeney> Hi
[20:19] <jsweeney> Tried to export the image and I'm getting an error: fault
[20:19] <jsweeney> wrong node
[20:20] <jsweeney> I am running this command from the ceph monitor
[20:20] <jsweeney> do i need to run it from the OSD node
[20:20] <PoRNo-MoRoZ> i think it should work from any ceph node
[20:22] <jsweeney> sorry, but it is not working
[20:22] * Miouge (~Miouge@188.189.76.220) Quit (Quit: Miouge)
[20:23] * cathode (~cathode@50.232.215.114) has joined #ceph
[20:24] * kefu (~kefu@183.193.182.2) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:26] * jluis (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[20:26] * ChanServ sets mode +o jluis
[20:28] * gregmark (~Adium@68.87.42.115) has joined #ceph
[20:28] * ngoswami (~ngoswami@1.39.99.181) has joined #ceph
[20:29] * ngoswami (~ngoswami@1.39.99.181) Quit ()
[20:29] * ngoswami (~ngoswami@1.39.99.181) has joined #ceph
[20:29] * ngoswami (~ngoswami@1.39.99.181) Quit ()
[20:32] * joao (~joao@8.184.114.89.rev.vodafone.pt) Quit (Ping timeout: 480 seconds)
[20:35] * shaunm (~shaunm@74.83.215.100) Quit (Ping timeout: 480 seconds)
[20:35] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[20:36] * antongribok (~antongrib@216.207.42.140) has joined #ceph
[20:36] * ngoswami (~ngoswami@1.39.99.181) has joined #ceph
[20:38] * ngoswami (~ngoswami@1.39.99.181) Quit ()
[20:39] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[20:39] * ngoswami (~ngoswami@1.39.99.181) has joined #ceph
[20:40] * Vale (~nupanick@4MJAAFUCQ.tor-irc.dnsbl.oftc.net) Quit ()
[20:40] * ivancich (~ivancich@12.118.3.106) Quit (Quit: ivancich)
[20:42] * ngoswami (~ngoswami@1.39.99.181) Quit ()
[20:50] * ngoswami (~ngoswami@1.39.99.181) has joined #ceph
[20:58] * garphy`aw is now known as garphy
[20:58] * overclk (~quassel@117.202.96.214) Quit (Remote host closed the connection)
[21:01] * linjan_ (~linjan@176.195.175.108) has joined #ceph
[21:01] <antongribok> Apologies for slightly off topic post... In all my years of running Ceph and reading Hacker News (not at the same time) I've never seen so many mentions of Ceph in the comments about another storage solution: https://news.ycombinator.com/item?id=11816122
[21:04] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[21:05] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[21:09] * ngoswami (~ngoswami@1.39.99.181) Quit (Quit: This computer has gone to sleep)
[21:13] * ngoswami (~ngoswami@1.39.99.181) has joined #ceph
[21:13] <cathode> i really don't understand what the point of Torus is
[21:13] <cathode> it's like a "Me too!" project..
[21:14] <antongribok> +1
[21:14] * jacoo (~TehZomB@tor-exit.bynumlaw.net) has joined #ceph
[21:18] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[21:19] * ngoswami (~ngoswami@1.39.99.181) Quit (Quit: This computer has gone to sleep)
[21:21] * jamesw_u3d (~textual@108-248-86-161.lightspeed.austtx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[21:21] <TheSov> antongribok, software like ceph lets people get past EMC, DELL, HP, PURE, etc. storage designed for large and small applications that replaces a commercial SAN is crucial both to business startups and to large-scale businesses being held hostage by vendors
[21:21] * ngoswami (~ngoswami@1.39.99.181) has joined #ceph
[21:22] <TheSov> when people find a piece of software that is reliable and replaces millions of dollars of CapEx, you've got a winner
[21:25] * ngoswami (~ngoswami@1.39.99.181) Quit ()
[21:30] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Ping timeout: 480 seconds)
[21:31] * ngoswami (~ngoswami@1.39.99.181) has joined #ceph
[21:34] <cathode> cephfs needs a windows client :)
[21:35] * ngoswami (~ngoswami@1.39.99.181) Quit ()
[21:37] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[21:39] * rdias (~rdias@2001:8a0:749a:d01:e4e1:2aee:7292:662f) has joined #ceph
[21:44] * jacoo (~TehZomB@4MJAAFUFQ.tor-irc.dnsbl.oftc.net) Quit ()
[21:44] * jakekosberg (~galaxyAbs@chulak.enn.lu) has joined #ceph
[21:45] * BrianA (~BrianA@192.55.3.4) has joined #ceph
[21:48] * Wahmed (~wahmed@206.174.203.195) Quit (Quit: Nettalk6 - www.ntalk.de)
[21:52] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[21:54] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[21:55] * ngoswami (~ngoswami@1.39.99.181) has joined #ceph
[21:55] * ngoswami (~ngoswami@1.39.99.181) Quit ()
[21:59] * penguinRaider (~KiKo@14.139.82.6) Quit (Ping timeout: 480 seconds)
[22:03] * dmanchado (~dmanchad@nat-pool-bos-t.redhat.com) Quit (Quit: ZNC 1.6.2 - http://znc.in)
[22:05] * dmanchad (~dmanchad@nat-pool-bos-t.redhat.com) has joined #ceph
[22:09] * penguinRaider (~KiKo@146.185.31.226) has joined #ceph
[22:14] * jakekosberg (~galaxyAbs@4MJAAFUG4.tor-irc.dnsbl.oftc.net) Quit ()
[22:14] * poller (~brannmar@159.148.186.194) has joined #ceph
[22:25] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[22:26] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[22:31] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[22:32] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[22:32] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[22:33] <T1> cathode: there is - look for ceph-dokan
[22:43] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[22:44] * poller (~brannmar@7V7AAFKI1.tor-irc.dnsbl.oftc.net) Quit ()
[22:44] * Dragonshadow1 (~Jyron@192.42.115.101) has joined #ceph
[22:45] * BrianA (~BrianA@192.55.3.4) Quit (Quit: Leaving.)
[22:50] * vanham (~vanham@12.199.84.146) Quit (Ping timeout: 480 seconds)
[22:51] * BrianA (~BrianA@nrm-1c3-ag5500-02.tco.seagate.com) has joined #ceph
[22:55] * Wahmed (~wahmed@s75-158-44-99.ab.hsia.telus.net) has joined #ceph
[23:01] * rendar (~I@host148-178-dynamic.7-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[23:02] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:02] * madkiss (~madkiss@2001:6f8:12c3:f00f:ed21:ea62:1850:5557) has joined #ceph
[23:11] * scg (~zscg@valis.gnu.org) Quit (Ping timeout: 480 seconds)
[23:14] * Dragonshadow1 (~Jyron@06SAADC1E.tor-irc.dnsbl.oftc.net) Quit ()
[23:19] * georgem (~Adium@24.114.75.206) has joined #ceph
[23:20] * Brochacho (~alberto@2601:243:504:6aa:2530:579:659d:1ef5) Quit (Quit: Brochacho)
[23:20] * Brochacho (~alberto@c-73-45-127-198.hsd1.il.comcast.net) has joined #ceph
[23:22] * georgem (~Adium@24.114.75.206) Quit ()
[23:22] * georgem (~Adium@206.108.127.16) has joined #ceph
[23:25] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Remote host closed the connection)
[23:26] * rendar (~I@host148-178-dynamic.7-87-r.retail.telecomitalia.it) has joined #ceph
[23:29] * Brochacho (~alberto@c-73-45-127-198.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[23:36] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[23:44] * chrisinajar (~drdanick@0.tor.exit.babylon.network) has joined #ceph
[23:51] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[23:52] * danieagle (~Daniel@177.94.29.76) Quit (Quit: Thanks for everything! :-) See you later! :-))
[23:52] * khyron (~khyron@187.207.11.87) has joined #ceph
[23:57] * m0zes__ (~mozes@n117m02.cis.ksu.edu) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.