#ceph IRC Log

Index

IRC Log for 2016-06-03

Timestamps are in GMT/BST.

[0:02] <bla> oh ... forgot to 'ceph-deploy the mds'
[0:02] <bla> shame on me
[0:11] * linjan__ (~linjan@176.195.152.175) has joined #ceph
[0:13] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[0:13] * ben1 (ben@pearl.meh.net.nz) has joined #ceph
[0:14] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[0:14] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) has joined #ceph
[0:16] * chrisinajar (~TheDoudou@7V7AAFMJU.tor-irc.dnsbl.oftc.net) Quit ()
[0:16] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[0:16] * Oddtwang (~aleksag@89.163.135.98) has joined #ceph
[0:18] * badone (~badone@66.187.239.16) Quit (Quit: k?thxbyebyenow)
[0:18] * linjan_ (~linjan@176.195.175.108) Quit (Ping timeout: 480 seconds)
[0:29] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[0:32] * scg (~zscg@c-50-169-194-217.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[0:41] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[0:46] * Oddtwang (~aleksag@06SAADE2S.tor-irc.dnsbl.oftc.net) Quit ()
[0:46] * aleksag (~ahmeni@atlantic850.dedicatedpanel.com) has joined #ceph
[0:53] * rendar (~I@host11-179-dynamic.27-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:54] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) has joined #ceph
[0:58] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) Quit ()
[1:13] * Brochacho (~alberto@2601:243:504:6aa:871:4052:ae6b:5f1c) Quit (Quit: Brochacho)
[1:16] * aleksag (~ahmeni@7V7AAFMMJ.tor-irc.dnsbl.oftc.net) Quit ()
[1:16] * djidis__ (~clusterfu@daskapital.tor-exit.network) has joined #ceph
[1:17] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:18] * vata (~vata@cable-192.222.249.207.electronicbox.net) has joined #ceph
[1:21] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[1:23] * oms101 (~oms101@p20030057EA0F9200C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:24] * allaok (~allaok@ARennes-658-1-231-16.w2-13.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[1:31] * oms101 (~oms101@p20030057EA1CED00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:46] * djidis__ (~clusterfu@7V7AAFMNX.tor-irc.dnsbl.oftc.net) Quit ()
[1:46] * Bj_o_rn (~zc00gii@argenla.tor-exit.network) has joined #ceph
[1:49] * wushudoin (~wushudoin@2601:646:9501:d2b2:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[1:50] * andreww (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[1:59] * andreww (~xarses@73.93.154.223) has joined #ceph
[2:00] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) has joined #ceph
[2:00] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[2:03] * LeaChim (~LeaChim@host86-168-126-119.range86-168.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:08] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) Quit (Ping timeout: 480 seconds)
[2:11] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[2:13] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[2:14] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[2:16] * Bj_o_rn (~zc00gii@4MJAAFWFC.tor-irc.dnsbl.oftc.net) Quit ()
[2:16] * onyb (~ani07nov@112.133.232.23) has joined #ceph
[2:21] * andreww (~xarses@73.93.154.223) Quit (Ping timeout: 480 seconds)
[2:26] * vbellur (~vijay@71.234.224.255) has joined #ceph
[2:29] * ceph_chengpeng (~ceph_chen@218.83.112.81) Quit (Quit: Leaving)
[2:30] * andreww (~xarses@73.93.154.227) has joined #ceph
[2:32] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[2:34] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Remote host closed the connection)
[2:34] * dgurtner (~dgurtner@178.197.239.239) Quit (Ping timeout: 480 seconds)
[2:37] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) has joined #ceph
[2:45] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:46] * capitalthree (~tokie@freedom.ip-eend.nl) has joined #ceph
[2:51] * andreww (~xarses@73.93.154.227) Quit (Ping timeout: 480 seconds)
[3:00] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[3:02] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:06] * Vacuum_ (~Vacuum@i59F79B63.versanet.de) has joined #ceph
[3:10] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[3:11] <ronrib> just about to fire up a new cluster, should I use ubuntu 14.04 or centos 7 as the base?
[3:12] * Vacuum__ (~Vacuum@88.130.208.86) Quit (Ping timeout: 480 seconds)
[3:16] * capitalthree (~tokie@06SAADFAQ.tor-irc.dnsbl.oftc.net) Quit ()
[3:16] * CydeWeys (~Moriarty@193.90.12.89) has joined #ceph
[3:29] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:39] * yanzheng (~zhyan@125.70.23.87) has joined #ceph
[3:40] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[3:41] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) Quit (Quit: m0zes__)
[3:46] * CydeWeys (~Moriarty@4MJAAFWI1.tor-irc.dnsbl.oftc.net) Quit ()
[3:46] * Behedwin (~drupal@4MJAAFWJ9.tor-irc.dnsbl.oftc.net) has joined #ceph
[3:49] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) has joined #ceph
[3:53] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[3:58] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[4:01] * Mika_c (~Mika@122.146.93.152) has joined #ceph
[4:05] * onyb (~ani07nov@112.133.232.23) Quit (Ping timeout: 480 seconds)
[4:11] * kefu (~kefu@114.92.122.74) has joined #ceph
[4:13] * scuttlemonkey is now known as scuttle|afk
[4:13] * scuttle|afk is now known as scuttlemonkey
[4:13] * ivancich (~ivancich@12.118.3.106) Quit (Read error: Connection reset by peer)
[4:15] * ivancich (~ivancich@12.118.3.106) has joined #ceph
[4:15] * onyb (~ani07nov@112.133.232.23) has joined #ceph
[4:16] * Behedwin (~drupal@4MJAAFWJ9.tor-irc.dnsbl.oftc.net) Quit ()
[4:16] * Gecko1986 (~Shesh@193.90.12.89) has joined #ceph
[4:25] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has left #ceph
[4:25] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[4:25] * onyb (~ani07nov@112.133.232.23) Quit (Ping timeout: 480 seconds)
[4:27] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[4:28] <IvanJobs> hi guys, I'm currently looking into jemalloc. I found that my Hammer-version Ceph uses tcmalloc; how can I change it to jemalloc? My Ceph is running on Ubuntu 14.04.
[4:29] <IvanJobs> Any help would be appreciated.
[4:30] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:30] <motk> do you really need to?
[4:30] * neurodrone_ (~neurodron@static-108-29-37-177.nycmny.fios.verizon.net) has joined #ceph
[4:32] <motk> LD_PRELOAD if you do
[4:33] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:33] * neurodrone (~neurodron@158.106.193.162) Quit (Ping timeout: 480 seconds)
[4:33] * neurodrone_ is now known as neurodrone
[4:34] * onyb (~ani07nov@112.133.232.23) has joined #ceph
[4:35] * NTTEC (~nttec@119.93.91.136) has joined #ceph
[4:40] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[4:41] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Quit: Leaving)
[4:42] * onyb (~ani07nov@112.133.232.23) Quit (Ping timeout: 480 seconds)
[4:43] * flisky (~Thunderbi@36.110.40.28) has joined #ceph
[4:45] <lurbs> LD_PRELOAD doesn't seem to be sufficient for me.
[4:46] <lurbs> https://paste.nothing.net.nz/7597f6#85lUwjavDOFSZ3077doudg== <-- Former is without LD_PRELOAD, the latter with.
[4:46] * Gecko1986 (~Shesh@06SAADFEL.tor-irc.dnsbl.oftc.net) Quit ()
[4:46] * Mattress (~Plesioth@7V7AAFMYV.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:47] <lurbs> But 'perf top' still shows libtcmalloc.so being used, and not jemalloc.
[4:47] <lurbs> I did see, but haven't tried: https://www.spinics.net/lists/ceph-users/msg28151.html
[4:47] <lurbs> The above is with 16.04 LTS and 10.2.1 BTW.
[4:48] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) has joined #ceph
[4:48] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[4:49] <rkeene> Compiling with --with-jemalloc seems to work for me
[4:51] * onyb (~ani07nov@112.133.232.23) has joined #ceph
[4:51] <lurbs> Has anyone here done like-for-like benchmarks on their cluster(s), tcmalloc vs jemalloc?
[4:52] <IvanJobs> yep, I really need to. motk
[4:53] <IvanJobs> thx rkeene, I will use LD_PRELOAD instead.
[4:54] <IvanJobs> lurbs, I'm currently going to do this with cosbench and "rados bench"
[4:54] <lurbs> I'd be curious to see if you can get it going with just an LD_PRELOAD and not a recompile.
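The non-recompile route being discussed can be sketched as follows. This is a hedged sketch, assuming Ubuntu 16.04 package paths, the `libjemalloc1` package, and systemd-managed OSDs (library path and unit name may differ on your system); setting `LD_PRELOAD` in the daemon's own environment matters because exporting it in an interactive shell does not reach a daemon restarted by systemd. The final `grep` checks what the OSD actually mapped, which is the discrepancy lurbs observed with `perf top`:

```shell
# Assumed paths/units (Ubuntu 16.04, libjemalloc1); verify on your system.
sudo apt-get install -y libjemalloc1
sudo mkdir -p /etc/systemd/system/ceph-osd@.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ceph-osd@.service.d/jemalloc.conf
[Service]
Environment=LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
EOF
sudo systemctl daemon-reload
sudo systemctl restart ceph-osd@0
# Check which allocator the running OSD actually mapped:
grep -o 'jemalloc\|tcmalloc' /proc/$(pgrep -o ceph-osd)/maps | sort -u
```

If the last command still prints `tcmalloc`, the preload did not take effect for the daemon and a rebuild with `--with-jemalloc` (as rkeene suggests below) may be the only reliable option.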
[4:56] * dvanders (~dvanders@pb-d-128-141-3-250.cern.ch) has joined #ceph
[4:57] * dvanders_ (~dvanders@2001:1458:202:225::101:124a) Quit (Read error: Connection reset by peer)
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[5:06] <IvanJobs> Ok, I will try LD_PRELOAD, feedback later.
[5:07] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[5:10] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[5:14] * yanzheng1 (~zhyan@125.70.23.87) has joined #ceph
[5:15] * yanzheng (~zhyan@125.70.23.87) Quit (Ping timeout: 480 seconds)
[5:16] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[5:16] * Mattress (~Plesioth@7V7AAFMYV.tor-irc.dnsbl.oftc.net) Quit ()
[5:16] * loft (~EdGruberm@46.183.218.199) has joined #ceph
[5:22] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[5:29] * shaunm (~shaunm@74.83.215.100) Quit (Ping timeout: 480 seconds)
[5:30] * natarej (~natarej@101.188.54.14) has joined #ceph
[5:30] * yatin (~yatin@161.163.44.8) has joined #ceph
[5:33] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[5:35] * IvanJobs_ (~ivanjobs@103.50.11.146) has joined #ceph
[5:35] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Read error: Connection reset by peer)
[5:35] * natarej__ (~natarej@101.188.54.14) Quit (Ping timeout: 480 seconds)
[5:37] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[5:40] * Vacuum__ (~Vacuum@i59F79C95.versanet.de) has joined #ceph
[5:46] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) Quit (Quit: Leaving.)
[5:46] * loft (~EdGruberm@4MJAAFWNP.tor-irc.dnsbl.oftc.net) Quit ()
[5:46] * Scymex (~Guest1390@7V7AAFM02.tor-irc.dnsbl.oftc.net) has joined #ceph
[5:46] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[5:47] * Vacuum_ (~Vacuum@i59F79B63.versanet.de) Quit (Ping timeout: 480 seconds)
[5:49] * shyu (~shyu@218.241.172.114) has joined #ceph
[5:53] * LongyanG (~long@15255.s.time4vps.eu) has joined #ceph
[5:55] * Long_yanG (~long@15255.s.time4vps.eu) Quit (Ping timeout: 480 seconds)
[5:56] * onyb (~ani07nov@112.133.232.23) Quit (Ping timeout: 480 seconds)
[5:58] * Racpatel (~Racpatel@2601:87:3:3601::4edb) Quit (Ping timeout: 480 seconds)
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[6:06] * onyb (~ani07nov@112.133.232.12) has joined #ceph
[6:15] * sage (~quassel@2607:f298:6050:709d:d1c8:d5cf:6390:78a6) Quit (Remote host closed the connection)
[6:16] * sage (~quassel@2607:f298:6050:709d:a405:f2e2:76b2:e42d) has joined #ceph
[6:16] * ChanServ sets mode +o sage
[6:16] * Scymex (~Guest1390@7V7AAFM02.tor-irc.dnsbl.oftc.net) Quit ()
[6:17] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) has joined #ceph
[6:22] * deepthi (~deepthi@122.172.47.100) has joined #ceph
[6:23] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:29] * abbi (~oftc-webi@static-202-65-140-151.pol.net.in) Quit (Ping timeout: 480 seconds)
[6:37] * shyu (~shyu@218.241.172.114) Quit (Ping timeout: 480 seconds)
[6:39] * kefu (~kefu@183.193.187.174) has joined #ceph
[6:41] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:42] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) Quit (Quit: m0zes__)
[6:46] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[6:47] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Connection reset by peer)
[6:48] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[6:48] * kefu (~kefu@183.193.187.174) Quit (Ping timeout: 480 seconds)
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[7:05] * IvanJobs_ (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[7:11] * overclk (~quassel@121.244.87.117) has joined #ceph
[7:18] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[7:19] * matj345314 (~matj34531@element.planetq.org) has joined #ceph
[7:21] * Enikma1 (~Misacorp@4MJAAFWSE.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:21] * matj345314 (~matj34531@element.planetq.org) Quit ()
[7:25] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[7:28] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:34] * penguinRaider (~KiKo@14.139.82.6) has joined #ceph
[7:35] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[7:38] * Miouge (~Miouge@91.177.58.174) has joined #ceph
[7:38] <ronrib> with cephfs i'm seeing writes of 1gb/s and reads of 120mb/s, the writes are great but is there anything I can do to improve the reads?
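One way to narrow down whether the slow reads above are CephFS-specific or cluster-wide is to benchmark RADOS directly; `rados bench` ships with Ceph, and the pool name here is a placeholder:

```shell
# Write benchmark objects for 30 s and keep them so the read phases have data:
rados bench -p testpool 30 write --no-cleanup
# Sequential and random reads of the objects just written:
rados bench -p testpool 30 seq
rados bench -p testpool 30 rand
# Remove the benchmark objects afterwards:
rados -p testpool cleanup
```

If raw RADOS reads are also slow, the problem is below CephFS (OSDs, network); if they are fast, client-side factors such as readahead settings are worth investigating.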
[7:41] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[7:43] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[7:43] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[7:44] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[7:44] * flisky (~Thunderbi@36.110.40.28) Quit (Quit: flisky)
[7:45] * shyu (~shyu@218.241.172.114) has joined #ceph
[7:47] * EinstCra_ (~EinstCraz@58.247.117.134) has joined #ceph
[7:48] * NTTEC_ (~nttec@119.93.91.136) has joined #ceph
[7:48] * NTTEC (~nttec@119.93.91.136) Quit (Read error: Connection reset by peer)
[7:50] * Enikma1 (~Misacorp@4MJAAFWSE.tor-irc.dnsbl.oftc.net) Quit ()
[7:53] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[7:54] * NTTEC (~nttec@203.177.235.23) has joined #ceph
[7:57] * Blfrg (~Blfrg@186.4.22.14) Quit (Remote host closed the connection)
[7:59] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[8:01] * NTTEC_ (~nttec@119.93.91.136) Quit (Ping timeout: 480 seconds)
[8:04] * mohmultihouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[8:05] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[8:05] * yatin (~yatin@161.163.44.8) has joined #ceph
[8:09] * penguinRaider (~KiKo@14.139.82.6) Quit (Ping timeout: 480 seconds)
[8:09] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:16] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:17] * yanzheng2 (~zhyan@125.70.23.87) has joined #ceph
[8:18] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[8:19] * yanzheng1 (~zhyan@125.70.23.87) Quit (Ping timeout: 480 seconds)
[8:23] * gauravbafna (~gauravbaf@49.32.0.140) has joined #ceph
[8:43] * matj345314 (~matj34531@141.255.254.208) has joined #ceph
[8:44] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:46] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[8:46] * agsha (~agsha@121.244.155.10) has joined #ceph
[8:48] * yatin (~yatin@161.163.44.8) has joined #ceph
[8:54] * EinstCra_ (~EinstCraz@58.247.117.134) Quit (Remote host closed the connection)
[8:54] * EinstCrazy (~EinstCraz@58.247.117.134) has joined #ceph
[8:55] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[8:55] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:58] * EinstCrazy (~EinstCraz@58.247.117.134) Quit (Remote host closed the connection)
[8:58] * EinstCrazy (~EinstCraz@58.247.117.134) has joined #ceph
[8:59] * ade (~abradshaw@nat-pool-str-u.redhat.com) has joined #ceph
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[9:01] * Hemanth (~hkumar_@121.244.87.118) has joined #ceph
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[9:01] * Hemanth (~hkumar_@121.244.87.118) Quit (Remote host closed the connection)
[9:05] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[9:06] * analbeard (~shw@support.memset.com) has joined #ceph
[9:07] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[9:08] * EinstCrazy (~EinstCraz@58.247.117.134) Quit (Remote host closed the connection)
[9:08] * EinstCrazy (~EinstCraz@58.247.117.134) has joined #ceph
[9:13] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Ping timeout: 480 seconds)
[9:13] * kutija (~kutija@89.216.27.139) has joined #ceph
[9:14] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[9:16] <sep> i am experiencing this problem ; ec-backed rbd images freeze ; http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-February/007881.html , is there a correct fix or an official workaround? the one mentioned in the mail does not do much for me. using latest hammer
[9:17] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:19] * rakeshgm (~rakesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[9:19] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[9:22] <Be-El> do you use a cache tier for the ec pool?
[9:22] <sep> the command rados -p ec-veeam setomapval rbd_id.bigimage dummy value ; did fix the problem. But is it a permanent fix, or just for now?
[9:22] <sep> Be-El, yes
[9:22] * EinstCrazy (~EinstCraz@58.247.117.134) Quit (Remote host closed the connection)
[9:23] * EinstCrazy (~EinstCraz@58.247.117.134) has joined #ceph
[9:25] <sep> http://tracker.ceph.com/issues/12903 ;; i assume this means it will be fixed soonish ? "pending backport?"
[9:26] * EinstCra_ (~EinstCraz@58.247.117.134) has joined #ceph
[9:27] * EinstCrazy (~EinstCraz@58.247.117.134) Quit (Read error: Connection reset by peer)
[9:27] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[9:28] <Be-El> i don't know, but the issue is recognized and solved at least
[9:29] <sep> do you know if the "rados -p ec-veeam setomapval rbd_id.bigimage dummy value" is a permanent workaround ? or something that needs to be run regularly to keep the image working ? perhaps put it in cron
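The workaround sep is asking about, as quoted in this exchange (the pool name `ec-veeam` and image name `bigimage` are from sep's cluster; the key/value are arbitrary):

```shell
# Writing any omap key-value pair to the RBD header object forces it onto
# the cache tier; omap operations are not supported on EC pools, which is
# what makes EC-backed images freeze in the linked report.
rados -p ec-veeam setomapval rbd_id.bigimage dummy value
```

Whether this needs to be reapplied depends on cache-tier eviction behaviour, which is exactly the open question in the conversation below.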
[9:30] <Be-El> no clue, i've used ec pools for cephfs only in the past
[9:31] <sep> i am beginning to suspect that might be a better way for me as well.
[9:32] <sep> i use huge rbd images to store backup images. they are quite large. does cephfs deal with that gracefully ?
[9:32] * EinstCra_ (~EinstCraz@58.247.117.134) Quit (Remote host closed the connection)
[9:32] <Be-El> cephfs adds more complexity to the setup since you need at least one mds
[9:33] <Be-El> and the posix-compliant file access may result in other problems, too (e.g. if applications do not cope with locking correctly)
[9:33] * EinstCrazy (~EinstCraz@58.247.117.134) has joined #ceph
[9:34] * garphy is now known as garphy`aw
[9:35] <Be-El> if you need concurrent access to the same filesystem from multiple hosts (e.g. a replacement for NFS), then cephfs is a nice alternative
[9:35] <Be-El> but i assume that you have one backup host writing to one image at a time
[9:36] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) has joined #ceph
[9:37] <NTTEC> Hello Be-El, thanks for the help last time. I was able to set up a ceph cluster; now my question is: which cloud platform is best to use with ceph? and after setting this up, what's next to learn for ceph?
[9:37] * onyb (~ani07nov@112.133.232.12) Quit (Ping timeout: 480 seconds)
[9:38] <Be-El> NTTEC: the cloud platform depends on your requirements. i've tested opennebula and openstack with ceph. both work fine. opennebula is easier to setup, openstack offers more functionality
[9:39] <Be-El> NTTEC: if you just need a bunch of virtual machines without much user interaction, you should also have a look at proxmox. also works with ceph out of the box
[9:39] <NTTEC> do I need to install the cloud platform in a separate server? or it should be on the admin-node?
[9:40] <Be-El> you need more than one server for any reasonable cloud solution
[9:41] <NTTEC> oh this is just for my learning materials.
[9:41] * Mika_c (~Mika@122.146.93.152) Quit (Remote host closed the connection)
[9:41] <NTTEC> so it would be best to have the cloud platform installed on a separate server?
[9:42] <Be-El> there isn't a single platform. openstack for example has controller and compute nodes (with at least 3 dedicated controller nodes for production setup)
[9:43] <NTTEC> wow, how about proxmox? and opennebula?
[9:43] * swami1 (~swami@49.44.57.239) has joined #ceph
[9:44] <Be-El> opennebula also has a central management server and dedicated compute servers
[9:44] <Be-El> the recommended way to install proxmox is also using dedicated hardware, since it comes as a ready-to-use iso image
[9:45] <Be-El> you can put opennebula and openstack on a single node, even shared with the ceph admin node. but it is not recommended
[9:46] <NTTEC> I see. but for an advanced user, would you recommend openstack for this?
[9:46] * onyb (~ani07nov@112.133.232.14) has joined #ceph
[9:46] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[9:47] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[9:47] <Be-El> it is probably the solution most people use and most users are familiar with (with the exception of public clouds like amazon, azure etc.)
[9:49] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:51] <NTTEC> I see. but after setting up ceph cluster, what else should I learn to further my knowledge on ceph?
[9:51] * EinstCrazy (~EinstCraz@58.247.117.134) Quit (Remote host closed the connection)
[9:52] <Be-El> NTTEC: start with monitoring, e.g. collectd, graphite, grafana
[9:52] * DanFoster (~Daniel@2a00:1ee0:3:1337:813:add1:e71d:1ef6) has joined #ceph
[9:53] * shyu (~shyu@218.241.172.114) Quit (Read error: Connection reset by peer)
[9:54] <NTTEC> ok thx for the info
[9:54] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[9:55] * garphy`aw is now known as garphy
[9:56] * inf_b (~o_O@dot1x-232-072.wlan.uni-giessen.de) has joined #ceph
[9:57] * dgurtner (~dgurtner@178.197.233.142) has joined #ceph
[9:57] * zdzichu (zdzichu@2001:470:71:68d::1) has joined #ceph
[9:57] <zdzichu> hi, what does it mean: # rbd create rbd/Oracle_ZFS_Storage-disk1 --size 53687091200
[9:57] <zdzichu> 2016-06-03 09:54:23.339942 7f1ec9f241c0 -1 librbd: image size not compatible with object map
[9:58] * scubacuda (sid109325@0001fbab.user.oftc.net) Quit (Remote host closed the connection)
[9:58] * braderhart (sid124863@braderhart.user.oftc.net) Quit (Remote host closed the connection)
[10:00] * Chaos_Llama (~Silentkil@chomsky.torservers.net) has joined #ceph
[10:00] * NTTEC (~nttec@203.177.235.23) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[10:01] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[10:02] * rmart04 (~rmart04@support.memset.com) has joined #ceph
[10:02] * pabluk_ is now known as pabluk
[10:04] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[10:04] * Mika_c (~Mika@122.146.93.152) has joined #ceph
[10:07] * flisky (~Thunderbi@36.110.40.30) has joined #ceph
[10:08] * thomnico (~thomnico@2a01:e35:8b41:120:118a:452c:7c6f:3c84) has joined #ceph
[10:08] * onyb (~ani07nov@112.133.232.14) Quit (Ping timeout: 480 seconds)
[10:14] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[10:15] <sep> Be-El, correct, there is only 1 writer or reader at any given time. but since the focus is cost per byte, having replication is difficult. so it does require erasure coding of some sort.
[10:16] <Be-El> well, it's the usual problem.....cost, speed, availability.....choose two ;-)
[10:16] <sep> zdzichu, do you have that much space ? 53687091200 Megabytes = 53 petabytes
[10:18] <sep> Be-El, indeed, i am trying to sacrifice speed... but it does need to be somewhat usable of course :)
[10:18] * thomnico (~thomnico@2a01:e35:8b41:120:118a:452c:7c6f:3c84) Quit (Ping timeout: 480 seconds)
[10:18] <zdzichu> oh damn, it's in megabytes
[10:19] <zdzichu> right, I was expecting it will take bytes
[10:19] <zdzichu> thanks
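The unit mix-up above is easy to verify: `rbd create --size` interprets a bare number as mebibytes, so a value intended as a byte count gets inflated by a factor of 2^20. The figure from zdzichu's command works out exactly:

```python
# zdzichu passed 53687091200 intending bytes, but rbd read it as MiB.
size_arg = 53687091200

GiB = 1024 ** 3
PiB = 1024 ** 5

intended_gib = size_arg / GiB                 # value read as bytes
requested_pib = (size_arg * 1024 ** 2) / PiB  # value read as MiB

print(intended_gib)   # 50.0 -> the 50 GiB image zdzichu wanted
print(requested_pib)  # 50.0 -> the 50 PiB image rbd was actually asked for
```

So the fix is to pass `--size 51200` (50 GiB expressed in MiB), or a suffixed value such as `--size 50G` on releases whose `rbd` accepts unit suffixes.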
[10:19] * onyb (~ani07nov@112.133.232.14) has joined #ceph
[10:19] <Be-El> sep: i would propose to write a mail to the mailing list (maybe even ceph-devel) about the state of the ec+rbd fix
[10:19] <sep> with cephfs. can you mount different parts on different machines ? to "shard" the cephfs ?
[10:20] <sep> i could try the mailinglist first. thanks for the help
[10:20] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[10:22] * xiucai (~hualingso@222.90.141.44) has joined #ceph
[10:22] <Be-El> both ceph-fuse and kernel cephfs allow you to specify a subdirectory to be used as root of the mount point
[10:22] <Be-El> with jewel you can also have multiple namespaces
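What Be-El describes, sketched for both clients (monitor address, credentials, and paths are placeholders; each backup host would mount a different subdirectory to "shard" the filesystem):

```shell
# Kernel client: expose only /backups/host-a as this client's root.
sudo mount -t ceph 192.0.2.10:6789:/backups/host-a /mnt/backups \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# FUSE client equivalent: -r sets the subdirectory used as the mount root.
sudo ceph-fuse -r /backups/host-a /mnt/backups
```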
[10:23] <ceph_> log [ERR] : deep-scrub 4.576f ca7a576f/rbd_data.2b0cf7c5d46f1.000000000000d7d4/head//4 on disk size (8388608) does not match object info size (4562944) adjusted for ondisk to (4562944)
[10:23] <ceph_> does anyone know about this?
[10:24] <ceph_> the object's [size] metadata does not equal the on-disk file size
[10:26] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) Quit (Quit: Bye guys!)
[10:27] * thomnico (~thomnico@2a01:e35:8b41:120:d0e9:bcf6:5dab:a7ad) has joined #ceph
[10:30] * Chaos_Llama (~Silentkil@4MJAAFWYT.tor-irc.dnsbl.oftc.net) Quit ()
[10:33] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[10:36] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[10:37] * swami2 (~swami@49.32.0.152) has joined #ceph
[10:38] * LeaChim (~LeaChim@host86-168-126-119.range86-168.btcentralplus.com) has joined #ceph
[10:39] * swami1 (~swami@49.44.57.239) Quit (Ping timeout: 480 seconds)
[10:42] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:43] * onyb (~ani07nov@112.133.232.14) Quit (Ping timeout: 480 seconds)
[10:52] * TMM (~hp@185.5.121.201) has joined #ceph
[10:52] * onyb (~ani07nov@112.133.232.14) has joined #ceph
[10:56] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:3df9:f187:182b:27cf) has joined #ceph
[10:56] * rraja (~rraja@121.244.87.117) has joined #ceph
[10:56] <xiucai> :)
[10:57] * scubacuda (sid109325@0001fbab.user.oftc.net) has joined #ceph
[10:58] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:00] * Coe|work (~Bobby@5.255.80.27) has joined #ceph
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[11:01] * yatin (~yatin@161.163.44.8) has joined #ceph
[11:03] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[11:04] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[11:08] * kefu_ is now known as kefu
[11:09] * yatin (~yatin@161.163.44.8) Quit (Ping timeout: 480 seconds)
[11:10] * braderhart (sid124863@braderhart.user.oftc.net) has joined #ceph
[11:13] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[11:17] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[11:18] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[11:19] <bla> hi ... my little experimental ceph cluster says: mds cluster is degraded. The mds log: ).connect claims to be [2001:67c:670:100:9e5c:8eff:fece:cdfe]:6800/10304 not [2001:67c:670:100:9e5c:8eff:fece:cdfe]:6800/13660 - wrong node!
[11:19] <bla> what could i have broken?
[11:20] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[11:21] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[11:21] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[11:21] * shylesh (~shylesh@121.244.87.118) Quit ()
[11:22] * flisky1 (~Thunderbi@36.110.40.26) has joined #ceph
[11:23] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[11:25] * flisky (~Thunderbi@246e281e.test.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[11:25] * flisky1 is now known as flisky
[11:28] * onyb (~ani07nov@112.133.232.14) Quit (Ping timeout: 480 seconds)
[11:30] * Coe|work (~Bobby@4MJAAFW1F.tor-irc.dnsbl.oftc.net) Quit ()
[11:30] * Da_Pineapple (~Spessu@23.254.211.232) has joined #ceph
[11:33] <mistur> Hello
[11:34] <mistur> quick question : do I need to run the mds service on a mon server, or can it be on a different server?
[11:37] * onyb (~ani07nov@112.133.232.14) has joined #ceph
[11:43] * yatin (~yatin@161.163.44.8) has joined #ceph
[11:43] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[11:45] * infernix (nix@000120cb.user.oftc.net) Quit (Quit: ZNC - http://znc.sourceforge.net)
[11:45] <IvanJobs> mistur, it depends. you'd better keep them apart, but putting them together is fine.
[11:45] * dugravot6 (~dugravot6@194.199.223.4) has joined #ceph
[11:46] <IcePic> I think it can (and even should) be separate.
[11:46] <IcePic> Ideally, you will have a node for a particular type of process. For example, some nodes may run ceph-osd daemons, other nodes may run ceph-mds daemons, and still other nodes may run ceph-mon daemons.
[11:46] <IcePic> from the ceph web.
[11:47] <IcePic> but it will probably work out fine to test with all in the same node, if your test is small-scale and not about performance and redundancy and so on.
[11:48] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[11:48] * ceph_ (~chris@180.168.197.82) Quit (Read error: Connection reset by peer)
[11:49] * ceph_ (~chris@222.73.33.154) has joined #ceph
[11:49] * kefu (~kefu@114.92.122.74) has joined #ceph
[11:53] * dan__ (~Daniel@office.34sp.com) has joined #ceph
[11:55] * Miouge (~Miouge@91.177.58.174) Quit (Quit: Miouge)
[11:59] * murmur_ (~murmur@zeeb.org) has joined #ceph
[12:00] * Da_Pineapple (~Spessu@7V7AAFNFF.tor-irc.dnsbl.oftc.net) Quit ()
[12:00] * LorenXo (~Kristophe@7V7AAFNGA.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:00] * DanFoster (~Daniel@2a00:1ee0:3:1337:813:add1:e71d:1ef6) Quit (Ping timeout: 480 seconds)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[12:01] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[12:01] <ceph_> how do I change obj metadata?
[12:02] * scubacuda (sid109325@0001fbab.user.oftc.net) Quit (Remote host closed the connection)
[12:02] * braderhart (sid124863@braderhart.user.oftc.net) Quit (Remote host closed the connection)
[12:02] <ceph_> obj size in attrs, etc.
[12:03] * inf_b (~o_O@dot1x-232-072.wlan.uni-giessen.de) Quit (Ping timeout: 480 seconds)
[12:09] * murmur (~murmur@zeeb.org) Quit (Remote host closed the connection)
[12:10] * dgurtner (~dgurtner@178.197.233.142) Quit (Read error: Connection reset by peer)
[12:11] * rendar (~I@host251-119-dynamic.57-82-r.retail.telecomitalia.it) has joined #ceph
[12:11] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Read error: Connection reset by peer)
[12:13] * yatin (~yatin@161.163.44.8) has joined #ceph
[12:13] * Miouge (~Miouge@91.177.58.174) has joined #ceph
[12:14] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[12:15] * dugravot6 (~dugravot6@194.199.223.4) Quit (Quit: Leaving.)
[12:15] * onyb (~ani07nov@112.133.232.14) Quit (Ping timeout: 480 seconds)
[12:16] * dugravot6 (~dugravot6@194.199.223.4) has joined #ceph
[12:17] * thomnico (~thomnico@2a01:e35:8b41:120:d0e9:bcf6:5dab:a7ad) Quit (Remote host closed the connection)
[12:17] * Mika_c (~Mika@122.146.93.152) Quit (Remote host closed the connection)
[12:17] * scubacuda (sid109325@0001fbab.user.oftc.net) has joined #ceph
[12:18] * inf_b (~o_O@dot1x-232-072.wlan.uni-giessen.de) has joined #ceph
[12:19] <kefu> ceph_ all your replicas are marked inconsistent?
[12:20] * infernix (nix@000120cb.user.oftc.net) has joined #ceph
[12:23] * onyb (~ani07nov@112.133.232.28) has joined #ceph
[12:23] <ceph_> yes
[12:24] <ceph_> pg status: active+clean+inconsistent
[12:24] <ceph_> my obj attr: #getfattr -d 'rbd\udata.5343b29d46b4a.0000000000000a69__head_6155CB5F__11' ; output:# file: rbd\134udata.5343b29d46b4a.0000000000000a69__head_6155CB5F__1
[12:24] <ceph_> #output : user.ceph.snapset=0sAgIZAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAA== user.cephos.spill_out=0sMQA=
[12:25] <ceph_> I tried to decode it, but an error occurred
[12:25] * thomnico (~thomnico@2a01:e35:8b41:120:489e:f6c:5c85:ab71) has joined #ceph
[12:26] * mortn (~mortn@h17n9-t-a12.ias.bredband.telia.com) has joined #ceph
[12:30] * LorenXo (~Kristophe@7V7AAFNGA.tor-irc.dnsbl.oftc.net) Quit ()
[12:30] <kefu> ceph_, but your obj does not have an object_info in its xattr...
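For reference, values getfattr prints with a "0s" prefix are base64-encoded binary; the user.ceph.snapset value pasted above can be decoded offline to sanity-check it (a sketch; the value is copied from the paste above):

```shell
# getfattr prints binary xattrs base64-encoded with a "0s" prefix; strip the
# prefix and decode.  The value below is the user.ceph.snapset pasted above.
snapset='AgIZAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAA=='
printf '%s' "$snapset" | base64 -d | od -An -tx1
# The first bytes are the ceph encoder header: struct_v (02), struct_compat
# (02) and a little-endian 32-bit payload length (19 00 00 00 = 25 bytes).
printf '%s' "$snapset" | base64 -d | wc -c   # 31 bytes total (6 header + 25 payload)
```

The header decodes cleanly, which is consistent with kefu's point: the snapset xattr is intact, it is the object_info (user.ceph._) xattr that is missing.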
[12:30] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:34] * braderhart (sid124863@braderhart.user.oftc.net) has joined #ceph
[12:38] * rmart04_ (~rmart04@5.153.255.226) has joined #ceph
[12:40] * rmart04 (~rmart04@support.memset.com) Quit (Ping timeout: 480 seconds)
[12:41] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[12:42] <ceph_> It's bad: some objects have a selinux xattr but not the ceph.user._ one
[12:43] * shyu (~shyu@218.241.172.114) has joined #ceph
[12:45] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[12:46] * rmart04 (~rmart04@support.memset.com) has joined #ceph
[12:46] * analbeard (~shw@support.memset.com) has joined #ceph
[12:46] * rmart04_ (~rmart04@5.153.255.226) Quit (Ping timeout: 480 seconds)
[12:48] * viisking (~viisking@183.80.255.12) Quit (Read error: Connection reset by peer)
[12:51] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[12:51] * yatin (~yatin@161.163.44.8) has joined #ceph
[12:55] * ingvarha_ is now known as ingvar
[13:00] * Scaevolus (~PuyoDead@atlantic850.dedicatedpanel.com) has joined #ceph
[13:00] <kefu> ceph_ probably you can put the object back using the "rados put" command?
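kefu's "rados put" suggestion would look roughly like this (a sketch; the object name is the unescaped form of the filestore filename pasted above, the pool name and pg id are placeholders, and this rewrites the object, so take a copy first):

```shell
# Rewriting an object through librados regenerates its object_info xattr.
# Filestore escapes "_" as "\u", so the on-disk name above corresponds to
# this object name.  "rbd" (the pool) and <pgid> are placeholders.
rados -p rbd get rbd_data.5343b29d46b4a.0000000000000a69 /tmp/obj.bak   # keep a copy
rados -p rbd put rbd_data.5343b29d46b4a.0000000000000a69 /tmp/obj.bak   # write it back
ceph pg repair <pgid>   # then re-scrub the inconsistent pg
```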
[13:00] <ceph_> no
[13:00] <kefu> so the object_info gets updated?
[13:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[13:01] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[13:01] <ceph_> I use cluster with openstack
[13:01] <ceph_> rbd
[13:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[13:01] <mistur> IcePic: ok because I have a cluster with MDSs and MONs on different server
[13:01] * linjan__ (~linjan@176.195.152.175) Quit (Remote host closed the connection)
[13:01] * linjan (~linjan@176.195.152.175) has joined #ceph
[13:01] <ceph_> my cluster was upgraded from 0.72
[13:02] <ceph_> and is now 0.80.5
[13:02] <kefu> looks like ancient ...
[13:02] <mistur> IcePic: and when I try to " mount -t ceph <mds_server>:/ /mnt "
[13:03] <kefu> we don't support omap-backed xattrs now.
[13:03] <mistur> It tries to connect to port 6789 on the <mds_server>
[13:03] <kefu> ceph_ ^
[13:03] * shyu (~shyu@218.241.172.114) Quit (Ping timeout: 480 seconds)
[13:03] <mistur> but 6789 is the mon port not mds
[13:03] * onyb (~ani07nov@112.133.232.28) Quit (Ping timeout: 480 seconds)
[13:03] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[13:04] <ceph_> e...
[13:04] <ceph_> too bad
[13:04] <ceph_> Zzz...
[13:04] <ceph_> for now I just deleted the rbd image that included the obj
[13:05] <kefu> ceph_ do you have the ceph_osdomap_tool?
[13:05] * derjohn_mob (~aj@185.65.67.249) Quit (Ping timeout: 480 seconds)
[13:06] <kefu> but i'd suggest you upgrade from firefly to hammer.
[13:06] <ceph_> yes
[13:07] <kefu> firefly is EOL now.
[13:07] <ceph_> I have this tool
[13:07] * yatin (~yatin@161.163.44.8) has joined #ceph
[13:07] <ceph_> yes , I know
[13:07] <kefu> maybe you can use it to extract the object info for you?
[13:07] <kefu> i have not tried so.
[13:08] <ceph_> ok
[13:08] <ceph_> thx u all the same
[13:08] <kefu> yw =)
[13:08] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[13:08] <ceph_> but for now it's just two pgs
[13:08] <ceph_> and only a few objs
[13:08] <ceph_> I'll try deleting this obj and test
[13:09] <kefu> k.
[13:09] * NTTEC (~nttec@122.53.162.158) Quit (Ping timeout: 480 seconds)
[13:11] * IvanJobs (~ivanjobs@103.50.11.146) Quit ()
[13:11] * flisky (~Thunderbi@36.110.40.26) Quit (Ping timeout: 480 seconds)
[13:12] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[13:12] * ram (~oftc-webi@static-202-65-140-146.pol.net.in) has joined #ceph
[13:13] <ram> Hi, could anyone please provide a good reference link for Ceph integration with Keystone?
[13:15] <mistur> IcePic: ok, my bad, it expects the mon server and not the mds server
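That is the resolution of the mount confusion above: the kernel client is pointed at a monitor, not an MDS. A sketch with placeholder names (the mon address, user and secret file are assumptions):

```shell
# CephFS kernel mount: the address is a *mon* (default port 6789), never an mds.
# mon1.example.com and the secretfile path are placeholders.
mount -t ceph mon1.example.com:6789:/ /mnt \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```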
[13:15] <mistur> http://docs.ceph.com/docs/jewel/rados/operations/authentication 404 :(
[13:16] <vikhyat> mistur: ram http://docs.ceph.com/docs/master/radosgw/keystone/
[13:17] <mistur> vikhyat: I don't have keystone
[13:18] <vikhyat> mistur: ahh sorry my bad it was for ram
[13:18] <mistur> vikhyat: ha ok :)
[13:18] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:19] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:20] <Kvisle> has anyone enabled keystone authentication in radosgw after running radosgw for a while? I wonder how it affects the existing users
[13:21] <Kvisle> as I interpret the documentation, it will create the keystone-users in ceph upon successful auth - which means that existing users should be unaffected
[13:21] <Kvisle> but it'd be useful to have some confirmation on this
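For ram's original question, the radosgw/Keystone hookup is a handful of ceph.conf options (a sketch only; the Keystone URL, token and client section name are placeholders, and the accepted-roles list depends on your deployment — see the radosgw/keystone docs page linked above):

```shell
# Append a minimal Keystone section for radosgw to ceph.conf (placeholders
# throughout -- adjust the client name, URL, token and roles to your setup).
cat >> /etc/ceph/ceph.conf <<'EOF'
[client.radosgw.gateway]
rgw keystone url = http://keystone.example.com:35357
rgw keystone admin token = SECRET_TOKEN
rgw keystone accepted roles = Member, admin
rgw keystone token cache size = 500
EOF
```

Consistent with Kvisle's reading, Keystone-authenticated users are created in radosgw on first successful auth, so pre-existing local users should be unaffected.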
[13:23] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[13:24] * mortn (~mortn@h17n9-t-a12.ias.bredband.telia.com) Quit (Quit: Leaving)
[13:25] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[13:28] <flaf> Be-El: hello ;). I don't know if you remember, but I had a problem mounting at boot with fuse.ceph on Ubuntu Trusty. I have finally found the problem. Just for information, it's not a bug in upstart but a bug in the package "mountall", used by upstart on Ubuntu Trusty to mount the filesystems in fstab. All is explained in my bug report here:
[13:28] <flaf> https://bugs.launchpad.net/ubuntu/+source/mountall/+bug/1588594 (a simple problem in the strip_slashes() function ;)).
[13:29] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:30] * Scaevolus (~PuyoDead@06SAADF1R.tor-irc.dnsbl.oftc.net) Quit ()
[13:30] * Grimmer (~Jourei@tollana.enn.lu) has joined #ceph
[13:39] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[13:40] * ceph_ (~chris@222.73.33.154) Quit (Quit: Leaving)
[13:40] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[13:41] * huangjun (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[13:43] * kefu (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[13:44] * kefu (~kefu@114.92.122.74) has joined #ceph
[13:47] * dgurtner (~dgurtner@178.197.233.142) has joined #ceph
[13:49] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[13:50] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[13:54] * xiucai (~hualingso@222.90.141.44) has left #ceph
[14:00] * Grimmer (~Jourei@06SAADF27.tor-irc.dnsbl.oftc.net) Quit ()
[14:00] * xENO_ (~Nephyrin@176.10.99.207) has joined #ceph
[14:01] * hellertime (~Adium@72.246.3.14) has joined #ceph
[14:01] <hellertime> ok this is not expected:
[14:01] <hellertime> pg 4.16 is active+undersized+degraded, acting [243,2147483647,237]
[14:01] <hellertime> i didn't realize my cluster had an osd.2147483647
[14:01] <hellertime> :)
[14:03] <Be-El> hellertime: you do not have enough hosts to satisfy the crush requirements. the number means -1 == no osd found
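The magic number in that acting set is just the largest signed 32-bit value, which Ceph prints as a placeholder when CRUSH cannot map an OSD into a slot; quick check:

```shell
# 2147483647 == 0x7fffffff (INT32_MAX), the "no osd mapped here" placeholder
printf '%d\n' 0x7fffffff   # prints 2147483647
```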
[14:04] <vikhyat> hellertime: looks like something is missing in the code; we should not print it like this. Can you create a tracker issue?
[14:04] <vikhyat> hellertime: http://tracker.ceph.com/projects/ceph/issues/new
[14:04] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[14:06] <hellertime> I'll go do that
[14:06] <vikhyat> thank you !
[14:06] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[14:06] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[14:06] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[14:09] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[14:12] * georgem (~Adium@24.114.68.95) has joined #ceph
[14:12] * georgem (~Adium@24.114.68.95) Quit ()
[14:13] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:14] * skyrat (~skyrat@94.230.156.78) has joined #ceph
[14:16] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[14:17] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) has joined #ceph
[14:18] * huangjun (~kvirc@117.151.50.152) has joined #ceph
[14:19] * gauravbafna (~gauravbaf@49.32.0.140) Quit (Remote host closed the connection)
[14:20] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[14:20] * inf_b (~o_O@dot1x-232-072.wlan.uni-giessen.de) Quit (Quit: No Ping reply in 180 seconds.)
[14:21] * inf_b (~o_O@dot1x-232-072.wlan.uni-giessen.de) has joined #ceph
[14:23] <skyrat> Hi, I'm asking for help. I have a really small testing Ceph cluster: 3 nodes, 5 OSDs. I rebooted one node (with only one OSD) and after the reboot the OSD daemon fails to start. Everything was deployed by the ceph-deploy script; the OSDs were created with --dmcrypt. The problem is that the disk is not decrypted, therefore /dev/mapper/<uuid> doesn't exist and cannot be mounted to /var/lib/ceph/osd/...
[14:25] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[14:26] * huangjun (~kvirc@117.151.50.152) Quit (Ping timeout: 480 seconds)
[14:28] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:30] * xENO_ (~Nephyrin@4MJAAFW9Q.tor-irc.dnsbl.oftc.net) Quit ()
[14:32] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[14:32] * bene (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) has joined #ceph
[14:34] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) Quit (Ping timeout: 480 seconds)
[14:34] * shylesh (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[14:37] * EinstCrazy (~EinstCraz@116.238.107.127) has joined #ceph
[14:42] * flisky (~Thunderbi@106.37.236.217) has joined #ceph
[14:42] * flisky (~Thunderbi@106.37.236.217) Quit ()
[14:43] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:43] * fli_ (fli@eastside.wirebound.net) has joined #ceph
[14:43] * fli (fli@eastside.wirebound.net) Quit (Read error: Connection reset by peer)
[14:47] * mattbenjamin1 (~mbenjamin@12.118.3.106) has joined #ceph
[14:50] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:50] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:56] * EinstCrazy (~EinstCraz@116.238.107.127) Quit (Remote host closed the connection)
[14:57] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:01] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[15:02] * EinstCrazy (~EinstCraz@116.238.107.127) has joined #ceph
[15:03] <Heebie> skyrat: Have you checked to see if there is mention of the OSD disks in /etc/crypttab? (I'm not sure if there should be or not, but I'd expect it to be there if it's a permanent thing.)
[15:08] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:09] * dan__ (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[15:13] <skyrat> Heebie, thanks for the reply. No, there is no mention in crypttab, even on the other nodes (which I did not reboot, but expect the same behavior). From my understanding one should be able to activate the osd from the admin node using `ceph-deploy disk activate` or directly on the node via the init script - this is skipped in the default config because starting the osd daemon is driven by a udev rule. I'm not able to start it in any of those ways because all of them end up
[15:13] <skyrat> invoking `ceph-disk activate ...` on the given node, which fails. It is definitely connected with the fact that there is no saved info about how to decrypt the disk.
[15:15] <Heebie> That would be in /etc/crypttab, or would have to be manually passed to cryptsetup by whatever is setting it up. (I've never used encrypted disks with ceph, but I have used them.)
[15:18] <Heebie> Did you have to pass the encryption keys with the initial ceph-deploy osd create command? If not, perhaps they're locally stored in ceph.conf (or <clustername>.conf if your cluster isn't named "ceph") and haven't been copied out elsewhere? Hopefully one of the guys who actually knows will show up soon. (It's almost a given that at least a few here will know.. there are some very knowledgeable people!)
[15:19] <skyrat> I know, I use encrypted disks on a daily basis, and was actually surprised to see an empty crypttab, but I thought the info was stored in the ceph config by ceph-deploy. Actually `ceph-deploy osd prepare` formatted, encrypted and prepared the disk perfectly *AND* triggered cryptsetup to decrypt it.
[15:19] <skyrat> after that, the activate function worked as expected
[15:20] <Heebie> Does luks use crypttab? I've always just used cryptsetup by itself, for everything.. I've never used luks. Maybe ceph uses that?
[15:20] * kefu (~kefu@114.92.122.74) has joined #ceph
[15:20] <skyrat> I use luks for everything and cryptsetup uses crypttab
[15:21] <skyrat> for that
[15:21] <Heebie> I'm going to leave it alone and hope someone who has a clue about it shows up and answers your question. :)
[15:22] <skyrat> ok, thanks for your time
[15:25] <skyrat> I did not pass the encryption keys explicitly but they are stored on the node in `/etc/ceph/dmcrypt-keys/`
[15:25] * m0zes__ (~mozes@n117m02.cis.ksu.edu) has joined #ceph
[15:28] * scg (~zscg@valis.gnu.org) has joined #ceph
[15:28] <IcePic> Gandalf says: You shall not pass!... the encryption keys.
[15:30] * Scrin (~Harryhy@06SAADF91.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:32] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:32] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:32] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[15:33] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:36] * scg (~zscg@valis.gnu.org) Quit (Ping timeout: 480 seconds)
[15:37] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[15:37] <skyrat> IcePic, makes sense! I did just `ceph-deploy osd prepare --dmcrypt`, but the 128-byte key files do exist on the nodes in the above dir.
[15:38] <skyrat> -rw------- 1 root root 128 Mar 22 11:44 63b8fc87-010a-4518-a85c-a147996da525.luks.key
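Given those key files, a manual recovery might look like this (a sketch, untested; the data partition `/dev/sdb1` is a placeholder, and the UUID is the one from the listing above — it must match the partition's dm-crypt UUID):

```shell
# Re-open the dm-crypt mapping by hand with the key ceph-deploy left behind,
# then let ceph-disk finish activation.  /dev/sdb1 is a placeholder for the
# osd data partition; the UUID names both the key file and the mapping.
UUID=63b8fc87-010a-4518-a85c-a147996da525
cryptsetup luksOpen --key-file "/etc/ceph/dmcrypt-keys/$UUID.luks.key" \
    /dev/sdb1 "$UUID"
ceph-disk activate "/dev/mapper/$UUID"
```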
[15:38] * overclk (~quassel@121.244.87.117) Quit (Remote host closed the connection)
[15:41] * TMM (~hp@185.5.121.201) has joined #ceph
[15:43] * Miouge (~Miouge@91.177.58.174) Quit (Quit: Miouge)
[15:47] * yatin (~yatin@203.212.245.90) has joined #ceph
[15:48] * scg (~zscg@valis.gnu.org) has joined #ceph
[15:49] * Racpatel (~Racpatel@2601:87:3:3601::4edb) has joined #ceph
[15:49] * inf_b (~o_O@dot1x-232-072.wlan.uni-giessen.de) Quit (Remote host closed the connection)
[15:49] * Miouge (~Miouge@91.177.58.174) has joined #ceph
[15:55] * mohmultihouse (~mohmultih@gw01.mhitp.dk) Quit (Read error: Connection reset by peer)
[15:55] * ade (~abradshaw@nat-pool-str-u.redhat.com) Quit (Ping timeout: 480 seconds)
[15:59] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[16:00] * Scrin (~Harryhy@06SAADF91.tor-irc.dnsbl.oftc.net) Quit ()
[16:07] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[16:07] * sudocat (~dibarra@2602:306:8bc7:4c50:2154:c2aa:b0f5:f23a) has joined #ceph
[16:08] * yanzheng2 (~zhyan@125.70.23.87) Quit (Quit: This computer has gone to sleep)
[16:11] * EinstCrazy (~EinstCraz@116.238.107.127) Quit (Remote host closed the connection)
[16:13] * EinstCrazy (~EinstCraz@116.238.107.127) has joined #ceph
[16:15] * EinstCrazy (~EinstCraz@116.238.107.127) Quit (Remote host closed the connection)
[16:16] * Racpatel (~Racpatel@2601:87:3:3601::4edb) Quit (Quit: Leaving)
[16:16] * Racpatel (~Racpatel@2601:87:3:3601::4edb) has joined #ceph
[16:17] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:17] * ibravo (~ibravo@72.83.69.64) Quit (Quit: Leaving)
[16:21] * shaunm (~shaunm@cpe-192-180-17-174.kya.res.rr.com) has joined #ceph
[16:22] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[16:24] <georgem> any recommendation for "vm.min_free_kbytes" on large OSD nodes?
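One hedged data point: tuning guides commonly suggest reserving on the order of 1–4 GB on large-RAM OSD nodes so that allocations under memory pressure don't stall; verify the number against your distribution's guidance before applying it. A sketch:

```shell
# Reserve ~2 GB for the kernel's free-page pool (a commonly suggested
# ballpark for large-RAM OSD nodes -- a placeholder, not a recommendation).
sysctl -w vm.min_free_kbytes=2097152
# persist it across reboots:
echo 'vm.min_free_kbytes = 2097152' >> /etc/sysctl.d/90-ceph.conf
```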
[16:27] * MACscr (~Adium@c-73-9-230-5.hsd1.il.comcast.net) has joined #ceph
[16:28] <MACscr> how should I troubleshoot OSDs not being automatically mounted at boot? what exactly handles that?
[16:28] <MACscr> seems to be random
[16:28] <MACscr> and i always forget what actually does the mounting. lol
[16:29] <zdzichu> fstab?
[16:29] <mnaser> for me (rhel) the startup scripts mount it
[16:29] <mnaser> so starting the osd services will start it up
[16:30] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) has joined #ceph
[16:32] <MACscr> mnaser: hmm, any ideas why these fsid's would be different? http://paste.debian.net/713800/
[16:32] <MACscr> i know im using the proxmox flavor, which always makes things harder to troubleshoot. it is pretty much still hammer though
[16:33] <mnaser> were these drives used before in another cluster
[16:33] <MACscr> mnaser: same existing cluster, just everything was rebooted and now this =/
[16:33] <mnaser> looks like some of your drives were matched with the wrong fsid
[16:33] <mnaser> and i think what's happening is
[16:34] <mnaser> you are manually mounting them
[16:34] * xarses_ (~xarses@64.124.158.100) has joined #ceph
[16:34] <mnaser> but ceph refuses to automatically mount them because the fsid is incorrect
[16:34] <MACscr> but im not
[16:34] <mnaser> how did you create those osds
[16:34] <MACscr> i havent changed anything
[16:34] <MACscr> not with the configs at least
[16:34] <mnaser> did you manually create partitions
[16:34] <mnaser> and mount them
[16:34] <mnaser> to setup the osds?
[16:35] <MACscr> 6 months ago?
[16:35] <mnaser> when you first set up the cluster
[16:35] <MACscr> yes, i guess so
[16:35] <mnaser> so these drives were probably deployed before using ceph-disk with another uuid
[16:35] <mnaser> so the solution is either mount them manually
[16:36] <mnaser> or update the fsid so they mount automatically with the start command
[16:36] <mnaser> by update the fsid
[16:36] <mnaser> not the one in the config, but the one attached to the drive
[16:37] <MACscr> how do i do that?
[16:37] <MACscr> i have 6 osd's and they all show the same fsid error
[16:38] <MACscr> that is diff than the config
[16:38] <MACscr> so why not just update the config with the fsid that the disks think its going to be?
[16:38] <mnaser> because the rest of the cluster might have that fsid
[16:38] <mnaser> what is the fsid in the rest of your cluster config
[16:39] * swami2 (~swami@49.32.0.152) Quit (Quit: Leaving.)
[16:40] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[16:40] <MACscr> same one thats listed in that pastebin
[16:40] <MACscr> they all share the same config file
[16:41] <MACscr> as its a network share type thing or whatever crazyness proxmox uses
[16:41] <mnaser> so for some reason all the fsids on the partitions/osds are not matching your actual disks fsid
[16:42] <MACscr> well you say the rest of your config, im just talking about the ceph.conf
[16:42] <MACscr> should i be looking at something else?
[16:42] <mnaser> cat /var/lib/ceph/osd/ceph-N/ceph_fsid
[16:42] <mnaser> on that node
[16:42] <mnaser> that should match the fsid in your ceph config
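The check mnaser describes can be scripted (a sketch; the helper just compares the two values, and the UUID in the demo is made up for illustration):

```shell
# Compare the fsid stored in an OSD's data dir with the one in ceph.conf.
# On a real node: check_fsid /var/lib/ceph/osd/ceph-0 /etc/ceph/ceph.conf
check_fsid() {
    osd_fsid=$(cat "$1/ceph_fsid")
    conf_fsid=$(sed -n 's/^[[:space:]]*fsid[[:space:]]*=[[:space:]]*//p' "$2")
    if [ "$osd_fsid" = "$conf_fsid" ]; then echo match; else echo mismatch; fi
}

# Demo against throwaway files (the UUID is made up for illustration):
d=$(mktemp -d)
printf 'e7191f12-5e69-4f9c-9fa4-d7c5f94db2c8\n' > "$d/ceph_fsid"
printf '[global]\nfsid = e7191f12-5e69-4f9c-9fa4-d7c5f94db2c8\n' > "$d/ceph.conf"
check_fsid "$d" "$d/ceph.conf"   # prints: match
```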
[16:43] * yatin (~yatin@203.212.245.90) Quit (Remote host closed the connection)
[16:43] * wushudoin (~wushudoin@2601:646:9501:d2b2:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:44] * wushudoin (~wushudoin@2601:646:9501:d2b2:2ab2:bdff:fe0b:a6ee) Quit ()
[16:45] * wushudoin (~wushudoin@2601:646:9501:d2b2:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:45] <MACscr> its empty
[16:45] <MACscr> that folder
[16:45] <mnaser> ..where are your osds mounted
[16:45] <mnaser> lol
[16:45] <MACscr> they arent mounted, lol, thats my point
[16:45] <mnaser> and when they'll be mounted, the ceph_fsid is probably not matching
[16:46] <mnaser> mount them manually and check the ceph_fsid then
[16:46] <MACscr> i dont even know what the paths are going to be and i havent had to do this in the past
[16:47] <mnaser> i dont know then \o/
[16:47] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:51] * rmart04 (~rmart04@support.memset.com) Quit (Quit: rmart04)
[16:55] * Vacuum_ (~Vacuum@i59F79CE0.versanet.de) has joined #ceph
[16:55] * untoreh (~fra@151.26.29.18) has joined #ceph
[16:56] * cholcombe (~chris@2001:67c:1562:8007::aac:40f1) Quit (Ping timeout: 480 seconds)
[16:57] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[16:57] * matj345314 (~matj34531@141.255.254.208) Quit (Quit: matj345314)
[16:58] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[17:02] * Vacuum__ (~Vacuum@i59F79C95.versanet.de) Quit (Ping timeout: 480 seconds)
[17:02] * kutija (~kutija@89.216.27.139) Quit (Quit: Textual IRC Client: www.textualapp.com)
[17:04] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:04] * cathode (~cathode@50.232.215.114) has joined #ceph
[17:06] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:07] * yatin (~yatin@203.212.245.90) has joined #ceph
[17:07] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[17:10] * shylesh__ (~shylesh@45.124.225.97) has joined #ceph
[17:16] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: Lost terminal)
[17:17] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[17:17] * agsha (~agsha@121.244.155.10) Quit (Remote host closed the connection)
[17:18] * dvanders (~dvanders@pb-d-128-141-3-250.cern.ch) Quit (Ping timeout: 480 seconds)
[17:21] * nill (~nill@103.51.75.193) has joined #ceph
[17:23] * nill (~nill@103.51.75.193) Quit ()
[17:23] * yatin (~yatin@203.212.245.90) Quit (Remote host closed the connection)
[17:30] * blank (~Bromine@exit1.ipredator.se) has joined #ceph
[17:39] * TMM (~hp@185.5.121.201) Quit (Ping timeout: 480 seconds)
[17:43] * dillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: Leaving)
[17:46] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[17:49] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:50] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[17:52] <skyrat> The encrypted OSD got rebooted, and after that all "activate" attempts fail. The reason is obvious: the partitions do not get decrypted. The osd was marked out&down in the meantime. Did that also call the "deactivate" function, which trashes the keys and other stuff? Can it be connected again without re-formatting?
[17:55] <MACscr> mnaser: crap, all my configs were wiped and im finding out my backups from r1soft seem to be incomplete. ugh!
[17:59] * kefu (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[17:59] * kefu (~kefu@55.99.caa1.ip4.static.sl-reverse.com) has joined #ceph
[18:00] * blank (~Bromine@06SAADGG2.tor-irc.dnsbl.oftc.net) Quit ()
[18:00] * Xeon06 (~Nephyrin@7V7AAFNZI.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:01] * whatevsz_ (~quassel@185.22.47.212) has joined #ceph
[18:02] * Skaag (~lunix@65.200.54.234) has joined #ceph
[18:02] * whatevsz__ (~quassel@b9168e90.cgn.dg-w.de) has joined #ceph
[18:08] * whatevsz (~quassel@b9168e90.cgn.dg-w.de) Quit (Ping timeout: 480 seconds)
[18:10] * whatevsz_ (~quassel@185.22.47.212) Quit (Ping timeout: 480 seconds)
[18:10] * dugravot6 (~dugravot6@194.199.223.4) Quit (Quit: Leaving.)
[18:12] * pabluk is now known as pabluk_
[18:15] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) has joined #ceph
[18:17] * thomnico (~thomnico@2a01:e35:8b41:120:489e:f6c:5c85:ab71) Quit (Quit: Ex-Chat)
[18:20] * xarses_ (~xarses@64.124.158.100) Quit (Remote host closed the connection)
[18:27] * thomnico (~thomnico@2a01:e35:8b41:120:489e:f6c:5c85:ab71) has joined #ceph
[18:27] * penguinRaider (~KiKo@69.163.33.182) Quit (Ping timeout: 480 seconds)
[18:30] * Xeon06 (~Nephyrin@7V7AAFNZI.tor-irc.dnsbl.oftc.net) Quit ()
[18:32] * skyrat (~skyrat@94.230.156.78) Quit (Quit: Leaving)
[18:34] * Plesioth (~skrblr@marcuse-2.nos-oignons.net) has joined #ceph
[18:36] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[18:37] * xarses (~xarses@64.124.158.100) has joined #ceph
[18:38] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[18:38] * thomnico (~thomnico@2a01:e35:8b41:120:489e:f6c:5c85:ab71) Quit (Quit: Ex-Chat)
[18:39] * deepthi (~deepthi@122.172.47.100) Quit (Quit: Leaving)
[18:43] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[18:43] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[18:44] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[18:53] * dgurtner (~dgurtner@178.197.233.142) Quit (Read error: Connection reset by peer)
[18:59] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[19:00] * garphy is now known as garphy`aw
[19:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[19:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[19:04] * Plesioth (~skrblr@4MJAAFXOG.tor-irc.dnsbl.oftc.net) Quit ()
[19:04] * xENO_ (~Heliwr@72.52.75.27) has joined #ceph
[19:04] <mnaser> MACscr that might explain it
[19:05] <mnaser> just dont reboot any other osd nodes
[19:05] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[19:05] <mnaser> or you're in for a bad time
[19:05] <MACscr> oh, they were all down
[19:05] <mnaser> ..happy friday?
[19:07] * dmick1 (~dmick@206.169.83.146) has joined #ceph
[19:08] <MACscr> health HEALTH_OK
[19:10] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) Quit (Quit: I'm going home!)
[19:14] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Quit: Leaving)
[19:16] * shylesh__ (~shylesh@45.124.225.97) Quit (Ping timeout: 480 seconds)
[19:18] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:18] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[19:19] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[19:19] * kefu_ (~kefu@114.92.122.74) Quit ()
[19:20] * kefu (~kefu@55.99.caa1.ip4.static.sl-reverse.com) Quit (Ping timeout: 480 seconds)
[19:24] * gauravbafna (~gauravbaf@122.167.118.114) has joined #ceph
[19:28] <rkeene> While doing a rbd export-diff I got: 2016-06-03 00:32:31.807648 7f9abb5d1700 -1 librbd::ImageWatcher: 0x43a79f0 image watch failed: 70941808, (107) Transport endpoint is not connected -- how fatal is that for the transfer ?
[19:28] <rkeene> (It was being done by a script so I didn't capture the exit code -- if anything writes to stderr though it's treated as if the command fails)
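A way to make such a script trust the exit status instead of any stderr output (a sketch; the dummy command in the demo stands in for the real `rbd export-diff` invocation):

```shell
# Treat the command as failed only on a non-zero exit status, not on mere
# stderr chatter (librbd watcher warnings go to stderr even on success).
# The dummy command below stands in for e.g.:
#   rbd export-diff --from-snap snap1 pool/image@snap2 /backup/image.diff
run_logged() {
    errfile=$(mktemp)
    "$@" 2>"$errfile"
    rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "command failed (rc=$rc), stderr was:" >&2
        cat "$errfile" >&2
    fi
    rm -f "$errfile"
    return "$rc"
}

# Simulate a command that warns on stderr but exits 0:
run_logged sh -c 'echo "librbd::ImageWatcher: image watch failed" >&2; exit 0' \
    && echo "treated as success"
```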
[19:33] * dgurtner (~dgurtner@178.197.233.142) has joined #ceph
[19:34] * xENO_ (~Heliwr@4MJAAFXP6.tor-irc.dnsbl.oftc.net) Quit ()
[19:34] * MJXII (~Cue@85.159.237.210) has joined #ceph
[19:34] * dmick1 (~dmick@206.169.83.146) has left #ceph
[19:36] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[19:36] * yatin (~yatin@203.212.245.90) has joined #ceph
[19:37] * kefu (~kefu@183.193.187.174) has joined #ceph
[19:37] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[19:39] * mykola (~Mikolaj@91.245.76.80) has joined #ceph
[19:40] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[19:40] * gauravba_ (~gauravbaf@122.172.230.37) has joined #ceph
[19:42] * agsha (~agsha@124.40.246.234) has joined #ceph
[19:44] * gauravbafna (~gauravbaf@122.167.118.114) Quit (Ping timeout: 480 seconds)
[19:47] * kefu (~kefu@183.193.187.174) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:50] * MrHeavy (~MrHeavy@pool-108-29-34-55.nycmny.fios.verizon.net) has joined #ceph
[19:53] * SamYaple (~SamYaple@162.209.126.134) Quit (Ping timeout: 480 seconds)
[19:57] <MACscr> ceph -s shows healthy, yet rbd gives segfault
[19:57] <MACscr> root@host1:~# rbd -p rbd ls
[19:57] <MACscr> 2016-06-03 12:54:10.277720 7fa8ce849700  0 -- 10.10.0.101:0/3065406290 >> 10.10.0.100:6804/10090 pipe(0x3a4dc00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x3a51ea0).fault
[19:57] <MACscr> any suggestions?
[19:58] <PoRNo-MoRoZ> my segfaults were linked to misconfigured monitors in ceph.conf
[19:58] <PoRNo-MoRoZ> i had configured a monitor that didn't actually exist
[19:58] <PoRNo-MoRoZ> what it is in your case - dunno )
[19:59] * gauravba_ (~gauravbaf@122.172.230.37) Quit (Remote host closed the connection)
[19:59] * agsha_ (~agsha@124.40.246.234) has joined #ceph
[20:00] * gauravbafna (~gauravbaf@122.172.230.37) has joined #ceph
[20:03] * agsha__ (~agsha@124.40.246.234) has joined #ceph
[20:04] * agsha (~agsha@124.40.246.234) Quit (Read error: No route to host)
[20:04] * MJXII (~Cue@4MJAAFXRF.tor-irc.dnsbl.oftc.net) Quit ()
[20:04] * Kyso_ (~xul@192.42.116.16) has joined #ceph
[20:06] * gauravbafna (~gauravbaf@122.172.230.37) Quit (Remote host closed the connection)
[20:06] * gauravbafna (~gauravbaf@122.172.230.37) has joined #ceph
[20:10] * agsha_ (~agsha@124.40.246.234) Quit (Ping timeout: 480 seconds)
[20:12] * doppelgrau (~doppelgra@dslb-088-072-094-168.088.072.pools.vodafone-ip.de) has joined #ceph
[20:15] * gauravba_ (~gauravbaf@122.172.196.136) has joined #ceph
[20:16] * gauravbafna (~gauravbaf@122.172.230.37) Quit (Read error: Connection reset by peer)
[20:17] * ircolle (~Adium@2601:285:201:633a:7408:6698:94af:c9e5) has joined #ceph
[20:18] * gauravbafna (~gauravbaf@122.172.199.62) has joined #ceph
[20:22] * matj345314 (~matj34531@element.planetq.org) has joined #ceph
[20:22] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[20:23] * gauravba_ (~gauravbaf@122.172.196.136) Quit (Ping timeout: 480 seconds)
[20:28] * gauravba_ (~gauravbaf@122.178.192.205) has joined #ceph
[20:29] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[20:29] * gauravba_ (~gauravbaf@122.178.192.205) Quit (Remote host closed the connection)
[20:29] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[20:30] * gauravbafna (~gauravbaf@122.172.199.62) Quit (Ping timeout: 480 seconds)
[20:34] * Kyso_ (~xul@4MJAAFXSS.tor-irc.dnsbl.oftc.net) Quit ()
[20:34] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Remote host closed the connection)
[20:36] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[20:37] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[20:38] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[20:42] * PoRNo-MoRoZ (~hp1ng@5.101.207.18) Quit ()
[20:45] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[20:45] * m0zes__ (~mozes@n117m02.cis.ksu.edu) Quit (Ping timeout: 480 seconds)
[20:48] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Ping timeout: 480 seconds)
[20:49] <rkeene> MACscr, Does it actually segfault ? You didn't show that
[20:51] <The_Ball> Does anyone know in which release Bluestore will be declared stable? Or is that not set yet?
[20:54] <s3an2> BlueStore is included as an experimental feature. The plan is for it to become the default backend in the K or L release.
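For reference, trying the experimental BlueStore backend at this point required explicitly opting in via ceph.conf. The fragment below is a hedged sketch of the Jewel-era opt-in (syntax as documented at the time; the setting name itself warns this is not for production data):

```ini
# ceph.conf -- sketch of the Jewel-era opt-in for experimental BlueStore.
# Test clusters only: the flag name is an explicit data-loss warning.
[global]
enable experimental unrecoverable data corrupting features = bluestore rocksdb

[osd]
osd objectstore = bluestore
```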
[20:57] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[20:57] * yatin (~yatin@203.212.245.90) Quit (Remote host closed the connection)
[21:00] * dgurtner (~dgurtner@178.197.233.142) Quit (Ping timeout: 480 seconds)
[21:02] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[21:04] * utugi______ (~kalleeen@tor-exit.mensrea.org) has joined #ceph
[21:05] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) has joined #ceph
[21:13] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Quit: Leaving)
[21:13] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[21:21] * kklimonda (~kklimonda@2001:41d0:a:f2b5::1) has joined #ceph
[21:24] <kklimonda> hi, I have a bunch of servers used as openstack compute nodes, and I'd like to evaluate ceph as a storage backend. The only caveat is that these servers can only take two disks each. I was thinking of one big HDD (3TB) + one SSD with 4 partitions: one for the system, another for swap (we are doing some slight memory overcommit, although not much), and two partitions for ceph: one
[21:24] <kklimonda> for the journal, and another for the ceph cache layer. Does that make sense, or would it put too much pressure on the SSD, especially hosting both the cache and the journal?
[21:24] * garphy`aw is now known as garphy
[21:34] * utugi______ (~kalleeen@06SAADGSU.tor-irc.dnsbl.oftc.net) Quit ()
[21:34] * Plesioth (~Fapiko@192.42.116.16) has joined #ceph
[21:37] * scg (~zscg@valis.gnu.org) Quit (Quit: Leaving)
[21:39] * scg (~zscg@valis.gnu.org) has joined #ceph
[21:43] * shaunm (~shaunm@cpe-192-180-17-174.kya.res.rr.com) Quit (Ping timeout: 480 seconds)
[21:43] * aboyle (~aboyle__@ardent.csail.mit.edu) Quit (Remote host closed the connection)
[21:45] * gauravbafna (~gauravbaf@122.178.192.205) has joined #ceph
[21:50] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[22:01] <The_Ball> s3an2, thanks
[22:02] * hellertime (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[22:03] * Racpatel (~Racpatel@2601:87:3:3601::4edb) Quit (Ping timeout: 480 seconds)
[22:04] * Plesioth (~Fapiko@06SAADGUN.tor-irc.dnsbl.oftc.net) Quit ()
[22:09] * mattbenjamin1 (~mbenjamin@12.118.3.106) Quit (Quit: Leaving.)
[22:12] * khyron (~khyron@200.77.224.239) has joined #ceph
[22:14] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:17] * Miouge (~Miouge@91.177.58.174) Quit (Quit: Miouge)
[22:19] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:19] * mykola (~Mikolaj@91.245.76.80) Quit (Quit: away)
[22:21] * Miouge (~Miouge@91.177.58.174) has joined #ceph
[22:27] * `10` (~10@69.169.91.14) Quit (Ping timeout: 480 seconds)
[22:32] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[22:34] * agsha__ (~agsha@124.40.246.234) Quit (Remote host closed the connection)
[22:34] * Wijk (~ain@Relay-J.tor-exit.network) has joined #ceph
[22:36] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[22:38] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[22:39] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[22:43] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Remote host closed the connection)
[22:46] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[22:47] * matj345314 (~matj34531@element.planetq.org) Quit (Quit: matj345314)
[22:58] * rendar (~I@host251-119-dynamic.57-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[23:02] * gauravbafna (~gauravbaf@122.178.192.205) Quit (Remote host closed the connection)
[23:03] * ircolle (~Adium@2601:285:201:633a:7408:6698:94af:c9e5) Quit (Quit: Leaving.)
[23:04] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[23:04] * Wijk (~ain@4MJAAFXZ8.tor-irc.dnsbl.oftc.net) Quit ()
[23:05] * scg (~zscg@valis.gnu.org) Quit (Quit: Leaving)
[23:12] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:14] * Miouge (~Miouge@91.177.58.174) Quit (Quit: Miouge)
[23:16] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[23:17] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[23:19] * bene (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[23:24] * rendar (~I@host251-119-dynamic.57-82-r.retail.telecomitalia.it) has joined #ceph
[23:25] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[23:28] * doppelgrau_ (~doppelgra@dslb-088-072-094-168.088.072.pools.vodafone-ip.de) has joined #ceph
[23:30] * doppelgrau (~doppelgra@dslb-088-072-094-168.088.072.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[23:30] * doppelgrau_ is now known as doppelgrau
[23:33] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[23:33] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:35] * Popz (~starcoder@watchme.tor-exit.network) has joined #ceph
[23:37] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) has joined #ceph
[23:42] * oliveiradan_ (~doliveira@137.65.133.10) Quit (Remote host closed the connection)
[23:45] * joelc (~joelc@cpe-24-28-78-20.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:54] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[23:58] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.