#ceph IRC Log


IRC Log for 2016-05-18

Timestamps are in GMT/BST.

[0:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[0:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[0:03] * cheese^ (~CoMa@4MJAAE63B.tor-irc.dnsbl.oftc.net) Quit ()
[0:03] * Grimhound (~vegas3@ded31663.iceservers.net) has joined #ceph
[0:03] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[0:12] * jmlowe (~Adium@c-68-45-14-99.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[0:14] * jmlowe (~Adium@c-68-45-14-99.hsd1.in.comcast.net) has joined #ceph
[0:16] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[0:33] * Grimhound (~vegas3@4MJAAE64I.tor-irc.dnsbl.oftc.net) Quit ()
[0:33] * Scymex (~Da_Pineap@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[0:35] * fsimonce (~simon@host128-29-dynamic.250-95-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:38] <vanham> Thanks m0zes
[0:38] <vanham> m0zes, thanks!
[0:42] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:45] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[0:49] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[0:50] * MentalRay_ (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[0:54] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[0:56] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[0:56] * debian112 (~bcolbert@24.126.201.64) Quit (Read error: No route to host)
[0:56] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[0:58] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[0:58] * MentalRay_ (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Ping timeout: 480 seconds)
[0:58] * debian112 (~bcolbert@24.126.201.64) Quit ()
[0:59] * natarej (~natarej@2001:8003:48e7:c400:25b8:8a18:3167:d3e0) has joined #ceph
[1:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[1:01] * haomaiwang (~haomaiwan@106.187.51.170) has joined #ceph
[1:03] * Scymex (~Da_Pineap@4MJAAE65C.tor-irc.dnsbl.oftc.net) Quit ()
[1:03] * puvo (~luckz@163.172.209.46) has joined #ceph
[1:05] * natarej_ (~natarej@101.188.30.168) Quit (Ping timeout: 480 seconds)
[1:06] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[1:07] * bene (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[1:10] * rendar (~I@host61-179-dynamic.27-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:17] <singler> hey, I am having a strange problem on CephFS. When I try to create a deep dir structure with "mkdir -p" it fails with "Permission denied" after a few levels
[1:17] <singler> I see nothing suspicious in logs
[1:17] <singler> it is running jewel 10.2.0
[1:18] <singler> and I was unable to repro it in test cluster
[1:19] <singler> is it possible that an upgraded infernalis -> jewel cluster may have some different cephfs settings than a new jewel one?
[1:19] * lcurtis_ (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[1:20] <singler> test cluster has "8=file layout v2" while problematic one doesn't
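For anyone comparing the two clusters the way singler describes, the MDS map's compatibility flags can be listed directly; a minimal sketch using the jewel-era command name:

    # List the MDS map compat flags ("8=file layout v2" appears in the incompat set on a fresh jewel fs)
    ceph mds dump | grep -i compat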
[1:21] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:23] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:24] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:33] * puvo (~luckz@06SAACND2.tor-irc.dnsbl.oftc.net) Quit ()
[1:33] * Quackie (~matx@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[1:34] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[1:36] <jiffe> I guess another question is how well will ceph handle billions of objects?
[1:41] * LeaChim (~LeaChim@host86-176-96-249.range86-176.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:44] * hybrid512 (~walid@161.240.10.109.rev.sfr.net) Quit (Remote host closed the connection)
[1:46] * oms101 (~oms101@p20030057EA02C300C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:47] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:52] * gregsfortytwo (~gregsfort@transit-86-181-132-209.redhat.com) Quit (Quit: bye!)
[1:53] * kevinc (~kevinc__@ip174-65-71-172.sd.sd.cox.net) has joined #ceph
[1:54] * oms101 (~oms101@p20030057EA02AC00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:55] * EinstCrazy (~EinstCraz@180.156.251.118) Quit (Remote host closed the connection)
[1:57] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[1:58] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[2:01] * haomaiwang (~haomaiwan@106.187.51.170) Quit (Remote host closed the connection)
[2:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[2:03] * Quackie (~matx@7V7AAEVDD.tor-irc.dnsbl.oftc.net) Quit ()
[2:03] * Catsceo (~skney@06SAACNGO.tor-irc.dnsbl.oftc.net) has joined #ceph
[2:09] * thansen (~thansen@17.253.sfcn.org) has joined #ceph
[2:15] * vanham (~vanham@208.76.55.202) Quit (Quit: Ex-Chat)
[2:16] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[2:25] * wushudoin_ (~wushudoin@2601:646:8202:5ed0:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[2:31] * brians (~brian@80.111.114.175) Quit (Read error: Connection reset by peer)
[2:32] * brians (~brian@80.111.114.175) has joined #ceph
[2:33] * Catsceo (~skney@06SAACNGO.tor-irc.dnsbl.oftc.net) Quit ()
[2:33] * sixofour (~skney@3.tor.exit.babylon.network) has joined #ceph
[2:39] * Brochacho (~alberto@c-73-45-127-198.hsd1.il.comcast.net) Quit (Quit: Brochacho)
[2:48] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[2:53] * murmur (~murmur@zeeb.org) Quit (Remote host closed the connection)
[2:54] * neurodrone_ (~neurodron@162.243.191.67) Quit (Quit: neurodrone_)
[2:55] * aj__ (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:57] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[2:57] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[2:58] * linuxkidd (~linuxkidd@38.sub-70-210-245.myvzw.com) Quit (Quit: Leaving)
[2:58] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[2:59] * murmur (~murmur@zeeb.org) has joined #ceph
[3:00] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) has joined #ceph
[3:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[3:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:01] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) Quit ()
[3:01] * georgem (~Adium@206.108.127.16) has joined #ceph
[3:03] * sixofour (~skney@06SAACNHZ.tor-irc.dnsbl.oftc.net) Quit ()
[3:03] * lmg (~andrew_m@7V7AAEVGM.tor-irc.dnsbl.oftc.net) has joined #ceph
[3:06] * kevinc (~kevinc__@ip174-65-71-172.sd.sd.cox.net) Quit (Quit: Leaving)
[3:07] * gregsfortytwo (~gregsfort@transit-86-181-132-209.redhat.com) has joined #ceph
[3:14] * georgem (~Adium@206.108.127.16) has left #ceph
[3:14] * sepa (~sepa@aperture.GLaDOS.info) Quit (Ping timeout: 480 seconds)
[3:19] * atheism (~atheism@182.48.117.114) has joined #ceph
[3:21] * Mutter (~Mutter@ool-4571f3a5.dyn.optonline.net) has joined #ceph
[3:22] * EinstCrazy (~EinstCraz@58.247.117.134) has joined #ceph
[3:24] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[3:24] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[3:25] * seosepa (~sepa@aperture.GLaDOS.info) has joined #ceph
[3:26] * Racpatel (~Racpatel@2601:87:3:3601::6f15) Quit (Quit: Leaving)
[3:26] * brians (~brian@80.111.114.175) Quit (Read error: Connection reset by peer)
[3:27] * brians (~brian@80.111.114.175) has joined #ceph
[3:29] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[3:29] * Mutter (~Mutter@ool-4571f3a5.dyn.optonline.net) Quit (Quit: Mutter: www.mutterirc.com)
[3:29] * Mutter (~Mutter@ool-4571f3a5.dyn.optonline.net) has joined #ceph
[3:32] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:33] * lmg (~andrew_m@7V7AAEVGM.tor-irc.dnsbl.oftc.net) Quit ()
[3:33] * swami1 (~swami@27.7.167.163) has joined #ceph
[3:33] * TGF (~Sun7zu@nooduitgang.schmutzig.org) has joined #ceph
[3:33] * EinstCrazy (~EinstCraz@58.247.117.134) Quit (Remote host closed the connection)
[3:34] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[3:35] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[3:37] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:37] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[3:37] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:39] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[3:40] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[3:42] * vbellur (~vijay@122.178.206.131) has joined #ceph
[3:45] <flaf> Hi. In an Infernalis cluster (little testing cluster which was off for a few days), it's impossible to restart the OSDs correctly. According to "ceph osd tree" all my 3 OSDs are down. I don't see why. After a restart of the daemon, the daemon is running. In the log, I see no clue. In fact, I have absolutely no idea why my cluster doesn't start => http://paste.alacon.org/41199
[3:47] <motk> cephx?
[3:48] <flaf> I don't think so, but how can I validate or confirm that?
[3:48] <motk> anything in ceph log?
[3:49] * thansen (~thansen@17.253.sfcn.org) Quit (Quit: Ex-Chat)
[3:50] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[3:50] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[3:50] <flaf> I don't see any clue in the log.
[3:51] <flaf> A curious thing is I see absolutely no traffic in the cluster network (with tcpdump).
[3:52] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[3:53] <flaf> but with netstat, I can see each OSD bound to the public IP and the cluster IP for each node.
[3:54] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[3:54] * vbellur (~vijay@122.178.206.131) Quit (Ping timeout: 480 seconds)
[3:54] <flaf> Ah in a log, I can see this warning concerning the mon: "-1 WARNING: 'mon addr' config option 172.31.10.1:0/0 does not match monmap file continuing with monmap configuration"
[3:55] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit ()
[3:55] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[3:55] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:55] * yanzheng (~zhyan@125.70.22.41) has joined #ceph
[3:56] * Mutter (~Mutter@ool-4571f3a5.dyn.optonline.net) Quit (Quit: Mutter: www.mutterirc.com)
[3:56] * Mutter (~Mutter@ool-4571f3a5.dyn.optonline.net) has joined #ceph
[3:56] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[3:57] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[3:57] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[3:58] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:58] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[3:58] <flaf> http://paste.alacon.org/41202 <= but here is my monmap. So, for me, mon addr in my ceph.conf matches perfectly with the monmap.
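As a side note, the monmap a running monitor is actually using can be pulled and printed for comparison with the 'mon addr' lines in ceph.conf (the output path is just an example):

    ceph mon getmap -o /tmp/monmap      # fetch the current monmap from the quorum
    monmaptool --print /tmp/monmap      # shows epoch, fsid and each mon's name/address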
[3:59] * Mutter (~Mutter@ool-4571f3a5.dyn.optonline.net) has left #ceph
[4:00] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[4:00] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[4:01] <jiffe> how are files stored in cephfs? 1 file=>1 object mapping, 1+ files per object, 1+ objects per file?
[4:01] * kefu (~kefu@183.193.181.153) has joined #ceph
[4:02] * shohn1 (~shohn@dslb-188-102-031-150.188.102.pools.vodafone-ip.de) has joined #ceph
[4:02] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[4:02] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[4:03] * TGF (~Sun7zu@06SAACNKJ.tor-irc.dnsbl.oftc.net) Quit ()
[4:03] * Kakeru (~visored@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[4:04] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[4:04] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[4:04] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[4:05] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[4:05] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[4:05] * yanzheng (~zhyan@125.70.22.41) Quit (Quit: ??????)
[4:06] * zhaochao (~zhaochao@125.39.112.6) has joined #ceph
[4:07] * shohn (~shohn@dslb-094-222-209-023.094.222.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[4:08] * monsted (~monsted@rootweiler.dk) Quit (Read error: Connection reset by peer)
[4:08] * monsted (~monsted@rootweiler.dk) has joined #ceph
[4:09] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[4:10] <jiffe> looks like the latter
[4:11] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[4:11] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:12] <m0zes> objects per file. the default layout says 1 object for every 4MB of each file.
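A small illustration of that mapping (the mount point, file name and data pool name are placeholders): the layout is exposed as a virtual xattr, and each 4MB chunk of a file becomes one RADOS object in the data pool.

    getfattr -n ceph.file.layout /mnt/cephfs/somefile
    # ceph.file.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data"
    # With those defaults a 10MB file is stored as three objects, named
    # <inode-in-hex>.00000000, <inode-in-hex>.00000001 and <inode-in-hex>.00000002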
[4:12] * allen_ga- (~allen_gao@58.213.72.214) Quit (Ping timeout: 480 seconds)
[4:13] <jiffe> are there any limitations to the number of objects I can have in ceph?
[4:18] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[4:20] <flaf> Ah... just a question: if I upgrade ceph from infernalis to jewel, is the content of the osd working dirs definitively changed, so that if I go back to before the upgrade via a snapshot (my nodes are little VMs) the osds are incompatible with ceph Infernalis?
[4:21] <m0zes> jiffe: not that I know of, although I've got 2824680804 objects in ceph right now.
[4:22] <jiffe> m0zes: thats good to know, I've got about 500M objects I'll be dumping in to start out with but planning to get up to about 10x that
[4:24] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[4:26] <flaf> Because the snapshot of my nodes covers the system partition, but not the OSD working dirs. After my upgrade to jewel, I restored the snapshot to an "infernalis" state but the OSD working dirs stayed in the Jewel state. I think that's my problem, but I'm not sure...
[4:27] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:29] <m0zes> I would not be surprised if the upgrade to jewel weren't downgrade compatible.
[4:30] * allen_gao (~allen_gao@58.213.72.214) has joined #ceph
[4:31] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[4:31] <flaf> m0zes: but I'm not sure the comparison is possible. In my case, all my system and ceph software are in the Infernalis state, but /var/lib/ceph/osd-$i/ are in a Jewel state.
[4:32] * grauzikas (grauzikas@78-56-222-78.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[4:33] * Kakeru (~visored@7V7AAEVJO.tor-irc.dnsbl.oftc.net) Quit ()
[4:33] * Hejt (~Curt`@marylou.nos-oignons.net) has joined #ceph
[4:36] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:38] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[4:41] * overclk (~quassel@117.202.96.84) has joined #ceph
[4:43] * natarej_ (~natarej@101.188.30.168) has joined #ceph
[4:43] <MentalRay> Anyone experimented with bluestore on Jewel?
[4:45] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[4:45] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[4:45] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit ()
[4:46] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[4:46] * natarej (~natarej@2001:8003:48e7:c400:25b8:8a18:3167:d3e0) Quit (Read error: Connection reset by peer)
[4:51] * kefu_ (~kefu@55.99.caa1.ip4.static.sl-reverse.com) has joined #ceph
[4:53] * kefu (~kefu@183.193.181.153) Quit (Ping timeout: 480 seconds)
[4:59] * swami1 (~swami@27.7.167.163) Quit (Ping timeout: 480 seconds)
[5:00] * shohn (~shohn@dslb-146-060-206-254.146.060.pools.vodafone-ip.de) has joined #ceph
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@106.187.51.170) has joined #ceph
[5:02] * haomaiwang (~haomaiwan@106.187.51.170) Quit (Remote host closed the connection)
[5:03] * Hejt (~Curt`@7V7AAEVK0.tor-irc.dnsbl.oftc.net) Quit ()
[5:03] * mason1 (~cryptk@65.19.167.131) has joined #ceph
[5:07] * shohn1 (~shohn@dslb-188-102-031-150.188.102.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[5:10] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[5:13] <flaf> I have definitively no idea my osd are all down. If I launch an osd in foreground, I have this: http://paste.alacon.org/41204 <= I see no error which can explain why the osd is down. The last message is curious "0 osd.0 166 done with init, starting boot process", but nothing after. The process is still up but the osd is down according to "ceph -s".
[5:14] <flaf> *no idea why my osd...
[5:19] * Vacuum_ (~Vacuum@i59F790ED.versanet.de) has joined #ceph
[5:21] * eternaleye (~Alex@174-21-123-146.tukw.qwest.net) Quit (Quit: Quit)
[5:22] * eternaleye (~Alex@174-21-123-146.tukw.qwest.net) has joined #ceph
[5:25] * Vacuum__ (~Vacuum@i59F79841.versanet.de) Quit (Ping timeout: 480 seconds)
[5:33] * mason1 (~cryptk@7V7AAEVLY.tor-irc.dnsbl.oftc.net) Quit ()
[5:33] * vegas3 (~darkid@kunstler.tor-exit.calyxinstitute.org) has joined #ceph
[5:39] * vbellur (~vijay@121.244.87.118) has joined #ceph
[5:51] <flaf> The same foreground start of an osd but with more verbosity => http://paste.alacon.org/41205. The last message is "osd.1 166 done with init, starting boot process, osd.1 166 We are healthy, booting". After that, nothing. So "healthy" ok but the OSD is still down... I don't understand.
[5:51] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[5:57] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[5:57] * shaunm (~shaunm@74.83.215.100) Quit (Ping timeout: 480 seconds)
[6:00] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[6:00] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:02] * KindOne_ (kindone@h25.17.117.75.dynamic.ip.windstream.net) has joined #ceph
[6:03] * vegas3 (~darkid@4MJAAE7FU.tor-irc.dnsbl.oftc.net) Quit ()
[6:03] * Rosenbluth (~oracular@192.42.116.16) has joined #ceph
[6:07] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[6:07] * KindOne (kindone@h180.146.186.173.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[6:07] * KindOne_ is now known as KindOne
[6:12] <badone> flaf: try starting it in foreground with high debug logging?
[6:13] <flaf> badone: in the paste, it's debug osd = 1/5, debug filestore = 1/5, debug journal = 1, debug monc = 5/20.
[6:13] <flaf> Not enough?
[6:14] <badone> debug osd 20
[6:15] <flaf> in fact, what is the syntax debug osd X/Y ?
[6:16] <badone> flaf: first number is the debug output that makes it to the logs
[6:16] <badone> second value is the log output that gets stored in memory and dumped out only if there's a crash
[6:17] <flaf> Ah ok, thx.
[6:17] <badone> just use 20 for now
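For reference, a couple of ways to apply the <log>/<memory> pair badone describes; osd.1 and the chosen subsystems are only examples:

    # On a running daemon, without restarting it
    ceph tell osd.1 injectargs '--debug-osd 20/20 --debug-filestore 20/20'
    # Persistently, in ceph.conf under [osd]:
    #   debug osd = 20/20     # first value goes to the log file, second is kept in memory and dumped on a crash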
[6:18] <flaf> http://paste.alacon.org/41206 <= with debug 20 I have this loop
[6:18] <flaf> badone: but my context is very specific.
[6:19] <flaf> In fact it's VMs with 2 disks one for / and one for /var/lib/ceph/osd/ceph-$id/. All was Ok in the state "infernalis".
[6:19] <flaf> I have made an upgrade to jewel (just to test)
[6:20] <flaf> After the upgrade all was OK, no problem.
[6:20] <flaf> Then, I wanted to retry an upgrade.
[6:20] <flaf> So I have used a snapshot.
[6:20] <flaf> to come back in the "Infernalis" state.
[6:21] <badone> looks like the snapshot was inconsistent
[6:21] <flaf> *But*... my snapshot is only for /, not for /var/lib/ceph/osd/ceph-$id
[6:22] <flaf> So, I had / in "Ok-infernalis" state, but /var/lib/ceph/osd/ceph-$id in "Ok-jewel" state.
[6:23] <badone> flaf: not ideal I suspect
[6:23] <flaf> Do you think it's a pathological state and it's normal to have undefined behavior?
[6:23] <badone> flaf: let's just say I doubt we'll ever support it
[6:25] <flaf> In fact, just for the glory, I would like to repair my testing cluster. In this specific case, is it a waste of time?
[6:26] * kefu_ (~kefu@55.99.caa1.ip4.static.sl-reverse.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:26] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:26] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[6:31] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: No route to host)
[6:31] * toastyde2th (~toast@pool-71-255-253-39.washdc.fios.verizon.net) has joined #ceph
[6:31] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[6:32] <badone> flaf: only you can decide that :)
[6:33] * Rosenbluth (~oracular@4MJAAE7G4.tor-irc.dnsbl.oftc.net) Quit ()
[6:33] * Rens2Sea (~xolotl@109.236.90.209) has joined #ceph
[6:34] <flaf> badone: I think I will give up and just restrict my energy for real cases. ;)
[6:34] <flaf> I have destroyed this cluster. ;)
[6:35] <flaf> (with my partial snapshot)
[6:35] <badone> flaf: good thinking IMHO ;)
[6:35] <flaf> Anyway, thx for your help badone and for the syntax "debug log". ;)
[6:37] <badone> flaf: np at all
[6:38] * toastyde1th (~toast@pool-71-255-253-39.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:40] * swami1 (~swami@49.32.0.244) has joined #ceph
[6:45] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[6:45] * vbellur (~vijay@121.244.87.118) has left #ceph
[6:50] * prallab (~prallab@216.207.42.137) has joined #ceph
[6:50] * kefu (~kefu@183.193.181.153) has joined #ceph
[6:54] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[6:58] * kefu (~kefu@183.193.181.153) Quit (Ping timeout: 480 seconds)
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[7:03] * Rens2Sea (~xolotl@7V7AAEVPC.tor-irc.dnsbl.oftc.net) Quit ()
[7:07] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:08] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[7:09] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:10] * EinstCrazy (~EinstCraz@58.247.117.134) has joined #ceph
[7:11] * eternaleye (~Alex@174-21-123-146.tukw.qwest.net) Quit (Quit: Quit)
[7:11] * prallab (~prallab@216.207.42.137) Quit (Remote host closed the connection)
[7:12] * eternaleye (~Alex@174-21-123-146.tukw.qwest.net) has joined #ceph
[7:12] * EinstCra_ (~EinstCraz@58.247.117.134) has joined #ceph
[7:13] * natarej (~natarej@2001:8003:48e0:f500:f9fb:b2e4:7d79:52c6) has joined #ceph
[7:16] * guampa (~g@0001bfc4.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:17] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[7:18] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[7:19] * natarej_ (~natarej@101.188.30.168) Quit (Ping timeout: 480 seconds)
[7:19] * EinstCrazy (~EinstCraz@58.247.117.134) Quit (Ping timeout: 480 seconds)
[7:20] * madkiss (~madkiss@ip5b414c62.dynamic.kabel-deutschland.de) has joined #ceph
[7:21] * eternaleye (~Alex@174-21-123-146.tukw.qwest.net) Quit (Quit: Quit)
[7:22] * eternaleye (~Alex@174-21-123-146.tukw.qwest.net) has joined #ceph
[7:24] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[7:25] * prallab (~prallab@216.207.42.137) has joined #ceph
[7:26] * madkiss (~madkiss@ip5b414c62.dynamic.kabel-deutschland.de) Quit (Read error: No route to host)
[7:28] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[7:28] * guampa (~g@lumumba.torservers.net) has joined #ceph
[7:39] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[7:40] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[7:40] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[7:46] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:54] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:00] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[8:03] * Xerati (~Corti^car@192.42.116.16) has joined #ceph
[8:06] * lurbs (user@uber.geek.nz) Quit (Remote host closed the connection)
[8:11] * lurbs (user@uber.geek.nz) has joined #ceph
[8:11] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Quit: Bye)
[8:12] * shylesh__ (~shylesh@121.244.87.118) has joined #ceph
[8:14] * eternaleye (~Alex@174-21-123-146.tukw.qwest.net) Quit (Quit: Quit)
[8:14] * eternaleye (~Alex@174-21-123-146.tukw.qwest.net) has joined #ceph
[8:18] * madkiss (~madkiss@2001:6f8:12c3:f00f:b521:45f5:653f:35fc) has joined #ceph
[8:18] * chopmann (~sirmonkey@ip4d149312.dynamic.kabel-deutschland.de) has joined #ceph
[8:21] * EinstCrazy (~EinstCraz@58.247.117.134) has joined #ceph
[8:29] * EinstCra_ (~EinstCraz@58.247.117.134) Quit (Ping timeout: 480 seconds)
[8:33] * Xerati (~Corti^car@06SAACNU0.tor-irc.dnsbl.oftc.net) Quit ()
[8:33] * arsenaali (~neobenedi@isoroku-tor-exit.itnowork.com) has joined #ceph
[8:38] * pabluk__ is now known as pabluk_
[8:38] * ade (~abradshaw@tmo-112-103.customers.d1-online.com) has joined #ceph
[8:44] * inf_b (~o_O@dot1x-232-241.wlan.uni-giessen.de) has joined #ceph
[8:46] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:52] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:57] * grauzikas (grauzikas@78-56-222-78.static.zebra.lt) has joined #ceph
[8:59] * karnan (~karnan@121.244.87.117) has joined #ceph
[9:03] * arsenaali (~neobenedi@4MJAAE7MT.tor-irc.dnsbl.oftc.net) Quit ()
[9:03] * Deiz (~EdGruberm@185.100.85.101) has joined #ceph
[9:05] * vbellur (~vijay@121.244.87.118) has joined #ceph
[9:06] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[9:07] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[9:07] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[9:10] * ledgr_ (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[9:12] * ledgr_ (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[9:12] * ledgr_ (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[9:12] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Read error: Connection reset by peer)
[9:16] * analbeard (~shw@support.memset.com) has joined #ceph
[9:18] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[9:18] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[9:23] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:25] * yyy_21 (~oftc-webi@114.247.245.138) has joined #ceph
[9:26] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[9:33] * Deiz (~EdGruberm@4MJAAE7NU.tor-irc.dnsbl.oftc.net) Quit ()
[9:33] * skney1 (~straterra@tsn109-201-154-178.dyn.nltelcom.net) has joined #ceph
[9:33] <T1w> what's the best way of ensuring NFS redundancy when I use ceph to serve RBDs with XFS filesystems inside to the NFS server and it's clients?
[9:34] <T1w> DRBD?
[9:37] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[9:37] <z3ro> hi all, i have a little problem: i try to add a new mds server, and after adding the new mds to the cluster the daemon doesn't start; in the log i see this: http://pastebin.com/RWqzNygJ . Google doesn't help :(
[9:40] <sep> T1w, i have never done this myself, but i like Sébastien Han's solution he writes about here https://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
[9:41] <T1w> sep: ah, yes.. that was the page I was looking for!
[9:41] * fsimonce (~simon@host128-29-dynamic.250-95-r.retail.telecomitalia.it) has joined #ceph
[9:41] <T1w> thanks!
[9:42] <sep> its from 2012 tho. things might have changed a bit.
[9:43] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[9:44] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[9:48] <eternaleye> T1w: Well, long-run I'd suspect nfs-ganesha with the CephFS backend to be the leading option, once multiple MDS goes stable and such
[9:49] <eternaleye> T1w: But if you're using XFS over RBD as your backend storage, it sounds more like you're basically treating Ceph as a traditional SAN
[9:49] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:50] <T1w> eternaleye: well, as soon as multiple active MDS become a viable option and CephFS can handle more than a few hundred files in a single directory without killing performance, then I'd be happy just to drop RBDs and put everything in CephFS
[9:50] <eternaleye> T1w: But anyway, I don't see why DRBD would be useful - multiple clients can access RBD volumes; using DRBD would just be adding a layer of indirection
[9:51] <T1w> but until that is possible I'm stuck with RBDs for redundancy and data safety
[9:51] <T1w> I might look like a traditional SAN, but the $-tag is far far from it
[9:51] <T1w> it even
[9:52] <eternaleye> Hence why I said "long run" for the CephFS option :P
[9:52] <T1w> ;)
[9:52] * ledgr_ (~ledgr@88-119-196-104.static.zebra.lt) Quit (Remote host closed the connection)
[9:52] <T1w> I have no problem with getting all NFS clients to use CephFS instead
[9:53] <T1w> it's just a matter of maturity for CephFS - Jewel is the first time CephFS has been production ready, and while it seems promising I'm not going to touch it for the first few minor updates (and possibly 1 or 2 major versions as well)
[9:54] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[9:54] * grauzikas (grauzikas@78-56-222-78.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[9:55] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[9:56] <sep> z3ro, i do not know but i would have investigated the part of your log that say "ERROR: missing keyring, cannot use cephx for authentication" ;; does your new mds have a keyring, is the mds aware of it and using it when starting ?
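In case it helps, this is roughly the manual way to give a new MDS its keyring; the id "mds1" is a placeholder and the caps shown are only the commonly used ones, not taken from z3ro's cluster:

    mkdir -p /var/lib/ceph/mds/ceph-mds1
    ceph auth get-or-create mds.mds1 mon 'allow profile mds' osd 'allow rwx' mds 'allow' \
        -o /var/lib/ceph/mds/ceph-mds1/keyring
    chown -R ceph:ceph /var/lib/ceph/mds/ceph-mds1    # jewel daemons run as the ceph user by default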
[10:00] <sep> are there any rules of thumb when figuring out how large a cache tier you need for erasure coded pool
[10:00] * jmlowe (~Adium@c-68-45-14-99.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[10:00] <T1w> leseb?
[10:00] <z3ro> yes new mds have keyring. http://pastebin.com/mL6F4HpD
[10:01] <z3ro> http://pastebin.com/16nHtaGz
[10:03] * skney1 (~straterra@4MJAAE7OS.tor-irc.dnsbl.oftc.net) Quit ()
[10:03] * Shesh (~Linkshot@edwardsnowden0.torservers.net) has joined #ceph
[10:04] <z3ro> any ideas?
[10:07] * rmart04 (~rmart04@support.memset.com) has joined #ceph
[10:08] <sep> z3ro, mds does start as root ? and not as ceph user ? i have not used mds
[10:09] <z3ro> start as root
[10:13] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[10:14] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:14] * grauzikas (grauzikas@78-56-222-78.static.zebra.lt) has joined #ceph
[10:21] * liver (~smuxi@47-33-82-28.dhcp.rvsd.ca.charter.com) has joined #ceph
[10:21] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:22] * liver (~smuxi@47-33-82-28.dhcp.rvsd.ca.charter.com) Quit (Read error: Connection reset by peer)
[10:23] * natarej_ (~natarej@101.188.30.168) has joined #ceph
[10:26] * natarej (~natarej@2001:8003:48e0:f500:f9fb:b2e4:7d79:52c6) Quit (Read error: Connection reset by peer)
[10:29] <leseb> T1w: kinda busy at the moment but what can I do for you?
[10:31] * mivaho (~quassel@2001:983:eeb4:1:c0de:69ff:fe2f:5599) has joined #ceph
[10:31] * evelu (~erwan@37.165.142.53) has joined #ceph
[10:31] <T1w> leseb: it's not important, but regarding your oldish blogpost about HA NFS backed by RBDs with XFS filesystems inside (https://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/ ) - can you tell me if you have had any major issues with this setup or if you know of better ways of getting some form of NFS redundancy (just active/passive)?
[10:33] <leseb> T1w: this setup has been running for years now and we haven't experienced any major issues, but since cephfs is out, this is where the game starts to change :)
[10:33] <T1w> we've been hit by nfsd freezes during heavy IO that resulted in rebooting the NFS server since it never came back - twice - and the few stacktraces I have says something about io wait (and my cluster has logged nothing during that period)
[10:33] * Shesh (~Linkshot@4MJAAE7PM.tor-irc.dnsbl.oftc.net) Quit ()
[10:33] * clusterfudge (~Helleshin@4MJAAE7QM.tor-irc.dnsbl.oftc.net) has joined #ceph
[10:33] <T1w> leseb: yeah, but I'm not going to be a first mover on cephfs - I cannot afford to experiment on that front
[10:34] <leseb> T1w: yes I understand, I guess it depends what you want to run on it
[10:34] <T1w> "afford" as in both level of service to systems, my own time and whatever hardware I need to just have a simple way of testing..
[10:35] <T1w> when I started looking at ceph some time ago cephfs seemed like a godsend, but the pitfalls were too large for us to go with that
[10:36] <T1w> so for now we're all in on RBDs served to clients via NFS
[10:36] <T1w> but it's nice to know that pacemaker setup works
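For context, the non-HA part of that blog's setup is basically the following (image name, size, subnet and mount point are placeholders); pacemaker then only has to move the virtual IP, the rbd map, the mount and the NFS export between the two heads:

    rbd create nfs-data --size 102400          # 100 GB image in the default 'rbd' pool
    rbd map nfs-data                           # -> /dev/rbd0 on the active NFS head
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /srv/nfs
    echo '/srv/nfs 192.168.0.0/24(rw,no_root_squash)' >> /etc/exports
    exportfs -ra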
[10:37] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[10:38] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[10:43] * mnathani2 (~mnathani_@192-0-149-228.cpe.teksavvy.com) has joined #ceph
[10:45] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[10:47] <sep> anyone know if there are any rules of thumb for scaling how large a cache tier you need for an erasure coded pool ? i assume it would depend on your working set. but there might be some boundaries one needs to stay inside ?
[10:48] * Vacuum__ (~Vacuum@88.130.217.34) has joined #ceph
[10:49] <ledgr> 200GB (600 including replication) of data uses only 8GB of cache. I think it needs tuning, but thats the numbers I have here.
[10:50] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:d139:ef47:92aa:73d8) has joined #ceph
[10:55] * Vacuum_ (~Vacuum@i59F790ED.versanet.de) Quit (Ping timeout: 480 seconds)
[10:56] <sep> ledgr, thanks for the numbers,
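For what it's worth, the cache tier's footprint is bounded by per-pool settings rather than a fixed ratio of the base pool; the pool name and values below are only examples:

    ceph osd pool set hot-cache target_max_bytes 1099511627776    # cap the tier at ~1 TiB
    ceph osd pool set hot-cache target_max_objects 1000000
    ceph osd pool set hot-cache cache_target_dirty_ratio 0.4      # start flushing at 40% dirty
    ceph osd pool set hot-cache cache_target_full_ratio 0.8       # start evicting at 80% full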
[10:57] * evelu (~erwan@37.165.142.53) Quit (Ping timeout: 480 seconds)
[10:58] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[10:58] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[10:59] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[11:00] * evelu (~erwan@37.165.142.53) has joined #ceph
[11:01] * LeaChim (~LeaChim@host86-176-96-249.range86-176.btcentralplus.com) has joined #ceph
[11:03] * clusterfudge (~Helleshin@4MJAAE7QM.tor-irc.dnsbl.oftc.net) Quit ()
[11:05] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[11:06] * swami2 (~swami@49.44.57.243) has joined #ceph
[11:07] * flisky (~Thunderbi@36.110.40.26) has joined #ceph
[11:12] * swami1 (~swami@49.32.0.244) Quit (Ping timeout: 480 seconds)
[11:15] * bara_ (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[11:19] <ieth0> Hello everyone, is there any way to set replication size on each object instead of pools? or any workarounds to replicate a hot object n times more than the pool's replication size?
[11:23] * zhaochao_ (~zhaochao@125.39.9.158) has joined #ceph
[11:23] <Kvisle> for the purpose of increasing possible throughput to that object?
[11:24] <Kvisle> because increasing replicas does not solve that for you -- io, including reads, is normally performed towards the primary OSD of the placement group's acting set.
[11:25] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[11:25] <s3an2> ieth0, what is the use case?
[11:28] * zhaochao (~zhaochao@125.39.112.6) Quit (Ping timeout: 480 seconds)
[11:28] * zhaochao_ is now known as zhaochao
[11:29] <ieth0> s3an2, sometimes you have a really hot object which you must serve to thousands of people in the first hours after upload, and after several days it's like other objects and you may decrease the replicas for that object. ( I know cache tiering might help in these cases )
[11:30] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:30] * rraja (~rraja@121.244.87.117) has joined #ceph
[11:32] <s3an2> The read of the object AFAIK will only happen from the primary OSD anyway.
[11:32] * yanzheng (~zhyan@125.70.22.41) has joined #ceph
[11:33] * Kottizen (~KeeperOfT@192.42.116.16) has joined #ceph
[11:34] <s3an2> If it is an RGW object - an upstream cache (varnish / CDN) may be an option to help content delivery
[11:35] * wjw-freebsd2 (~wjw@smtp.digiware.nl) has joined #ceph
[11:38] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[11:39] <z3ro> hi all, i have a little problem: i try to add a new mds server, and after adding the new mds to the cluster the daemon doesn't start; in the log i see this: http://pastebin.com/RWqzNygJ . Google doesn't help :(
[11:39] <z3ro> the mds has a keyring. http://pastebin.com/mL6F4HpD
[11:39] <z3ro> http://pastebin.com/16nHtaGz
[11:39] <z3ro> sorry for the double post
[11:40] <ieth0> s3an2, its RGW object, so if I have a pool with even 100 replica per object, it will only use the primary OSD for all requests and not the other 99 OSDs ?
[11:41] * TMM (~hp@185.5.122.2) has joined #ceph
[11:44] <s3an2> ieth0, The object will get read from the OSD that is Primary for the PG that the object is within - the other OSDs are not used for reads AFAIK - So adding extra replicas is not really going to help you here.
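A quick way to see that in practice: "ceph osd map" prints the PG and the acting set for a named object, with the primary listed first (pool and object names below are placeholders, and the output line is only illustrative):

    ceph osd map default.rgw.buckets.data my-hot-object
    # -> ... pg 11.3a8f42bc (11.bc) -> up ([4,9,2], p4) acting ([4,9,2], p4)
    # "p4" marks osd.4 as the primary; that is the OSD that serves the reads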
[11:45] <z3ro> the question is removed
[11:46] * evelu (~erwan@37.165.142.53) Quit (Ping timeout: 480 seconds)
[11:48] <ieth0> s3an2, so extra replicas are just for making extra copies for backup. any way to help performance without cache?
[11:49] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Remote host closed the connection)
[11:54] * evelu (~erwan@37.163.126.203) has joined #ceph
[11:58] * rdias (~rdias@2001:8a0:749a:d01:796d:44ac:1cad:318c) Quit (Ping timeout: 480 seconds)
[12:00] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[12:01] * rendar (~I@host84-39-dynamic.60-82-r.retail.telecomitalia.it) has joined #ceph
[12:01] * kefu_ (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:01] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[12:02] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[12:02] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[12:02] * rdias (~rdias@bl7-92-98.dsl.telepac.pt) has joined #ceph
[12:03] * Kottizen (~KeeperOfT@4MJAAE7SU.tor-irc.dnsbl.oftc.net) Quit ()
[12:03] * BlS (~Linkshot@Relay-J.tor-exit.network) has joined #ceph
[12:04] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[12:10] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[12:14] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) has joined #ceph
[12:14] * bara_ (~bara@nat-pool-brq-t.redhat.com) Quit (Read error: Connection reset by peer)
[12:14] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Read error: Connection reset by peer)
[12:14] * EinstCrazy (~EinstCraz@58.247.117.134) Quit (Remote host closed the connection)
[12:14] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[12:19] * kefu (~kefu@114.92.122.74) has joined #ceph
[12:20] * inf_b (~o_O@dot1x-232-241.wlan.uni-giessen.de) Quit (Ping timeout: 480 seconds)
[12:20] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:21] * johnhunter (~hunter@139.129.6.152) has joined #ceph
[12:32] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[12:33] * BlS (~Linkshot@06SAACN4D.tor-irc.dnsbl.oftc.net) Quit ()
[12:33] * Eric1 (~clarjon1@4MJAAE7UK.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:36] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[12:39] * Lokta (~Lokta@carbon.coe.int) has joined #ceph
[12:42] * ledgr_ (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[12:42] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Read error: Connection reset by peer)
[12:44] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:48] * atheism (~atheism@182.48.117.114) Quit (Ping timeout: 480 seconds)
[12:52] * zhaochao (~zhaochao@125.39.9.158) Quit (Ping timeout: 480 seconds)
[12:52] * allen_gao (~allen_gao@58.213.72.214) Quit (Ping timeout: 480 seconds)
[12:53] <Be-El> does ceph support monitoring via SNMP?
[12:54] <BranchPredictor> no
[12:57] * bene (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) has joined #ceph
[12:59] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[13:03] * Eric1 (~clarjon1@4MJAAE7UK.tor-irc.dnsbl.oftc.net) Quit ()
[13:04] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[13:05] * bene (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) Quit (Read error: No route to host)
[13:05] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:06] * bene (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) has joined #ceph
[13:06] * yyy_21 (~oftc-webi@114.247.245.138) Quit (Quit: Page closed)
[13:07] * Vidi (~Gibri@06SAACN6T.tor-irc.dnsbl.oftc.net) has joined #ceph
[13:09] * wjw-freebsd2 (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[13:10] <sep> when doing an ssd cache tier, do you normally add a few ssd's to the regular sata storage nodes, or would one normally have separate ssd-only nodes ? i imagine keeping the nodes separate makes the crushmap easier ?
[13:12] * ledgr_ is now known as ledgr
[13:14] <kiranos> how can I debug "clock skew detected "
[13:14] <kiranos> I've run date on all nodes and they are identical
[13:14] <ledgr> kiranos: install ntp on all servers
[13:14] <kiranos> its identical time on all
[13:15] <kiranos> they are using ntpd
[13:15] <PoRNo-MoRoZ> delta should be < 0.05 afaik
[13:15] <kiranos> do I need to restart something to clear it?
[13:15] <PoRNo-MoRoZ> mon_clock_drift_allowed = 1
[13:15] <PoRNo-MoRoZ> mon_clock_drift_warn_backoff = 30
[13:15] <PoRNo-MoRoZ> look for those
[13:15] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[13:16] * monsted (~monsted@rootweiler.dk) Quit (Read error: Connection reset by peer)
[13:16] * monsted (~monsted@rootweiler.dk) has joined #ceph
[13:16] <kiranos> http://pastebin.com/Zn5pcX89
[13:16] <kiranos> PoRNo-MoRoZ:
[13:17] <kiranos> its time in milliseconds
[13:17] * rakeshgm (~rakesh@121.244.87.118) Quit ()
[13:17] <kiranos> I did a for loop
[13:17] <kiranos> its identical :(
[13:17] <PoRNo-MoRoZ> looks close enough
[13:17] <PoRNo-MoRoZ> dunno, i'm newb :D
[13:17] <kiranos> so dont know why it says clock skew detected
[13:18] <kiranos> Might post in the mailinglist :)
[13:18] <kiranos> thanks
[13:20] <PoRNo-MoRoZ> try to grep mon_clock_drift from admin sockets
[13:20] <PoRNo-MoRoZ> and look your current values
[13:23] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[13:24] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[13:24] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:26] * rdas (~rdas@121.244.87.116) has joined #ceph
[13:26] * Racpatel (~Racpatel@2601:87:3:3601::6f15) has joined #ceph
[13:32] * chopmann (~sirmonkey@ip4d149312.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[13:33] <kiranos> "mon_clock_drift_allowed": "0.05",
[13:33] <kiranos> "mon_clock_drift_warn_backoff": "5",
[13:33] <kiranos> PoRNo-MoRoZ:
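Two things worth checking when the warning and "date" disagree like this (the mon id is an example): the values the mons are actually running with, and the sub-second offset NTP reports, which "date" cannot show:

    ceph daemon mon.$(hostname -s) config get mon_clock_drift_allowed   # via the mon's admin socket
    ntpq -p                      # the 'offset' column is in milliseconds
    ceph health detail           # names the monitor(s) considered skewed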
[13:34] * vbellur (~vijay@121.244.87.118) Quit (Ping timeout: 480 seconds)
[13:37] * Vidi (~Gibri@06SAACN6T.tor-irc.dnsbl.oftc.net) Quit ()
[13:38] * linjan (~linjan@86.62.112.22) has joined #ceph
[13:40] * johnhunter (~hunter@139.129.6.152) Quit (Ping timeout: 480 seconds)
[13:45] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[13:45] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[13:52] * daviddcc (~dcasier@LAubervilliers-656-1-16-160.w217-128.abo.wanadoo.fr) has joined #ceph
[13:55] * shohn1 (~shohn@ipservice-092-208-209-170.092.208.pools.vodafone-ip.de) has joined #ceph
[13:55] * prallab (~prallab@216.207.42.137) Quit (Remote host closed the connection)
[13:58] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[13:58] * prallab (~prallab@216.207.42.137) has joined #ceph
[13:58] * prallab (~prallab@216.207.42.137) Quit (Remote host closed the connection)
[13:58] * prallab (~prallab@216.207.42.137) has joined #ceph
[13:58] * prallab (~prallab@216.207.42.137) Quit (Remote host closed the connection)
[13:58] * vbellur (~vijay@122.172.244.115) has joined #ceph
[13:58] * vanham (~vanham@12.199.84.146) has joined #ceph
[13:59] <vanham> Good morning everyone!
[13:59] * shohn (~shohn@dslb-146-060-206-254.146.060.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[13:59] * shylesh__ (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[13:59] * prallab (~prallab@216.207.42.137) has joined #ceph
[14:01] <vanham> So, I'm having a few issues with CephFS that are most likely something to be fixed at the software level. Stuff like "mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())", or "mds0: Client bhs1-mail00-fe01: failing to respond to cache pressure", laggy mds, among other things
[14:01] * Nicola-1980 (~Nicola-19@x55b42959.dyn.telefonica.de) has joined #ceph
[14:03] <vanham> I have three separate clusters, on different servers. Cluster 00 has less than 0.1% of my processing; it's there just to help me test new things, and it also gets all the problems.
[14:03] <vanham> mds will be laggy, etc.
[14:03] <vanham> Ceph version is 10.2.1, kernel version is 4.4.0-18-generic, running on Ubuntu 14.04.4 LTS
[14:05] * inf_b (~o_O@dot1x-232-241.wlan.uni-giessen.de) has joined #ceph
[14:06] <vanham> I'm using Cache Tiering for the main data pool, although most of my IO is completely ignoring it. I have an SSD data pool where I put some of the small (< 1MB) database files
[14:07] <vanham> So I'm also using file layout
[14:07] <vanham> (not dir layout yet)
[14:07] * Nijikokun (~MatthewH1@109.201.133.100) has joined #ceph
[14:08] * prallab (~prallab@216.207.42.137) Quit (Ping timeout: 480 seconds)
[14:08] * guampa (~g@0001bfc4.user.oftc.net) Quit (Remote host closed the connection)
[14:09] * guampa (~g@tor2.asmer.com.ua) has joined #ceph
[14:19] * flisky (~Thunderbi@36.110.40.26) Quit (Quit: flisky)
[14:20] * vbellur (~vijay@122.172.244.115) Quit (Remote host closed the connection)
[14:21] * pabluk_ is now known as pabluk__
[14:21] * vbellur (~vijay@122.172.244.115) has joined #ceph
[14:25] * jordanP (~jordan@92.103.184.178) has joined #ceph
[14:27] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) has joined #ceph
[14:31] * shohn (~shohn@ipservice-092-208-209-170.092.208.pools.vodafone-ip.de) has joined #ceph
[14:31] * shohn1 (~shohn@ipservice-092-208-209-170.092.208.pools.vodafone-ip.de) Quit (Read error: Connection reset by peer)
[14:33] * huangjun (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[14:34] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:36] <Lokta> Hi everyone ! i've had issues with the cephfs client: when mounting, the server just died. has anyone else had this issue ? It was running debian 8 and kernel v3.16.0-4
[14:37] <PoRNo-MoRoZ> kiranos did it helped you ?
[14:37] * Nijikokun (~MatthewH1@4MJAAE7X4.tor-irc.dnsbl.oftc.net) Quit ()
[14:37] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[14:37] <PoRNo-MoRoZ> server died ? turned off ?
[14:37] <PoRNo-MoRoZ> can't boot now ?
[14:38] <Lokta> stacktraced
[14:38] <Lokta> didn't respond to anything
[14:39] <Be-El> which "server"? osd? mds? mon? client?
[14:39] <Lokta> i had to hard reboot
[14:39] <Lokta> client
[14:39] <vanham> Lokta, what ceph version are you using?
[14:39] <PoRNo-MoRoZ> ah
[14:39] <Be-El> Lokta: and whatever you want to do with cephfs...use a recent kernel
[14:39] <PoRNo-MoRoZ> dunno, i'd install kernel from backports
[14:39] <Lokta> same
[14:39] <Lokta> but i'm having a hard time convincing the boss to use bpo
[14:40] <Lokta> so if there is any version I should avoid
[14:40] <PoRNo-MoRoZ> you can try build kernel from sources
[14:40] <vanham> Yeah, I jumped ship from Debian to Ubuntu Server because they have support for recent kernel versions on old distros. Here I'm using 14.04.4 LTS with kernel 4.4
[14:40] <Lokta> since i'm using 4.5 everything works perfect
[14:40] <vanham> You'll probably want to think about that in the future
[14:40] <vanham> Ok, sorry, you just said 4.5
[14:41] <Lokta> i'm drafting the migration plan for the production servers
[14:41] <vanham> 3.16
[14:41] <vanham> sorry
[14:41] <Lokta> atm i'm testing on dedicated servers so it's not much of an issue
[14:41] <Be-El> Lokta: use netconsole to get the kernel error message
[14:41] * bene (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[14:41] <Lokta> nothing was logged
[14:42] <Lokta> even the usb was responding
[14:42] <vanham> Lokta, what ceph version are you on?
[14:42] <Lokta> network was down
[14:42] <Lokta> it was the latest 0.9 back then
[14:42] <Lokta> now i'm using 10.2.1
[14:43] * overclk (~quassel@117.202.96.84) Quit (Read error: Connection reset by peer)
[14:43] <Lokta> Lokta> even the usb wasn't responding*
[14:43] <Lokta> noone had this issue ?
[14:44] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[14:45] <PoRNo-MoRoZ> kvm / ipmi ?
[14:46] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[14:46] <vanham> Ok, if you are running with the latest ceph tunables, it requires kernel 4.5 on the clients. You can set it to use Firefly tunables (ugh!) if you want kernels that old to access it.
[14:46] <vanham> Here I'm using Hammer tunables, and they only require kernel 4.1
[14:46] <vanham> http://docs.ceph.com/docs/master/rados/operations/crush-map/#tunables
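A sketch of the two commands that matter there, assuming old-kernel clients have to keep mapping/mounting (switching profiles triggers data movement, so plan for rebalancing):

    ceph osd crush show-tunables          # shows the profile currently in effect
    ceph osd crush tunables hammer        # drop back so kernel >= 4.1 clients can connect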
[14:47] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[14:48] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[14:48] * itamarl (~itamarl@194.90.7.244) has joined #ceph
[14:48] <vanham> When any dev is available, I'm getting mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary()) on CephFS, among other things
[14:49] <vanham> Be-El, I sent that cache tiering question to the list, no one answered. I guess it's a hard problem after all.
[14:49] <Lokta> Exactly what i was looking for, thank you !
[14:49] <vanham> Lokta, you're welcome!
[14:50] <vanham> Be-El, now I'm using file layout at least to the files that I'm sure are constantly changing
[14:51] <Be-El> vanham: keep in mind that the xattr changes only apply to newly created files
[14:51] <vanham> Be-El, yeah, I had to do a whole migration here with some downtime
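For reference, the layout change being discussed is done with virtual xattrs; new files under the directory go to the new pool, existing files have to be copied to move (pool, directory and mount point below are placeholders):

    setfattr -n ceph.dir.layout.pool -v cephfs-ssd /mnt/cephfs/db    # new files under db/ land in the cephfs-ssd pool
    getfattr -n ceph.dir.layout /mnt/cephfs/db                       # confirm the layout new files will inherit
    # the pool must already be a cephfs data pool (ceph fs add_data_pool / ceph mds add_data_pool, depending on version)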
[14:51] * overclk (~quassel@117.202.96.84) has joined #ceph
[14:54] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[14:57] <rmart04> Hi Guys, As always, a difficult Question. Bear with me, I'm running OpenStack Kilo with ceph infernalis backending Nova. I'm running some integration tests; one of the tests starts an instance, logs into it, then "hard" reboots it and tries to log in again. At this point, the keys that were injected at the start are missing. The cloud init data imported on the first boot isn't being persisted. I am able to replicate this manually, however if I run
[14:57] <rmart04> "sync" first, the data persists. Is it possible this is in relation to the "writeback cache" on the hypervisor (configured in ceph.conf) and if so, why would it not be syncing?
[14:57] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving.)
[15:00] * wes_dillingham_ (~wes_dilli@65.112.8.197) has joined #ceph
[15:02] <vanham> rmart04, I'm not a ceph dev, but doesn't seem to be a Ceph issue
[15:02] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[15:03] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[15:03] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[15:03] <rmart04> Appreciate it was a long shot! As with most issues of this type, so many OpenStack <> Ceph layers
[15:03] <rmart04> thanks
[15:04] <vanham> rmart04, you seem to be talking about a missing ssh authorized_keys
[15:04] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Ping timeout: 480 seconds)
[15:04] * wes_dillingham_ is now known as wes_dillingham
[15:04] <vanham> If Ceph were to corrupt your data, your instance wouldn't boot up!
[15:04] <vanham> I would check to see how this instance creation and SSH key insertion works first
[15:05] <vanham> Your ceph status is saying that your health is OK, right?
[15:05] * georgem (~Adium@206.108.127.16) has joined #ceph
[15:05] <rmart04> I just had a feeling that the keys were injected, but never made it out of the writeback cache on the hypervisor (configured in ceph.conf - part of ceph-rbd afaik)
[15:05] <rmart04> The keys work during the first boot, just not after the hard reboot.
[15:06] <rmart04> Really, the test seems a bit silly, but it's part of the integration suite to join the OpenStack marketplace
[15:06] <vanham> Cool man!
[15:07] <Be-El> rmart04: does "hard reboot" result in a kill -9 of the VM process (assuming qemu/kvm)?
[15:07] * z3ro (~kvirc@77.95.128.125) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[15:07] <vanham> I would look into how this key is injected. I really don't understand how that works
[15:07] <rmart04> I'd imagine so, but I am not certain. I will have a look around. would that result in the cache being dropped, rather than flushed back into the cluster?
[15:08] * jquinn (~jquinn@nat-pool-bos-t.redhat.com) has joined #ceph
[15:08] <vanham> Any kill -9 would discard anything
[15:08] <rmart04> to be honest, I'm using metadata not cloud-drive, so it's not injected really, it's queried through http
[15:08] <Be-El> rmart04: the cache is a matter of librbd (and thus user space, not kernel space). if you kill the process, the cache is lost
[15:08] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[15:08] <Be-El> s/kill/kill -9/
[15:09] <Be-El> i'm not sure how librbd handles SIGTERM
[15:09] <rmart04> OK, thanks, thats understandable
[15:10] <vanham> rmart04, in any case, imo, anything you do before starting the vm, like any kind of injection / config change, should have been saved before starting the vm. that's why you need to understand how that process works
[15:12] <Be-El> vanham: afaik the injection happens within the VM, either via an extra virtual drive or via http calls
[15:12] * atheism (~atheism@106.120.8.227) has joined #ceph
[15:13] <rmart04> vanham: the instance is started, and the last part of the OS boot starts a cloud init process; this essentially scans a bunch of http content from a 169.254.x.x address. The ssh keys are present there, and are written to .ssh/auth…keys. As I said, we can log into the instance at that point. The test suite then instantly hard reboots the instance, and it's during the second boot that the keys are missing; that data is not persisted.
[15:13] <Be-El> vanham: otherwise the hypervisor has to know each and every possible file system + location for injection for every possible OS
[15:13] <rmart04> as Be-El says, you can also use a couple of other methods, such as cloud drive.
[15:14] <vanham> Got it
[15:14] <rmart04> Maybe I could change the max dirty age to a really low value ?
[15:14] <vanham> Yeah, if that script only runs once, then you are losing any changes
[15:15] <vanham> Can you add a sync call to the end of the cloud init script?
[15:15] * briuc (~oftc-webi@178.237.98.13) has joined #ceph
[15:15] <briuc> hi all,
[15:16] <briuc> I'm trying Calamari on my functional server.
[15:16] <vanham> hello briuc
[15:16] <rmart04> possibly, maybe it's a bug ticket for the cloud-init peeps; problem is cloud-init comes in pretty much every cloud OS, I don't want to be customising images if possible
[15:16] <briuc> servers*
[15:16] <briuc> but when I go to the web page of Calamari, I only have this message:
[15:16] <briuc> Please use ceph-deploy to create a cluster; please see the Inktank Ceph Enterprise
[15:17] <vanham> rmart04, got it
[15:17] <briuc> and also this one :
[15:17] <briuc> "3 Ceph servers are connected to Calamari, but no Ceph cluster has been created yet. "
[15:17] <briuc> which is not true, I already have one working cluster with CephFS and S3 object data on it
[15:17] <briuc> any way to configure it?
[15:19] <Be-El> rmart04: if you can afford it, disable writeback caching completely. it will also be a show stopper if a host itself crashes
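Disabling the librbd writeback cache as Be-El suggests is a client-side ceph.conf change on the hypervisors; a minimal sketch, assuming the usual [client] section:

    [client]
    # disable the librbd writeback cache; picked up by newly started qemu processes
    rbd cache = false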
[15:19] <rmart04> Hey briuc, it's been a while, but from what I remember you need to configure calamari-client style packages on your nodes
[15:19] <rmart04> salt-/diamond stuff from what I remember
[15:19] <briuc> this is what I did
[15:19] <briuc> salt-minion + diamond .deb packages
[15:19] <rmart04> OK, have you accepted each node on your master?
[15:19] <rmart04> (keys)
[15:20] <briuc> yes, it is detected by Calamari, as it says that there are 3 ceph servers
[15:20] <briuc> (and I ran a command like salt -A to accept the keys)
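For reference, the key handling briuc describes is normally done with salt-key on the Calamari master; a quick sanity-check sketch (standard Salt commands, not taken from this log):

    salt-key -L          # list accepted and pending minion keys
    salt-key -A          # accept all pending keys
    salt '*' test.ping   # confirm the minions respond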
[15:21] <rmart04> OK cool, sounds like you're well on the way then. I remember my first time configuring it being quite a pain
[15:22] <rmart04> be-el, I'll give it a go, any idea if the ceph.conf will be re-read on starting the next instance, or if I need to restart stuff?
[15:22] <rmart04> I'd imagine it would be read when nova-compute starts
[15:22] <rmart04> I'll try
[15:22] <rmart04> thanks for the help
[15:22] <Be-El> rmart04: it should be read at the next instance start, since it starts a new qemu process
[15:23] <rmart04> sure that makes sense
[15:23] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[15:23] <briuc> thanks rmart04, but I have explored everything I found online, and it seems it doesn't work on the latest version (2015.XX)
[15:24] <rmart04> from what I remember, the vagrant build was the biggest pita! You've made it this far, just keep a keen eye on the logs
[15:24] <briuc> As with the rest of Ceph, there is a bit of a lack of explanation...
[15:24] <briuc> I'll continue this way then
[15:25] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[15:27] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[15:27] * pabluk__ is now known as pabluk_
[15:32] <rmart04> Just fyi be-el, changing the cache to false didn't seem to make any difference. There's an option enabled in nova-api, disk_cachemodes="network=writeback", that might be the cause. I'll drop that out to test
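The disk_cachemodes option rmart04 mentions is a nova configuration setting; a minimal sketch of the two states being compared, assuming the usual [libvirt] section of nova.conf:

    [libvirt]
    # current setting: qemu opens RBD volumes with cache=writeback
    disk_cachemodes = "network=writeback"
    # test setting: comment it out, or use "network=none" so writes are not cached
    # disk_cachemodes = "network=none"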
[15:33] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:34] * briuc (~oftc-webi@178.237.98.13) Quit (Quit: Page closed)
[15:37] * FierceForm (~fauxhawk@ekumen.nos-oignons.net) has joined #ceph
[15:38] * vbellur (~vijay@122.172.244.115) Quit (Ping timeout: 480 seconds)
[15:40] * vbellur (~vijay@122.178.228.61) has joined #ceph
[15:43] * grauzikas (grauzikas@78-56-222-78.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[15:45] * jordanP (~jordan@92.103.184.178) Quit (Ping timeout: 480 seconds)
[15:49] * pabluk_ (~pabluk@2a01:e34:ec16:93e0:100e:7501:8678:56ef) Quit (Ping timeout: 480 seconds)
[15:50] * jordanP (~jordan@92.103.184.178) has joined #ceph
[15:52] * inf_b (~o_O@dot1x-232-241.wlan.uni-giessen.de) Quit (Ping timeout: 480 seconds)
[15:52] * inf_b (~o_O@dot1x-232-241.wlan.uni-giessen.de) has joined #ceph
[15:54] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:54] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[15:56] * Miouge (~Miouge@94.136.92.20) Quit ()
[15:57] * pabluk_ (~pabluk@2a01:e34:ec16:93e0:d9a8:e8de:adcf:c395) has joined #ceph
[15:58] * grauzikas (grauzikas@78-56-222-78.static.zebra.lt) has joined #ceph
[16:00] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[16:01] * yanzheng (~zhyan@125.70.22.41) Quit (Quit: This computer has gone to sleep)
[16:02] * sankarshan (~sankarsha@121.244.87.117) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[16:07] * FierceForm (~fauxhawk@7V7AAEV99.tor-irc.dnsbl.oftc.net) Quit ()
[16:07] * DoDzy (~Blueraven@185.100.87.82) has joined #ceph
[16:08] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving.)
[16:10] * kefu (~kefu@114.92.122.74) has joined #ceph
[16:13] * jordanP (~jordan@92.103.184.178) Quit (Ping timeout: 480 seconds)
[16:14] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[16:14] * rdas (~rdas@121.244.87.116) has joined #ceph
[16:15] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:16] * jowilkin (~jowilkin@nat-pool-rdu-u.redhat.com) has joined #ceph
[16:17] * huangjun (~kvirc@117.152.65.122) has joined #ceph
[16:18] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[16:21] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:22] * linjan (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[16:23] * swami2 (~swami@49.44.57.243) Quit (Quit: Leaving.)
[16:24] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) has joined #ceph
[16:26] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving.)
[16:29] * ade (~abradshaw@tmo-112-103.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[16:31] * ade (~abradshaw@GK-84-46-90-18.routing.wtnet.de) has joined #ceph
[16:33] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[16:34] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[16:35] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[16:35] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[16:35] * kefu (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[16:36] * kefu (~kefu@114.92.122.74) has joined #ceph
[16:36] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[16:37] * Mousey (~yuastnav@87.236.215.83) has joined #ceph
[16:38] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Remote host closed the connection)
[16:39] * inf_b (~o_O@dot1x-232-241.wlan.uni-giessen.de) Quit (Remote host closed the connection)
[16:39] * atheism (~atheism@106.120.8.227) Quit (Ping timeout: 480 seconds)
[16:40] * dneary (~dneary@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:40] * DoDzy (~Blueraven@4MJAAE74I.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[16:41] * vata1 (~vata@207.96.182.162) has joined #ceph
[16:42] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[16:44] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) has joined #ceph
[16:44] * The_Ball (~pi@20.92-221-43.customer.lyse.net) Quit (Read error: Connection reset by peer)
[16:45] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:45] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) has joined #ceph
[16:46] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[16:47] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:47] * kefu (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[16:48] * kefu (~kefu@114.92.122.74) has joined #ceph
[16:49] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[16:49] * vata1 (~vata@207.96.182.162) Quit (Read error: Connection reset by peer)
[16:49] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[16:50] * vata1 (~vata@207.96.182.162) has joined #ceph
[16:51] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys! (??????????????????? ?????????)
[16:54] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:58] * ircolle (~Adium@2601:285:201:633a:69f5:9fe6:e942:e5bf) Quit (Quit: Leaving.)
[16:58] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[16:58] * bara (~bara@213.175.37.12) has joined #ceph
[16:59] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:00] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[17:00] * allaok (~allaok@machine107.orange-labs.com) Quit (Quit: Leaving.)
[17:01] * The_Ball (~pi@20.92-221-43.customer.lyse.net) has joined #ceph
[17:01] * wushudoin (~wushudoin@38.99.12.237) has joined #ceph
[17:03] * linuxkidd (~linuxkidd@38.sub-70-210-245.myvzw.com) has joined #ceph
[17:03] * linuxkidd is now known as linuxkidd|trng
[17:03] * linuxkidd|trng is now known as linuxkidd
[17:03] * jordanP (~jordan@92.103.184.178) has joined #ceph
[17:05] * evelu (~erwan@37.163.126.203) Quit (Read error: Connection reset by peer)
[17:05] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:06] * Brochacho (~alberto@c-73-45-127-198.hsd1.il.comcast.net) has joined #ceph
[17:07] * Mousey (~yuastnav@06SAACOHM.tor-irc.dnsbl.oftc.net) Quit ()
[17:07] * vegas3 (~Sami345@91.109.29.120) has joined #ceph
[17:09] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[17:15] <vanham> When a dev is available: I'm having an assertion failure on my ceph-mds happening every few minutes on a production cluster
[17:15] <vanham> I can help debug the problem
[17:20] * evelu (~erwan@37.163.126.203) has joined #ceph
[17:22] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:27] * eternaleye (~Alex@174-21-123-146.tukw.qwest.net) Quit (Ping timeout: 480 seconds)
[17:32] * karnan (~karnan@121.244.87.117) has joined #ceph
[17:36] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[17:37] * vegas3 (~Sami345@7V7AAEWEZ.tor-irc.dnsbl.oftc.net) Quit ()
[17:40] * eternaleye (~Alex@66.87.138.61) has joined #ceph
[17:44] * eternaleye_ (~Alex@66-87-139-218.pools.spcsdns.net) has joined #ceph
[17:45] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[17:45] * bara (~bara@213.175.37.12) Quit (Quit: Bye guys! (??????????????????? ?????????)
[17:47] * Lokta (~Lokta@carbon.coe.int) Quit (Ping timeout: 480 seconds)
[17:47] * eternaleye__ (~Alex@66-87-139-163.pools.spcsdns.net) has joined #ceph
[17:47] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[17:48] * eternaleye (~Alex@66.87.138.61) Quit (Ping timeout: 480 seconds)
[17:50] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[17:51] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[17:52] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[17:53] * eternaleye_ (~Alex@66-87-139-218.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[17:54] * itamarl (~itamarl@194.90.7.244) Quit (Quit: itamarl)
[17:55] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (Quit: leaving)
[17:55] * cathode (~cathode@50.232.215.114) has joined #ceph
[17:55] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[17:56] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:04] * Miouge (~Miouge@h-4-155-222.a163.priv.bahnhof.se) has joined #ceph
[18:07] * Aramande_ (~rapedex@exit1.ipredator.se) has joined #ceph
[18:08] * Miouge (~Miouge@h-4-155-222.a163.priv.bahnhof.se) Quit ()
[18:09] * Miouge (~Miouge@h-4-155-222.a163.priv.bahnhof.se) has joined #ceph
[18:12] * jamespd_ (~mucky@mucky.socket7.org) Quit (Ping timeout: 480 seconds)
[18:15] * TMM (~hp@185.5.122.2) Quit (Quit: Ex-Chat)
[18:16] * jamespd (~mucky@mucky.socket7.org) has joined #ceph
[18:18] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[18:19] * Skaag (~lunix@65.200.54.234) has joined #ceph
[18:20] * eternaleye (~Alex@66-87-139-139.pools.spcsdns.net) has joined #ceph
[18:22] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[18:25] * eternaleye__ (~Alex@66-87-139-163.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[18:26] <diq> Anyone know why cephfs might report a "no space left on device" error message when the FS is only at 73% usage?
[18:29] * pabluk_ is now known as pabluk
[18:29] * eternaleye (~Alex@66-87-139-139.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[18:29] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys! (??????????????????? ?????????)
[18:33] * eternaleye (~Alex@66.87.139.127) has joined #ceph
[18:35] * huangjun (~kvirc@117.152.65.122) Quit (Ping timeout: 480 seconds)
[18:35] * winston-d_ (uid98317@id-98317.richmond.irccloud.com) has joined #ceph
[18:37] * Aramande_ (~rapedex@06SAACOMY.tor-irc.dnsbl.oftc.net) Quit ()
[18:37] * Deiz (~Averad@193.90.12.86) has joined #ceph
[18:40] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:40] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Remote host closed the connection)
[18:44] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[18:46] * jordanP (~jordan@92.103.184.178) Quit (Ping timeout: 480 seconds)
[18:47] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[18:48] * daviddcc (~dcasier@LAubervilliers-656-1-16-160.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[18:55] * krypto (~krypto@G68-90-105-143.sbcis.sbc.com) has joined #ceph
[18:55] <diq> it seems like the CephFS driver will show the entire FS as full when a single OSD gets "near_full"?
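A few standard commands for confirming that picture and finding the offending OSD (not taken from this log, but the usual first steps):

    ceph health detail                  # names any OSDs flagged nearfull/full
    ceph osd df                         # per-OSD utilisation; look for outliers
    ceph osd reweight-by-utilization    # one way to shift data off overly full OSDs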
[18:57] * rmart04 (~rmart04@support.memset.com) Quit (Quit: rmart04)
[19:02] * mykola (~Mikolaj@91.225.201.82) has joined #ceph
[19:02] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[19:03] * eternaleye_ (~Alex@50.245.141.73) has joined #ceph
[19:07] * eternaleye (~Alex@66.87.139.127) Quit (Ping timeout: 480 seconds)
[19:07] * Deiz (~Averad@4MJAAE8BZ.tor-irc.dnsbl.oftc.net) Quit ()
[19:07] * Kyso_1 (~click@edwardsnowden2.torservers.net) has joined #ceph
[19:09] * pabluk is now known as pabluk_
[19:10] * kefu is now known as kefu|afk
[19:10] * madkiss1 (~madkiss@ip5b414c62.dynamic.kabel-deutschland.de) has joined #ceph
[19:12] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[19:13] * jowilkin (~jowilkin@nat-pool-rdu-u.redhat.com) Quit (Ping timeout: 480 seconds)
[19:14] * erhudy (uid89730@id-89730.ealing.irccloud.com) has joined #ceph
[19:15] <erhudy> when an OSD log reports "waiting for subops", does that mean that OSD has already committed a write and is waiting for the other named OSDs to reply back that they have also committed?
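One way to inspect where those waits are spent is the OSD admin socket; a sketch (replace osd.0 with the OSD in question):

    # per-op event timelines, including "waiting for subops from ..." entries
    ceph daemon osd.0 dump_historic_ops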
[19:16] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[19:17] * madkiss (~madkiss@2001:6f8:12c3:f00f:b521:45f5:653f:35fc) Quit (Ping timeout: 480 seconds)
[19:20] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[19:21] * garphy is now known as garphy`aw
[19:26] * kawa2014 (~kawa@2.50.13.105) has joined #ceph
[19:29] * evelu (~erwan@37.163.126.203) Quit (Ping timeout: 480 seconds)
[19:30] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:d139:ef47:92aa:73d8) Quit (Ping timeout: 480 seconds)
[19:31] * madkiss1 (~madkiss@ip5b414c62.dynamic.kabel-deutschland.de) Quit (Quit: Leaving.)
[19:37] * Kyso_1 (~click@7V7AAEWKX.tor-irc.dnsbl.oftc.net) Quit ()
[19:37] * verbalins (~Pirate@safersocial7.lax.webair.com) has joined #ceph
[19:38] * evelu (~erwan@37.162.50.134) has joined #ceph
[19:43] * garphy`aw is now known as garphy
[19:46] * overclk (~quassel@117.202.96.84) Quit (Remote host closed the connection)
[19:49] * wes_dillingham (~wes_dilli@65.112.8.197) Quit (Quit: wes_dillingham)
[19:50] * madkiss (~madkiss@ip5b414c62.dynamic.kabel-deutschland.de) has joined #ceph
[19:57] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[20:00] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[20:01] * kefu|afk (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:03] * linuxkidd_ (~linuxkidd@166.170.46.255) has joined #ceph
[20:06] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[20:07] * verbalins (~Pirate@06SAACORK.tor-irc.dnsbl.oftc.net) Quit ()
[20:07] * datagutt (~RaidSoft@176.10.99.205) has joined #ceph
[20:08] * linuxkidd (~linuxkidd@38.sub-70-210-245.myvzw.com) Quit (Ping timeout: 480 seconds)
[20:11] * krypto (~krypto@G68-90-105-143.sbcis.sbc.com) Quit (Read error: Connection reset by peer)
[20:15] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[20:16] * phear (~hp1ng@mail.ap-team.ru) has joined #ceph
[20:20] * PoRNo-MoRoZ (~hp1ng@mail.ap-team.ru) Quit (Ping timeout: 480 seconds)
[20:21] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[20:25] * wes_dillingham (~wes_dilli@140.247.242.44) has left #ceph
[20:28] * vbellur1 (~vijay@122.172.243.182) has joined #ceph
[20:31] * vbellur (~vijay@122.178.228.61) Quit (Ping timeout: 480 seconds)
[20:37] * mykola (~Mikolaj@91.225.201.82) Quit (Read error: No route to host)
[20:37] * datagutt (~RaidSoft@7V7AAEWNL.tor-irc.dnsbl.oftc.net) Quit ()
[20:37] * Moriarty1 (~JamesHarr@192.42.115.101) has joined #ceph
[20:38] * mykola (~Mikolaj@91.225.201.82) has joined #ceph
[20:43] * sudocat (~dibarra@192.185.1.20) has left #ceph
[20:43] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[20:44] * lpabon (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[20:51] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[20:51] * jowilkin (~jowilkin@nat-pool-rdu-u.redhat.com) has joined #ceph
[20:59] * ktims_ (~ktims@zero.gotroot.ca) has joined #ceph
[21:00] <ktims_> is it intentional that 'rbd showmapped' errors with '-p' option?
[21:00] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[21:00] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[21:07] * Moriarty1 (~JamesHarr@06SAACOUP.tor-irc.dnsbl.oftc.net) Quit ()
[21:08] <vanham> ktims_, it doesn't here (hammer)
[21:08] <vanham> here rbd showmapped will ignore the -p option
[21:08] <ktims_> hmm weird. i get this 'rbd: unrecognised option '-p'
[21:09] <vanham> What version?
[21:09] <ktims_> jewel
[21:09] <ktims_> i thought i had upgraded but it appears not
[21:09] <vanham> Yeah, on my ceph jewel it doesn't work anymore
[21:10] <ktims_> sorry for the noise, i will get on current
[21:10] <ktims_> thx :)
[21:10] * kawa2014 (~kawa@2.50.13.105) Quit (Quit: Leaving)
[21:11] <vanham> Jewel is current!
[21:11] <vanham> It's the very latest!
[21:11] <vanham> :)
[21:11] <ktims_> well at least i'm not going crazy :)
[21:12] * Sun7zu (~Quackie@marcuse-1.nos-oignons.net) has joined #ceph
[21:14] <ktims_> vanham: have you used a workaround or ?
[21:14] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[21:15] <vanham> You don't need -p, it should show all the mapped rbds
[21:16] <ktims_> -p is added to the command in the ganeti code, I guess I will have to patch it or downgrade
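Since Jewel's rbd showmapped no longer accepts -p, one possible stop-gap (a sketch, not a tested Ganeti patch) is to list all mappings and filter on the pool column:

    # showmapped columns are: id pool image snap device
    rbd showmapped | awk -v pool=rbd '$2 == pool'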
[21:17] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[21:17] * ledgr (~ledgr@88-222-11-185.meganet.lt) has joined #ceph
[21:18] * ledgr (~ledgr@88-222-11-185.meganet.lt) Quit (Remote host closed the connection)
[21:18] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) has joined #ceph
[21:19] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:32] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[21:33] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Quit: Leaving.)
[21:37] * garphy is now known as garphy`aw
[21:38] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[21:40] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[21:42] * Sun7zu (~Quackie@4MJAAE8IM.tor-irc.dnsbl.oftc.net) Quit ()
[21:42] * Spikey1 (~ZombieTre@7V7AAEWRY.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:44] * xarses (~xarses@64.124.158.100) has joined #ceph
[21:45] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[21:47] * erwan_taf (~erwan@62.147.161.106) has joined #ceph
[21:48] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:49] * lpabon (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[21:50] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[21:53] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) Quit (Quit: Leaving.)
[21:54] * evelu (~erwan@37.162.50.134) Quit (Ping timeout: 480 seconds)
[21:54] * ade (~abradshaw@GK-84-46-90-18.routing.wtnet.de) Quit (Ping timeout: 480 seconds)
[21:56] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[22:02] * jclm (~jclm@ip68-96-198-45.lv.lv.cox.net) has joined #ceph
[22:02] * madkiss (~madkiss@ip5b414c62.dynamic.kabel-deutschland.de) Quit (Quit: Leaving.)
[22:03] <ktims_> so I'm trying to downgrade to hammer now, and ceph-deploy wants to start systemctl units that aren't installed by the hammer packages
[22:04] * cathode (~cathode@50.232.215.114) has joined #ceph
[22:07] * linjan (~linjan@176.195.246.55) has joined #ceph
[22:08] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:10] * mykola (~Mikolaj@91.225.201.82) Quit (Quit: away)
[22:12] * Spikey1 (~ZombieTre@7V7AAEWRY.tor-irc.dnsbl.oftc.net) Quit ()
[22:12] * xul (~datagutt@tor.laquadrature.net) has joined #ceph
[22:13] * Brochacho (~alberto@c-73-45-127-198.hsd1.il.comcast.net) Quit (Quit: Brochacho)
[22:13] * rendar (~I@host84-39-dynamic.60-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[22:24] <ktims_> i guess the conclusion is that the hammer packages are broken on ubuntu 16.04
[22:24] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[22:27] * dvanders_ (~dvanders@dvanders-pro.cern.ch) has joined #ceph
[22:27] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[22:27] * ledgr (~ledgr@88-119-196-104.static.zebra.lt) Quit (Remote host closed the connection)
[22:28] * garphy`aw is now known as garphy
[22:29] * jowilkin (~jowilkin@nat-pool-rdu-u.redhat.com) Quit (Remote host closed the connection)
[22:33] * dvanders (~dvanders@dvanders-pro.cern.ch) Quit (Ping timeout: 480 seconds)
[22:35] * winston-d_ (uid98317@id-98317.richmond.irccloud.com) Quit (Quit: Connection closed for inactivity)
[22:39] * rendar (~I@host84-39-dynamic.60-82-r.retail.telecomitalia.it) has joined #ceph
[22:42] * xul (~datagutt@7V7AAEWTD.tor-irc.dnsbl.oftc.net) Quit ()
[22:42] * qable (~Mattress@7V7AAEWUX.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:45] * georgem (~Adium@24.114.72.124) has joined #ceph
[22:45] * georgem (~Adium@24.114.72.124) Quit ()
[22:45] * georgem (~Adium@206.108.127.16) has joined #ceph
[22:46] * mnathani (~mnathani_@192-0-149-228.cpe.teksavvy.com) Quit (Read error: Connection reset by peer)
[22:47] * mnathani (~mnathani_@192-0-149-228.cpe.teksavvy.com) has joined #ceph
[22:49] * jordanP (~jordan@bdv75-2-81-57-250-57.fbx.proxad.net) has joined #ceph
[22:53] * ircolle (~Adium@2601:285:201:633a:a514:4a1e:839a:73ed) has joined #ceph
[22:54] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[23:05] * seosepa (~sepa@aperture.GLaDOS.info) Quit (Remote host closed the connection)
[23:06] * bene (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[23:06] * darkfader (~floh@88.79.251.60) Quit (Read error: Connection reset by peer)
[23:07] * seosepa (~sepa@aperture.GLaDOS.info) has joined #ceph
[23:07] * darkfader (~floh@88.79.251.60) has joined #ceph
[23:12] <neurodrone> Anyone using >1 OSDs per one disk?
[23:12] * qable (~Mattress@7V7AAEWUX.tor-irc.dnsbl.oftc.net) Quit ()
[23:12] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) Quit (Quit: valeech)
[23:12] * ircolle (~Adium@2601:285:201:633a:a514:4a1e:839a:73ed) Quit (Quit: Leaving.)
[23:16] * Knuckx (~MonkeyJam@06SAACO18.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:22] * vanham (~vanham@12.199.84.146) Quit (Read error: Connection reset by peer)
[23:26] * bene (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[23:30] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:32] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[23:33] * Miouge (~Miouge@h-4-155-222.a163.priv.bahnhof.se) Quit (Quit: Miouge)
[23:38] <diq> that's the recommendation for larger NVMe SSDs
[23:38] <diq> > 1 OSD per SSD
[23:38] <diq> I haven't done it, but I've seen it in a few whitepapers
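A rough sketch of how multiple OSDs per NVMe device are typically carved out with this era of tooling (device names, partition count, and the exact ceph-disk invocation are assumptions; details vary by release):

    # split the NVMe device into, say, four equal data partitions
    parted --script /dev/nvme0n1 mklabel gpt \
        mkpart osd1 0% 25%  mkpart osd2 25% 50% \
        mkpart osd3 50% 75% mkpart osd4 75% 100%
    # then prepare and activate one OSD per partition
    ceph-disk prepare /dev/nvme0n1p1
    ceph-disk activate /dev/nvme0n1p1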
[23:39] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[23:42] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[23:42] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:46] * Knuckx (~MonkeyJam@06SAACO18.tor-irc.dnsbl.oftc.net) Quit ()
[23:46] * demonspork (~N3X15@185.100.85.132) has joined #ceph
[23:53] * jordanP (~jordan@bdv75-2-81-57-250-57.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[23:58] * ircolle (~Adium@c-71-229-136-109.hsd1.co.comcast.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.