#ceph IRC Log

IRC Log for 2016-06-02

Timestamps are in GMT/BST.

[0:02] * matejz (~matejz@element.planetq.org) Quit (Quit: matejz)
[0:10] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[0:12] * mattbenjamin1 (~mbenjamin@12.118.3.106) Quit (Quit: Leaving.)
[0:14] * bene (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[0:14] * Wahmed (~wahmed@s75-158-44-99.ab.hsia.telus.net) Quit (Quit: Nettalk6 - www.ntalk.de)
[0:14] * chrisinajar (~drdanick@06SAADC4G.tor-irc.dnsbl.oftc.net) Quit ()
[0:14] * Skyrider (~cheese^@06SAADC54.tor-irc.dnsbl.oftc.net) has joined #ceph
[0:18] * antongribok (~antongrib@216.207.42.140) Quit (Quit: Leaving...)
[0:19] * penguinRaider (~KiKo@146.185.31.226) Quit (Ping timeout: 480 seconds)
[0:19] * rendar (~I@host148-178-dynamic.7-87-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:20] * vanham (~vanham@host-208-68-233-244.biznesshosting.net) has joined #ceph
[0:20] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[0:34] * penguinRaider (~KiKo@172.87.224.66) has joined #ceph
[0:38] * MrHeavy (~MrHeavy@pool-108-29-34-55.nycmny.fios.verizon.net) Quit (Quit: Leaving)
[0:44] * Skyrider (~cheese^@06SAADC54.tor-irc.dnsbl.oftc.net) Quit ()
[0:44] * pepzi (~Heliwr@h-2-71.a322.priv.bahnhof.se) has joined #ceph
[0:47] * fsimonce (~simon@host128-29-dynamic.250-95-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:47] * wgao (~wgao@106.120.101.38) Quit (Read error: Connection timed out)
[0:52] * xarses (~xarses@64.124.158.100) Quit (Remote host closed the connection)
[0:52] * xarses (~xarses@64.124.158.100) has joined #ceph
[0:53] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[0:59] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[1:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[1:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[1:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[1:05] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:09] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[1:11] * vanham (~vanham@host-208-68-233-244.biznesshosting.net) Quit (Ping timeout: 480 seconds)
[1:12] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) has joined #ceph
[1:14] * pepzi (~Heliwr@7V7AAFKQT.tor-irc.dnsbl.oftc.net) Quit ()
[1:14] * xolotl (~allenmelo@173.208.213.114) has joined #ceph
[1:22] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[1:24] * oms101 (~oms101@p20030057EA784E00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:32] * oms101 (~oms101@p20030057EA0F9200C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:35] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[1:44] * xolotl (~allenmelo@7V7AAFKSI.tor-irc.dnsbl.oftc.net) Quit ()
[1:44] * n0x1d (~Lunk2@relay1.tor.openinternet.io) has joined #ceph
[1:51] <ronrib> Does cephfs tend to have faster reads or writes? For example, if I can write at 500MB/s, should I be seeing read speeds of around 500MB/s too?
[1:51] * Qu310 (~qnet@qu310.qnet.net.au) Quit ()
[1:54] * squizzi (~squizzi@107.13.31.195) Quit (Remote host closed the connection)
[1:56] * realitysandwich (~perry@b2b-78-94-59-114.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[2:01] * tnarg (~oftc-webi@vpn.uberatc.com) has joined #ceph
[2:02] <tnarg> is this the right place to ask questions about librados usage?
[2:03] * wushudoin (~wushudoin@38.99.12.237) Quit (Ping timeout: 480 seconds)
[2:06] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[2:08] * BrianA (~BrianA@nrm-1c3-ag5500-02.tco.seagate.com) Quit (Read error: Connection reset by peer)
[2:08] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[2:14] * n0x1d (~Lunk2@7V7AAFKTV.tor-irc.dnsbl.oftc.net) Quit ()
[2:14] * Jebula (~Kealper@freedom.ip-eend.nl) has joined #ceph
[2:15] <joshd> tnarg: ask away
[2:17] * hellertime1 (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) has joined #ceph
[2:18] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[2:19] * hellertime (~Adium@a72-246-0-10.deploy.akamaitechnologies.com) Quit (Read error: Connection reset by peer)
[2:21] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[2:24] * vata (~vata@cable-192.222.249.207.electronicbox.net) has joined #ceph
[2:41] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[2:42] * LeaChim (~LeaChim@host86-168-126-119.range86-168.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:43] * vanham (~vanham@199.59.96.208) has joined #ceph
[2:44] * Jebula (~Kealper@7V7AAFKUV.tor-irc.dnsbl.oftc.net) Quit ()
[2:45] * TheDoudou_a (~pico@193.189.117.180) has joined #ceph
[2:50] * tnarg (~oftc-webi@vpn.uberatc.com) Quit (Ping timeout: 480 seconds)
[2:56] <badone> joshd: amazing how often that happens isn't it?
[2:57] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:57] * wgao (~wgao@106.120.101.38) has joined #ceph
[2:58] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[3:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:03] * northrup (~northrup@75-146-11-137-Nashville.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[3:07] * khyron (~khyron@187.207.11.87) Quit (Quit: The computer fell asleep)
[3:08] * khyron (~khyron@187.207.11.87) has joined #ceph
[3:09] * vbellur (~vijay@c-24-62-127-188.hsd1.ma.comcast.net) has joined #ceph
[3:09] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[3:09] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:14] <flaf> Hi. I'm trying to understand why the mount of a cephfs (via ceph-fuse) fails during boot. I have this error in /var/log/upstart/mountall.log: "ceph-fuse[744]: ceph mount failed with (1) Operation not permitted". I'm trying a naive (?) 'rgrep "Operation not permitted" ceph-git-repo' but I find nothing. How is it possible?
[3:14] * TheDoudou_a (~pico@4MJAAFUS6.tor-irc.dnsbl.oftc.net) Quit ()
[3:16] * khyron (~khyron@187.207.11.87) Quit (Ping timeout: 480 seconds)
[3:17] * vbellur (~vijay@c-24-62-127-188.hsd1.ma.comcast.net) Quit (Remote host closed the connection)
[3:17] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Ping timeout: 480 seconds)
[3:23] * Brochacho (~alberto@2601:243:504:6aa:b577:95f5:8944:bf7d) has joined #ceph
[3:25] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:28] * kefu (~kefu@183.193.182.2) has joined #ceph
[3:30] * shyu (~shyu@218.241.172.114) has joined #ceph
[3:30] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) has joined #ceph
[3:31] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[3:38] * kefu (~kefu@183.193.182.2) Quit (Ping timeout: 480 seconds)
[3:45] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:47] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[3:51] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[3:51] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) has joined #ceph
[3:55] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) Quit ()
[3:55] * georgem (~Adium@206.108.127.16) has joined #ceph
[3:58] * flisky (~Thunderbi@36.110.40.28) has joined #ceph
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[4:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[4:08] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[4:11] * shyu (~shyu@218.241.172.114) Quit (Ping timeout: 480 seconds)
[4:14] * Kurimus1 (~zviratko@torlesnet2.relay.coldhak.com) has joined #ceph
[4:15] <badone> flaf: try searching for "ceph mount failed with"
[4:16] <badone> "Operation not permitted??? would probably come from a call to strerror or some such function
[4:17] <flaf> Ok, indeed. Thx.
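(A minimal C++ sketch of badone's point: the message text comes out of libc's strerror(), which is why grepping the ceph sources for the literal string finds nothing.)

    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main() {
        // EPERM is errno 1; the text "Operation not permitted" lives in libc,
        // not in the ceph tree
        std::printf("ceph mount failed with (%d) %s\n", EPERM, std::strerror(EPERM));
        return 0;
    }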
[4:18] <flaf> In fact, I think I have found my problem and it's probably a bug, but not necessarily a ceph bug... I'm not sure.
[4:19] <flaf> I have a valid line in fstab and the arguments given to /sbin/mount.fuse.ceph are not exactly the same.
[4:21] <flaf> I have "client_mountpoint=/" in my fstab but /sbin/mount.fuse.ceph receives "client_mountpoint=" without the "/".
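(For context, a fuse.ceph fstab entry of the kind flaf describes looks roughly like this; the mount point and id are placeholders, and mount.fuse.ceph is supposed to hand the option string through to ceph-fuse intact.)

    id=admin,client_mountpoint=/  /mnt/cephfs  fuse.ceph  defaults,_netdev  0 0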
[4:21] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[4:21] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[4:22] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[4:22] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[4:23] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:32] <ronrib> i'm seeing some slow read speeds via cephfs, writes are fine (1GB/s) but reads are only around 100MB/s, an RBD on the same pool can manage about 700MB/s read and write
[4:39] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:44] * Kurimus1 (~zviratko@4MJAAFUWM.tor-irc.dnsbl.oftc.net) Quit ()
[4:44] * Rens2Sea (~Kalado@node1.tor-relay.xyz) has joined #ceph
[4:53] * NTTEC (~nttec@119.93.91.136) has joined #ceph
[4:53] <ronrib> multiple read threads all running at the same time get the same speed, i'm guessing it's a latency issue
[4:54] * tnarg (~oftc-webi@vpn.uberatc.com) has joined #ceph
[4:55] * adun153 (~adun153@188.166.241.45) Quit (Quit: Leaving IRC - dircproxy 1.0.5)
[4:55] * shyu (~shyu@218.241.172.114) has joined #ceph
[4:55] * adun153 (~adun153@188.166.241.45) has joined #ceph
[4:55] <tnarg> does librados::Rados deal with reconnecting? What exactly does Rados::connect() connect to? The monitors?
[4:56] <tnarg> I imagine dropped connections to OSDs would be not uncommon
[4:57] <tnarg> Basically I'm trying to understand what if any retry policy exists, and what I need to build myself
[4:58] <badone> tnarg: probably more appropriate for #ceph-devel and you might get better answers closer to EMEA or NA time
[4:58] <tnarg> thanks
[4:59] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[4:59] <badone> tnarg: np, there's the ceph-devel mailing list as well
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[5:01] <joshd> tnarg: librados handles reconnection and resending etc. for you
[5:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[5:01] <joshd> the behavior in general is to block and retry rather than propagating that sort of transient error
[5:03] <tnarg> interesting. does it expose any stats like retry count, latency, etc.?
[5:04] <gregsfortytwo> not on a per-op basis; that's all quite deliberately transparent to the user
[5:04] <joshd> there are some aggregate stats you can see with the admin socket 'perf dump' command
[5:05] <gregsfortytwo> if you look through the perfcounters in the internal source ("perf dump" via the admin socket, and maybe something programmatic?) you could get some info out of those
[5:05] <gregsfortytwo> jynx :p
[5:05] <joshd> yeah :) the best source for that if you're really curious is src/osdc/Objecter.cc
[5:06] <gregsfortytwo> for specific ops you would need to track it yourself, but...there's not much point; all the error handling and reconnection is handled for you so it's not like you can respawn the request or direct it somewhere else
[5:07] <tnarg> thanks!
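(A minimal librados C++ sketch of what joshd describes; the pool and object names are assumptions. connect() bootstraps against the monitors, and subsequent I/O blocks and retries internally across OSD flaps rather than surfacing transient errors. Build with g++ -lrados.)

    #include <rados/librados.hpp>
    #include <iostream>

    int main() {
        librados::Rados cluster;
        cluster.init("admin");                         // connect as client.admin
        cluster.conf_read_file("/etc/ceph/ceph.conf"); // mon addresses, keyring path
        if (cluster.connect() < 0) {                   // talks to the monitors
            std::cerr << "connect failed" << std::endl;
            return 1;
        }
        librados::IoCtx io;
        cluster.ioctx_create("rbd", io);               // pool name is an assumption
        librados::bufferlist bl;
        // len == 0 reads the whole object; the call blocks and is retried
        // internally if OSD connections drop, per the discussion above
        int r = io.read("some-object", bl, 0, 0);
        std::cout << "read returned " << r << std::endl;
        cluster.shutdown();
        return 0;
    }

The aggregate counters joshd and gregsfortytwo mention are read from a daemon's local admin socket, e.g. ceph daemon osd.0 perf dump run on the OSD's own host.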
[5:11] <rkeene> Did you guys see the Torus announcement ? Has this already been discussed ? I've been busy all day.
[5:11] <badone> joshd, gregsfortytwo: thanks for jumping in... guess I should have been able to answer some of that since we block indefinitely in case of OSD problems, etc. :P
[5:11] <badone> thus not propagating an error to the app layer
[5:12] * hellertime1 (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[5:13] <rkeene> From what I looked at in Torus it was a big downer -- missing the feature I want the most, checksumming... my distributed replicating filesystem PLAN is based on checksumming, incidentally... but it'll take man-months to get all the features I get with Ceph and I just have not had time to get much of it done
[5:13] <badone> wasn't sure whether we ever did though...
[5:14] * Rens2Sea (~Kalado@4MJAAFUXU.tor-irc.dnsbl.oftc.net) Quit ()
[5:14] <badone> anyway... lunch...
[5:15] <joshd> rkeene: nice quote in the channel earlier "antongribok: Apologies for slightly off topic post... In all my years of running Ceph and reading Hacker News (not at the same time) I've never seen so many mentions of Ceph in the comments about another storage solution: https://news.ycombinator.com/item?id=11816122"
[5:17] <rkeene> I only read the comments this morning when it was new, there were only a couple of references to Ceph at that time
[5:22] * adun153 (~adun153@188.166.241.45) Quit (Quit: Terminated with extreme prejudice - dircproxy 1.0.5)
[5:23] * evilrob is looking for someone to do saltstack+ceph+openstack+linux -- infrastructure as code style devops
[5:24] <rkeene> evilrob, My company sells that, except without OpenStack -- we use OpenNebula instead
[5:24] * AndChat-74900 (~AndChat74@180.168.197.82) has joined #ceph
[5:24] <AndChat-74900> hi, all
[5:25] <rkeene> It works great -- plug in new nodes, tell them to PXE boot and they automatically register their resources to Ceph or OpenNebula depending which native VLAN they booted from -- the system manages the PXE server, you just give it the VLAN IDs
[5:26] <evilrob> rkeene: I'm hiring, not buying. Sounds interesting though.
[5:26] <rkeene> evilrob, The first iteration was using OpenStack... it was terrible.
[5:27] <rkeene> evilrob, I gave a presentation on our work at the OpenNebula TechDay at Harvard about a month ago
[5:28] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[5:29] <rkeene> http://opennebula.org/community/techdays/techday-cambridge-2016/ search for "Roy Keene" to see it... but mostly I was talking, so the slides aren't helpful... I meant to record it
[5:31] * ceph_keke (~AndChat74@180.168.197.82) has joined #ceph
[5:32] * AndChat-74900 (~AndChat74@180.168.197.82) Quit (Ping timeout: 480 seconds)
[5:34] * penguinRaider (~KiKo@172.87.224.66) Quit (Ping timeout: 480 seconds)
[5:35] * AndChat|74900 (~AndChat74@180.168.126.179) has joined #ceph
[5:37] * jsweeney (~oftc-webi@sky-78-19-113-132.bas512.cwt.btireland.net) Quit (Ping timeout: 480 seconds)
[5:40] * xarses_ (~xarses@73.93.155.59) has joined #ceph
[5:40] * ceph_keke (~AndChat74@180.168.197.82) Quit (Ping timeout: 480 seconds)
[5:40] * ceph_keke (~AndChat74@180.168.170.2) has joined #ceph
[5:40] * xarses_ (~xarses@73.93.155.59) Quit (Read error: Connection reset by peer)
[5:40] * Brochacho (~alberto@2601:243:504:6aa:b577:95f5:8944:bf7d) Quit (Quit: Brochacho)
[5:41] * xarses_ (~xarses@73.93.155.59) has joined #ceph
[5:41] * xarses_ (~xarses@73.93.155.59) Quit (Remote host closed the connection)
[5:43] * xarses_ (~xarses@73.93.155.59) has joined #ceph
[5:43] * AndChat|74900 (~AndChat74@180.168.126.179) Quit (Ping timeout: 480 seconds)
[5:43] * xarses_ (~xarses@73.93.155.59) Quit (Remote host closed the connection)
[5:43] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[5:44] * xarses_ (~xarses@73.93.155.59) has joined #ceph
[5:44] * Pettis1 (~Inverness@tor-exit.dhalgren.org) has joined #ceph
[5:44] <vanham> Guys, just got into a very common problem here, I hope there is some doc for it already
[5:44] <vanham> I removed two OSDs at the same time and now one pg is down
[5:45] <vanham> I have the data on the old OSD, which is UP now, but ceph won't copy from it
[5:45] * AndChat|74900 (~AndChat74@180.168.197.82) has joined #ceph
[5:45] <vanham> Ceph doesn't seem to remember that this pg was on that old OSD
[5:46] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:46] * penguinRaider (~KiKo@14.139.82.6) has joined #ceph
[5:48] * ceph_keke (~AndChat74@180.168.170.2) Quit (Ping timeout: 480 seconds)
[5:49] * yatin (~yatin@182.71.248.238) has joined #ceph
[5:50] * ceph_keke (~AndChat74@180.168.170.2) has joined #ceph
[5:51] * Vacuum__ (~Vacuum@88.130.208.86) has joined #ceph
[5:52] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[5:53] <rkeene> Is there a quick way to get the IP of a monitor node that is currently working ?
[5:53] * AndChat|74900 (~AndChat74@180.168.197.82) Quit (Ping timeout: 480 seconds)
[5:55] * AndChat|74900 (~AndChat74@222.73.33.154) has joined #ceph
[5:58] * ceph_ke (~chris@180.168.197.82) has joined #ceph
[5:58] * ceph_keke (~AndChat74@180.168.170.2) Quit (Ping timeout: 480 seconds)
[5:58] * Vacuum_ (~Vacuum@i59F796F6.versanet.de) Quit (Ping timeout: 480 seconds)
[5:59] * xarses_ (~xarses@73.93.155.59) Quit (Ping timeout: 480 seconds)
[5:59] * ram (~oftc-webi@static-202-65-140-146.pol.net.in) Quit (Ping timeout: 480 seconds)
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[6:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[6:03] * AndChat|74900 (~AndChat74@222.73.33.154) Quit (Ping timeout: 480 seconds)
[6:04] * tnarg (~oftc-webi@vpn.uberatc.com) Quit (Ping timeout: 480 seconds)
[6:05] <ceph_ke> #ceph
[6:05] * deepthi (~deepthi@122.172.171.121) has joined #ceph
[6:08] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[6:08] <ceph_ke> anyone talking ?
[6:14] * Pettis1 (~Inverness@4MJAAFUZS.tor-irc.dnsbl.oftc.net) Quit ()
[6:19] * anadrom (~jwandborg@85.159.237.210) has joined #ceph
[6:26] * shyu (~shyu@218.241.172.114) Quit (Ping timeout: 480 seconds)
[6:29] * ceph_ke (~chris@180.168.197.82) Quit (Quit: Leaving)
[6:30] * ceph_ke (~chris@180.168.197.82) has joined #ceph
[6:30] * rakeshgm (~rakesh@106.51.26.213) Quit (Remote host closed the connection)
[6:31] * ceph_ke (~chris@180.168.197.82) Quit ()
[6:32] * gauravbafna (~gauravbaf@122.172.230.136) has joined #ceph
[6:32] * ceph_devel (~chris@180.168.197.82) has joined #ceph
[6:32] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[6:36] * viisking (~viisking@183.80.255.12) has joined #ceph
[6:37] * yk (~yatin@161.163.44.8) has joined #ceph
[6:38] * gauravbafna (~gauravbaf@122.172.230.136) Quit (Read error: Connection reset by peer)
[6:38] * gauravbafna (~gauravbaf@122.172.231.38) has joined #ceph
[6:44] * yatin (~yatin@182.71.248.238) Quit (Ping timeout: 480 seconds)
[6:48] * kefu_ (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[6:49] * anadrom (~jwandborg@06SAADDL8.tor-irc.dnsbl.oftc.net) Quit ()
[6:50] * gauravbafna (~gauravbaf@122.172.231.38) Quit (Remote host closed the connection)
[6:51] * gauravbafna (~gauravbaf@122.172.231.38) has joined #ceph
[6:55] * matejz (~matejz@element.planetq.org) has joined #ceph
[6:58] * Miouge (~Miouge@188.189.94.56) has joined #ceph
[7:00] * gauravbafna (~gauravbaf@122.172.231.38) Quit (Remote host closed the connection)
[7:00] * Miouge (~Miouge@188.189.94.56) Quit ()
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[7:03] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:03] * gauravbafna (~gauravbaf@122.172.231.38) has joined #ceph
[7:03] * vata (~vata@cable-192.222.249.207.electronicbox.net) Quit (Quit: Leaving.)
[7:11] * gauravbafna (~gauravbaf@122.172.231.38) Quit (Remote host closed the connection)
[7:11] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) Quit (Read error: Connection reset by peer)
[7:11] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) has joined #ceph
[7:13] * gauravbafna (~gauravbaf@122.172.231.38) has joined #ceph
[7:14] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:15] * spgriffinjr (~spgriffin@66-46-246-206.dedicated.allstream.net) Quit (Ping timeout: 480 seconds)
[7:17] * matejz (~matejz@element.planetq.org) Quit (Quit: matejz)
[7:18] * Miouge (~Miouge@188.189.94.56) has joined #ceph
[7:19] * cheese^ (~Neon@tor.piratenpartei-nrw.de) has joined #ceph
[7:20] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) Quit (Ping timeout: 480 seconds)
[7:21] * ceph_devel (~chris@180.168.197.82) Quit (Quit: Leaving)
[7:21] * ceph_devel (~chris@180.168.197.82) has joined #ceph
[7:23] * kefu (~kefu@183.193.182.2) has joined #ceph
[7:27] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[7:30] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) has joined #ceph
[7:31] * kefu (~kefu@183.193.182.2) Quit (Ping timeout: 480 seconds)
[7:32] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Quit: Leaving)
[7:41] * overclk (~quassel@117.202.96.124) has joined #ceph
[7:41] * karnan (~karnan@106.51.130.73) has joined #ceph
[7:42] * brad- (~Brad@184-94-17-26.dedicated.allstream.net) has joined #ceph
[7:45] * gauravba_ (~gauravbaf@122.178.205.244) has joined #ceph
[7:46] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) Quit (Ping timeout: 480 seconds)
[7:47] * gauravba_ (~gauravbaf@122.178.205.244) Quit (Remote host closed the connection)
[7:49] * cheese^ (~Neon@4MJAAFU26.tor-irc.dnsbl.oftc.net) Quit ()
[7:49] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) has joined #ceph
[7:50] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[7:50] * gauravbafna (~gauravbaf@122.172.231.38) Quit (Ping timeout: 480 seconds)
[7:50] * brad- (~Brad@184-94-17-26.dedicated.allstream.net) Quit (Ping timeout: 480 seconds)
[7:50] <NTTEC> hello, anyone here?
[7:51] <NTTEC> does anyone have an idea about this?
[7:51] <NTTEC> [2016-05-30 19:15:09,001][ceph03][WARNING] command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/local/osd0/activate.monmap --osd-data /var/local/osd0 --osd-journal /var/local/osd0/journal --osd-uuid ef026140-af89-44b5-b19d-64a2852a2d08 --keyring /var/local/osd0/keyring --setuser ceph --setgroup ceph
[7:51] <NTTEC> [2016-05-30 19:15:09,017][ceph03][WARNING] 2016-05-30 19:15:09.176821 7fd19a72d8c0 -1 filestore(/var/local/osd0) mkfs: write_version_stamp() failed: (13) Permission denied
[7:51] <NTTEC> [2016-05-30 19:15:09,017][ceph03][WARNING] 2016-05-30 19:15:09.176838 7fd19a72d8c0 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
[7:51] <NTTEC> [2016-05-30 19:15:09,017][ceph03][WARNING] 2016-05-30 19:15:09.176875 7fd19a72d8c0 -1 ** ERROR: error creating empty object store in /var/local/osd0: (13) Permission denied
[7:51] * gauravbafna (~gauravbaf@122.178.205.244) has joined #ceph
[7:52] * rraja (~rraja@121.244.87.117) has joined #ceph
[7:55] * gauravbafna (~gauravbaf@122.178.205.244) Quit (Remote host closed the connection)
[7:56] * brad- (~Brad@184-94-17-26.dedicated.allstream.net) has joined #ceph
[8:00] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) Quit (Ping timeout: 480 seconds)
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[8:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[8:03] * spgriffinjr (~spgriffin@66.46.246.206) has joined #ceph
[8:03] * badone (~badone@113.29.24.218) Quit (Ping timeout: 480 seconds)
[8:03] <NTTEC> nvrmind, it was just a chmod issue
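(For the record: with jewel/infernalis-era packages the OSD runs as user ceph, so the usual fix for this mkfs permission error is ownership rather than mode; the path is taken from the paste above.)

    chown -R ceph:ceph /var/local/osd0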
[8:04] * vicente_ (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[8:05] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[8:06] * badone (~badone@66.187.239.16) has joined #ceph
[8:07] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:10] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[8:14] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[8:18] * northrup (~northrup@173.14.101.193) has joined #ceph
[8:19] <northrup> ok - I am trying to set up CephFS and keep getting this awesome, nondescript error on all the clients: mount error 5 = Input/output error
[8:20] <northrup> the clients can all reach the mon servers
[8:20] <northrup> the clients can all reach the mds servers
[8:20] <northrup> the mon, mds, and osd services are running on all the appropriate targets
[8:20] <northrup> ceph health is OK
[8:20] <northrup> what on earth am I missing?
[8:21] <northrup> ( also, the ceph-fs-common HAS been installed on all the clients )
[8:21] <Be-El> northrup: are the clients able to reach the osd server?
[8:22] <Be-El> northrup: and does the key used by cephfs on client side allow it to access all necessary pool + services?
[8:22] <northrup> the clients can reach all the OSD servers
[8:23] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[8:23] <northrup> and what I have in /etc/ceph/admin.secret is the text of the key found in ceph.client.admin.keyring
[8:23] <northrup> which I presume to be the correct thing?
[8:23] <northrup> or does admin.secret need to be in the same format as ceph.client.admin.keyring?
[8:24] <Be-El> if you use ceph-fuse, you need the key in keyring format. for kernel based cephfs you need just the key string
[8:24] <Be-El> it's easier to start with ceph-fuse since the kernel stuff might have problems with current ceph releases
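(To make Be-El's distinction concrete; the key value below is a placeholder.)

    # keyring format, used by ceph-fuse and the ceph CLI
    # (e.g. /etc/ceph/ceph.client.admin.keyring):
    [client.admin]
            key = AQDpLU9XAAAAABAAgOBdD1nZqJcVSrsL5f2dXw==   # placeholder

    # kernel-client secretfile (e.g. /etc/ceph/admin.secret):
    # the bare key string only, nothing else
    AQDpLU9XAAAAABAAgOBdD1nZqJcVSrsL5f2dXw==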
[8:25] <northrup> I can't use ceph-fuse because of project restrictions placed upon me.
[8:25] <northrup> So I do have the secret file correct...
[8:25] <northrup> is there any way to make mount.ceph MORE verbose as to what that message actually means?
[8:26] <Be-El> northrup: do you use mount or mount.ceph?
[8:26] <northrup> I'm trying mount.ceph
[8:27] <northrup> the command I'm issuing: mount.ceph ceph-mon01,ceph-mon02:/ /opt/gitlab -o name=admin,secretfile=/etc/ceph/admin.secret,rsize=1048576
[8:27] <Be-El> and the verbose flag does not give you the necessary information?
[8:28] <northrup> bah... it's bitching about rsize
[8:28] <northrup> :-/
[8:28] <northrup> ceph: Unknown mount option rsize
[8:29] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) has joined #ceph
[8:29] * spgriffi_ (~spgriffin@66.46.246.206) has joined #ceph
[8:29] <Be-El> well, rsize is listed as supported option in the manpage
[8:30] <viisking> Hi everyone
[8:30] <viisking> I have a 3-node cluster
[8:31] <viisking> each node runs both the OSD and Mon daemons
[8:31] <northrup> Be-El it parses the options now, but still just dumps to "mount error 5 = Input/output error" even in verbose mode
[8:31] * matejz (~matejz@141.255.254.208) has joined #ceph
[8:31] <viisking> when testing the HA, I shut down one node
[8:31] <Be-El> northrup: and the kernel log does not contain any hints?
[8:32] * matejz is now known as matj345314
[8:32] <viisking> however #ceph osd tree on other nodes
[8:32] <viisking> it still shows that all 3 OSDs are up
[8:32] <viisking> I'm not sure what's wrong with my setup
[8:33] <viisking> does anyone have experience with this?
[8:33] * spgriffinjr (~spgriffin@66.46.246.206) Quit (Ping timeout: 480 seconds)
[8:33] <Be-El> viisking: how long did you wait after shutting down the node?
[8:33] * brad- (~Brad@184-94-17-26.dedicated.allstream.net) Quit (Ping timeout: 480 seconds)
[8:33] <northrup> Be-El Hmm.... "mon0 172.31.28.252:6789 socket error on read" followed by "libceph: mon1 172.31.28.250:6789 feature set mismatch, my 4a042aca < server's 2004a042aca, missing 20000000000"
[8:33] <viisking> I didn't wait
[8:34] <Be-El> northrup: so you need a newer kernel
[8:34] <viisking> just issued the command on the other nodes
[8:34] <Be-El> viisking: it takes some time for the cluster to recognize that a host is down (afaik 30 or 60 seconds)
[8:34] <viisking> ah, I even tried after 2-3 mins, still the same
[8:35] <northrup> Be-El so the kernel on the client must match the kernel on the server?
[8:35] * sileht (~sileht@gizmo.sileht.net) Quit (Ping timeout: 480 seconds)
[8:35] <Be-El> viisking: with 3 nodes you may have to tune the parameters for heartbeat detection
[8:36] <Be-El> northrup: no, but the kernel on the client has to support the ceph release
[8:36] <Be-El> northrup: which ceph release do you use and what's the kernel version on the clients?
[8:36] <viisking> oh, what's that exactly?
[8:36] <northrup> jewel is what I'm using
[8:37] <northrup> currently 3.13.0-74, I'm rolling to 3.13.0-87 now
[8:37] <Be-El> viisking: see http://docs.ceph.com/docs/master/rados/configuration/mon-osd-interaction/
[8:37] <viisking> will do. Thanks a lot Be-El :)
[8:37] <Be-El> viisking: with one out of three hosts down you may not reach the necessary number of reporters
[8:38] <viisking> could you explain more?
[8:39] <Be-El> northrup: jewel is supported by the latest kernels only, e.g. 4.4/4.5 or newer (not sure about the version)
[8:39] <viisking> on the okay hosts, they notice immediately that the Monitor is down, but not the OSD
[8:39] <Be-El> viisking: the mons collect heartbeat reports from the osds (so osds detect that other osds are down)
[8:40] <Be-El> viisking: the mons need a certain number of reports from a certain number of different osds to mark an osd as out or down
[8:40] <viisking> I see
[8:41] <northrup> Be-El can you point me at the documentation for that? I can't seem to find a solid reference for that
[8:41] <viisking> so in my scenario, I might not see OSD down?
[8:41] <Be-El> viisking: the mons also try to detect missing osds themselves, but with a much larger timeout (mon osd report timeout)
[8:41] <Be-El> viisking: or the mons do not get enough reports
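(A hedged ceph.conf sketch of the knobs Be-El is pointing at; the values are illustrative, defaults per the jewel-era docs linked above.)

    [mon]
            # default 900 s: how long the mons wait before marking an
            # osd down on their own, without peer reports
            mon osd report timeout = 300
            # default 1-2 depending on release: how many distinct osds
            # must report a peer before it is marked down
            mon osd min down reporters = 1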
[8:41] * northrup (~northrup@173.14.101.193) Quit (Max SendQ exceeded)
[8:42] <viisking> I see. Will dig a bit deeper into it
[8:43] * northrup (~northrup@173.14.101.193) has joined #ceph
[8:43] * mohmultihouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[8:44] <northrup> sorry - got dropped....
[8:45] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[8:47] * northrup (~northrup@173.14.101.193) Quit (Max SendQ exceeded)
[8:51] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[8:52] * penguinRaider (~KiKo@14.139.82.6) Quit (Ping timeout: 480 seconds)
[8:53] * rushworld (~Gibri@atlantic480.us.unmetered.com) has joined #ceph
[8:56] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[8:56] * huangjun (~kvirc@113.57.168.154) Quit (Read error: Connection reset by peer)
[8:56] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[8:57] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[9:00] * kefu_ is now known as kefu
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[9:01] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[9:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[9:04] * ade (~abradshaw@nat-pool-str-u.redhat.com) has joined #ceph
[9:05] * NTTEC (~nttec@119.93.91.136) Quit (Ping timeout: 480 seconds)
[9:09] <viisking> Be-El: ah, it shows the correct info now, 2 OSDs up, one down
[9:09] <vanham> Guys, I need your help please. The cluster is down because I lost 1 osd.
[9:10] <viisking> I have to wait around 15 mins
[9:10] <vanham> 9 pgs are marked incomplete
[9:10] <vanham> Tried setting osd.12 as lost, removing, etc
[9:10] <viisking> Be-El: Thanks so much
[9:10] <Be-El> viisking: so that's the 900 seconds default for mon<->osd timeout
[9:11] <Be-El> viisking: if you lower that value the mons should detect the missing osds faster
[9:11] <viisking> yeah
[9:11] <vanham> ceph pg query will tell peering_blocked_by_history_les_bound
[9:11] <viisking> should I lower this value?
[9:11] <vanham> But then I set it to lost and then it will say that it is not peering because osd is down
[9:11] <viisking> or it is recommended?
[9:12] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[9:12] * ade (~abradshaw@nat-pool-str-u.redhat.com) Quit (Ping timeout: 480 seconds)
[9:12] <Be-El> viisking: you probably need to adjust some of the heartbeat monitoring values in a 3-node scenario. but i'm not an expert on that
[9:13] <vanham> I already read everything in the docs
[9:14] <viisking> Be-El: Thanks
[9:15] <Be-El> vanham: are you able to bring back that failed osd?
[9:16] <vanham> No
[9:16] * analbeard (~shw@support.memset.com) has joined #ceph
[9:16] <vanham> Don't mind returning to an old version of that pg
[9:16] <vanham> The other copies are there
[9:17] <vanham> osd.12 is totally lost
[9:17] <vanham> Right now pg query will say peering_blocked_by_history_les_bound
[9:18] * kutija (~kutija@89.216.27.139) has joined #ceph
[9:18] <vanham> I just want to make the other copies active
[9:18] <Be-El> i do not have much experience with incomplete pgs, but there might be two solutions: attempt a pg repair, or bring up a fake osd.12 using the same ceph key
[9:19] <Be-El> though i'm not sure which solution destroys more of your data
[9:19] <vanham> I tried a pg repair already
[9:20] <vanham> how can i do that second option?
[9:20] <Be-El> you may want to wait until more people are awake and may give you better hints
[9:20] <Be-El> or write to the ceph mailing list
[9:20] <vanham> Not an option really Be-El
[9:20] <vanham> I have less than 2 hours to bring this back online
[9:20] <vanham> Don't mind losing a few minutes of data
[9:21] <Be-El> is osd.12 still part of the ceph configuration, or did you remove it completely?
[9:21] <vanham> Between tries I did a ceph osd rm 12 once or twice
[9:21] <vanham> Brought it online, then down, then lost...
[9:22] <Be-El> did you also remove it from crush?
[9:22] <vanham> no
[9:22] <vanham> same place
[9:23] <vanham> maybe getting it out of crush will make the other copies active?
[9:23] * ade (~abradshaw@nat-pool-str-t.redhat.com) has joined #ceph
[9:23] * rushworld (~Gibri@4MJAAFU6X.tor-irc.dnsbl.oftc.net) Quit ()
[9:23] * Scaevolus (~SinZ|offl@tor-exit.insane.us.to) has joined #ceph
[9:23] <Be-El> ok, without any guarantee that this will not eat all data in the cluster: remove osd.12 from crush and as osd
[9:23] <vanham> ok
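(The usual removal sequence, subject to Be-El's no-guarantees caveat above.)

    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12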
[9:24] * NTTEC (~nttec@203.177.235.23) has joined #ceph
[9:25] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Read error: Connection reset by peer)
[9:25] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:25] <vanham> pgs went into peering and then back to incomplete
[9:26] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[9:26] <vanham> still some data moving around
[9:26] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[9:27] <Be-El> can you upload the ceph pg query output to some pastebin?
[9:28] <vanham> http://paste.ubuntu.com/16916058/
[9:29] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[9:32] <Be-El> osd 13 does not have any data for that pg
[9:32] <vanham> osd.9 has a good copy
[9:32] <vanham> osd.13 doesn't even have the directory
[9:33] <vanham> Sorry
[9:33] <vanham> wait
[9:33] <vanham> wrong pg
[9:33] * gauravbafna (~gauravbaf@49.32.0.108) has joined #ceph
[9:33] <vanham> osd.13 has a blank copy
[9:34] <vanham> osd.9 has all the objects
[9:35] <Be-El> and your replication size is 2?
[9:36] * swami1 (~swami@49.32.0.112) has joined #ceph
[9:36] <vanham> yes
[9:36] <vanham> Thinking about https://www.sebastien-han.fr/blog/2015/04/27/ceph-manually-repair-object/
[9:37] <Be-El> or http://ceph.com/community/incomplete-pgs-oh-my/
[9:37] <Be-El> short form: export the pg's content with ceph-objectstore-tool, import it in osd.13
[9:38] <Be-El> but i didn't ever have to use ceph-objectstore-tool, so i cannot help you with it
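(From the post linked above, the export/import goes roughly like this; the pgid and paths here are hypothetical, and both OSDs must be stopped while the tool runs.)

    # on the node holding the good copy (osd.9):
    ceph-objectstore-tool --op export --pgid 3.52 \
        --data-path /var/lib/ceph/osd/ceph-9 \
        --journal-path /var/lib/ceph/osd/ceph-9/journal \
        --file /tmp/pg3.52.export
    # on the node with the empty copy (osd.13):
    ceph-objectstore-tool --op import \
        --data-path /var/lib/ceph/osd/ceph-13 \
        --journal-path /var/lib/ceph/osd/ceph-13/journal \
        --file /tmp/pg3.52.export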
[9:38] * rotbeard (~redbeard@aftr-109-90-233-106.unity-media.net) has joined #ceph
[9:38] <vanham> Thank you very much Be-El. I'll let you know
[9:39] * realitysandwich (~perry@b2b-78-94-59-114.unitymedia.biz) has joined #ceph
[9:39] <viisking> Be-El: Do you know why on my nodes, there is no /etc/init.d/ceph
[9:40] <viisking> I can only restart service using #systemctl restart ceph-mon.target?
[9:40] <viisking> or ceph-osd.target
[9:40] <viisking> I'm using Jewel (10.2.1) and Centos7
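(Jewel on CentOS 7 ships systemd units rather than the old sysvinit script, hence no /etc/init.d/ceph. Individual daemons are addressed as template instances, a sketch:)

    systemctl restart ceph-osd@0                # one osd, by id
    systemctl restart ceph-mon@$(hostname -s)   # the local mon (id is usually the short hostname)
    systemctl restart ceph-osd.target           # all osds on this host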
[9:40] <swami1> hi
[9:41] <swami1> is encryption for data in rest supported in ceph?
[9:41] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Ping timeout: 480 seconds)
[9:49] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) has joined #ceph
[9:51] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[9:53] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[9:53] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:53] * Scaevolus (~SinZ|offl@06SAADDTN.tor-irc.dnsbl.oftc.net) Quit ()
[9:53] <vanham> Be-El, didn't work!
[9:54] <vanham> oh my god
[9:55] <Be-El> vanham: which part didn't work?
[9:55] <vanham> I did everything on the website
[9:55] <vanham> now both osds have the same copy
[9:56] <vanham> but it's still listed as incomplete, pending history blah blah blah
[9:56] <vanham> It still thinks it's an old version
[9:57] <Be-El> does ceph osd dump still list osd.12?
[9:57] <vanham> nope
[9:57] <vanham> it's not in crush either
[9:58] <Be-El> but it is still shown in ceph pg query for the incomplete one?
[9:58] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[9:59] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[9:59] * pabluk_ is now known as pabluk
[10:00] <vanham> yeap
[10:00] * NTTEC (~nttec@203.177.235.23) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[10:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[10:01] <vanham> yeah, tried again with different pg and still didn't work
[10:08] <vanham> I guess it's wait until someone wakes up then... :(
[10:08] <Be-El> vanham: ok, last attempt i've found on the mailing list: temporarily restart the _primary_ osd of the affected pg with osd_find_best_info_ignore_history_les set to true in ceph.conf
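(For reference, that temporary override would look something like this on the primary's host; the section name is hypothetical, and the setting should be removed again once the pg has peered.)

    [osd.9]    # hypothetical: the primary of the stuck pg
            osd find best info ignore history les = true

    # then restart just that osd and watch 'ceph pg <pgid> query'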
[10:08] <vanham> Awesome!!!!
[10:08] <vanham> Wow!
[10:08] <vanham> Wow!!!
[10:08] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[10:12] * Miouge_ (~Miouge@188.189.73.154) has joined #ceph
[10:14] * dgurtner (~dgurtner@178.197.233.142) has joined #ceph
[10:15] <vanham> And, when you read that "Wow!!!", think like the chewbacca mask girl says it, because it worked!
[10:15] * Miouge (~Miouge@188.189.94.56) Quit (Ping timeout: 480 seconds)
[10:15] * Miouge_ is now known as Miouge
[10:15] <vanham> Man, I can't thank you enough
[10:15] <vanham> I owe you big time
[10:16] <vanham> I really do
[10:18] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[10:18] <vanham> only one went to an eternal peering mode, but that's easy to fix with the previous method
[10:19] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[10:21] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[10:22] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:25] <s3an2> vikhyat, Thanks for your help on Tuesday - after recovery finished the mon store size dropped from 30GB to 500MB without the need to call compact.
[10:27] * ndru_ (~jawsome@104.236.94.35) has joined #ceph
[10:28] * KungFuHamster (~DJComet@tor-exit-2.netdive.xyz) has joined #ceph
[10:30] * evilrob_ (~evilrob@2600:3c00::f03c:91ff:fedf:1d3d) has joined #ceph
[10:31] * mohmultihouse (~mohmultih@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[10:32] * ndru (~jawsome@104.236.94.35) Quit (synthon.oftc.net weber.oftc.net)
[10:32] * evilrob (~evilrob@2600:3c00::f03c:91ff:fedf:1d3d) Quit (synthon.oftc.net weber.oftc.net)
[10:32] * cephalobot (~cephalobo@li246-48.members.linode.com) Quit (synthon.oftc.net weber.oftc.net)
[10:34] * TMM (~hp@185.5.121.201) has joined #ceph
[10:34] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Remote host closed the connection)
[10:35] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:37] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[10:43] * LeaChim (~LeaChim@host86-168-126-119.range86-168.btcentralplus.com) has joined #ceph
[10:43] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[10:47] * pabluk is now known as pabluk_
[10:47] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[10:47] * cephalobot (~cephalobo@li246-48.members.linode.com) has joined #ceph
[10:48] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Remote host closed the connection)
[10:50] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[10:50] * mohmultihouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[10:52] <Be-El> vanham: great....so the next time i run into trouble i already have a master plan ;-)
[10:52] <vanham> And if you don't, you can call me! I'll find one for you!
[10:55] * penguinRaider (~KiKo@14.139.82.6) has joined #ceph
[10:55] <vanham> Man, heading out. Can't stay in this data center one more minute. Almost 24 hours working. I think it's a record
[10:55] <vanham> See you later! Thank you again!
[10:56] * b0e (~aledermue@213.95.25.82) has joined #ceph
[10:57] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[10:58] * KungFuHamster (~DJComet@4MJAAFU97.tor-irc.dnsbl.oftc.net) Quit ()
[10:58] * KristopherBel (~Scymex@relay1.tor.openinternet.io) has joined #ceph
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[11:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[11:02] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[11:06] * vanham (~vanham@199.59.96.208) Quit (Ping timeout: 480 seconds)
[11:08] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[11:08] <vikhyat> s3an2: \o/
[11:13] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Remote host closed the connection)
[11:13] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:14] * hommie (~hommie@2a00:ec8:404:1113:c572:5278:4451:1ca8) has joined #ceph
[11:15] * dvanders (~dvanders@2001:1458:202:225::101:124a) has joined #ceph
[11:16] <hommie> Guys, after the update to 0.94.7 (from 0.94.6), every time I replace a broken OSD (1 out of 300) I get flooded by "[WRN] failed to encode map eXXX with expected crc", and the number of blocked requests (> 32 secs) increases drastically, killing all radosgw sessions. Is this a new "known" feature?
[11:17] <badone> vikhyat: great stuff mate :)
[11:17] * dvanders_ (~dvanders@dvanders-pro.cern.ch) Quit (Ping timeout: 480 seconds)
[11:17] <vikhyat> badone: Hello buddy
[11:18] <badone> vikhyat: hey mate
[11:18] <vikhyat> badone: :) I hope you meant from mon one
[11:18] <badone> vikhyat: yes, s3an2 's problem
[11:18] <vikhyat> right
[11:18] <vikhyat> we have seen like this in backfill and recovery
[11:19] <badone> yeah, I thought you'd be the man
[11:19] <vikhyat> he he thanks buddy
[11:19] <badone> I've seen you and kefu working through quite a few mon compaction issues
[11:20] <vikhyat> right
[11:20] <badone> vikhyat: anyway, just wanted to say "nice job" :)
[11:20] <vikhyat> kefu: is rock star :D
[11:20] <badone> indeed he is
[11:20] <vikhyat> badone: thanks buddy
[11:20] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) has joined #ceph
[11:22] <kefu> vikhyat =)
[11:22] <vikhyat> kefu: \o/
[11:25] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[11:28] * KristopherBel (~Scymex@06SAADDX9.tor-irc.dnsbl.oftc.net) Quit ()
[11:28] * ade (~abradshaw@nat-pool-str-t.redhat.com) Quit (Ping timeout: 480 seconds)
[11:28] * measter (~Morde@tollana.enn.lu) has joined #ceph
[11:34] <Heebie> "mon compaction" sounds bad 'n' stuff? What might that be?
[11:36] * realitysandwich (~perry@b2b-78-94-59-114.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[11:39] * ade (~abradshaw@nat-pool-str-u.redhat.com) has joined #ceph
[11:41] * pabluk_ is now known as pabluk
[11:41] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[11:43] * kefu (~kefu@183.193.182.2) has joined #ceph
[11:44] * yk (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[11:47] * yatin (~yatin@161.163.44.8) has joined #ceph
[11:48] <Anticimex> how does one debug ceph? :)
[11:49] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[11:49] <Anticimex> getting segfaults after recent (~7 days) RH (7-series) packages from upstream when executing e.g. "rbd -p rbd-ssd ls"
[11:51] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[11:55] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[11:56] * kefu (~kefu@183.193.182.2) Quit (Ping timeout: 480 seconds)
[11:58] * measter (~Morde@4MJAAFVB1.tor-irc.dnsbl.oftc.net) Quit ()
[11:59] * kefu_ (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[12:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[12:02] * flisky (~Thunderbi@36.110.40.28) Quit (Ping timeout: 480 seconds)
[12:12] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[12:16] * rotbeard (~redbeard@aftr-109-90-233-106.unity-media.net) Quit (Quit: Leaving)
[12:21] <chrome0> A question about RGW and large number of objects. AIUI it's using 1 index bucket per swift container, correct?
[12:23] <chrome0> To support a large number of objects we therefore might want to shard those (rgw override bucket index max shards). Are there guidelines on how many shards are advisable?
[12:23] * Miouge (~Miouge@188.189.73.154) Quit (Quit: Miouge)
[12:23] <chrome0> Use case here is a write-heavy application, we eg. don't care much about listing as there's an in-app index for that
[12:24] * flisky (~Thunderbi@36.110.40.28) has joined #ceph
[12:25] * Miouge (~Miouge@188.189.73.154) has joined #ceph
[12:25] <etienneme> How many objects will you store?
[12:25] <chrome0> Long-term up to 200M
[12:26] <chrome0> But fairly small ~10k
[12:26] <etienneme> We do have a cluster with a little more than 200M objects and we use "num_shards": 8
[12:26] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:27] <etienneme> It works without issue
[12:27] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[12:27] <chrome0> Cool. I've been hearing something like 1-2M obj/shard which made me wonder
[12:28] * Lattyware (~Hidendra@91.109.29.120) has joined #ceph
[12:28] <etienneme> Well I think you should not have more than 1 million objects when you don't use sharding. But it worked great until many millions... :p
[12:30] <chrome0> Hm, I might be missing something - I thought if you turn off sharding you essentially have num_shards: 1 ?
[12:31] * Miouge (~Miouge@188.189.73.154) Quit (Quit: Miouge)
[12:31] <etienneme> Probably but i was not sure (so I said "don't use" ^^)
[12:31] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Ping timeout: 480 seconds)
[12:31] <chrome0> Heh fair enough :-)
[12:31] * Miouge (~Miouge@188.189.73.154) has joined #ceph
[12:32] <flaf> Hi.
[12:32] <chrome0> Would it be possible to share your approx. write throughput overall?
[12:32] * dgurtner (~dgurtner@178.197.233.142) Quit (Read error: No route to host)
[12:32] <etienneme> You can also specify "bucket_index_max_shards": 0 so... :) I don't really know
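(The knob chrome0 names goes in ceph.conf on the radosgw side and, in this era, only affects buckets created after it is set; existing bucket indexes keep their shard count. The section name is illustrative.)

    [client.rgw.gateway1]
            rgw override bucket index max shards = 8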
[12:36] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[12:36] * Miouge (~Miouge@188.189.73.154) Quit (Quit: Miouge)
[12:37] * Miouge (~Miouge@188.189.73.154) has joined #ceph
[12:44] * wjw-freebsd (~wjw@176.74.240.1) Quit (Quit: Nettalk6 - www.ntalk.de)
[12:44] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[12:45] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[12:45] * rendar (~I@host33-183-dynamic.46-79-r.retail.telecomitalia.it) has joined #ceph
[12:48] * Miouge (~Miouge@188.189.73.154) Quit (Quit: Miouge)
[12:53] * Miouge (~Miouge@188.189.73.154) has joined #ceph
[12:55] * IvanJobs (~ivanjobs@103.50.11.146) Quit ()
[12:58] * Lattyware (~Hidendra@4MJAAFVEI.tor-irc.dnsbl.oftc.net) Quit ()
[12:59] * Miouge (~Miouge@188.189.73.154) Quit (Quit: Miouge)
[12:59] * smokedmeets_ (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[13:00] * Miouge (~Miouge@188.189.73.154) has joined #ceph
[13:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[13:01] * haomaiwang (~haomaiwan@106.187.51.170) has joined #ceph
[13:02] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:02] * smokedmeets_ is now known as smokedmeets
[13:03] * Coestar (~Eric@4MJAAFVFO.tor-irc.dnsbl.oftc.net) has joined #ceph
[13:05] * hellertime (~Adium@72.246.3.14) has joined #ceph
[13:05] * itamarl (~itamarl@bzq-237-168-31-164.red.bezeqint.net) has joined #ceph
[13:05] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[13:05] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[13:05] * Miouge (~Miouge@188.189.73.154) Quit (Quit: Miouge)
[13:05] * Miouge (~Miouge@188.189.73.154) has joined #ceph
[13:07] * itamarl (~itamarl@bzq-237-168-31-164.red.bezeqint.net) Quit ()
[13:10] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[13:10] * huangjun (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[13:13] * liiwi (liiwi@idle.fi) Quit (Ping timeout: 480 seconds)
[13:16] * vicente_ (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:18] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[13:21] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[13:22] * rotbeard (~redbeard@aftr-109-90-233-106.unity-media.net) has joined #ceph
[13:26] * AscII (~marks@perseus.mschade.de) has left #ceph
[13:32] * Coestar (~Eric@4MJAAFVFO.tor-irc.dnsbl.oftc.net) Quit ()
[13:32] * starcoder (~Revo84@185.100.85.101) has joined #ceph
[13:34] * hommie (~hommie@2a00:ec8:404:1113:c572:5278:4451:1ca8) Quit (Remote host closed the connection)
[13:35] * dgurtner (~dgurtner@178.197.233.142) has joined #ceph
[13:36] <swami1> is encryption for data in rest supported in ceph?
[13:38] * yatin (~yatin@161.163.44.8) Quit (Remote host closed the connection)
[13:40] * Miouge_ (~Miouge@91.177.5.255) has joined #ceph
[13:40] * nisha (~nisha@112.110.104.228) has joined #ceph
[13:40] <Gugge-47527> "encryption for data in rest"?
[13:40] * penguinRaider (~KiKo@14.139.82.6) Quit (Ping timeout: 480 seconds)
[13:42] * nisha (~nisha@112.110.104.228) has left #ceph
[13:42] <BranchPredictor> swami1: http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/28644
[13:42] * Miouge (~Miouge@188.189.73.154) Quit (Ping timeout: 480 seconds)
[13:43] * Miouge (~Miouge@188.189.73.154) has joined #ceph
[13:45] * liiwi (liiwi@idle.fi) has joined #ceph
[13:45] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[13:46] * Miouge (~Miouge@188.189.73.154) Quit ()
[13:47] * Miouge (~Miouge@188.189.73.154) has joined #ceph
[13:47] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[13:48] * Miouge_ (~Miouge@91.177.5.255) Quit (Ping timeout: 480 seconds)
[13:48] <swami1> BranchPredictor: Thank you....yes I have seen this link - It says improving data at rest....does that mean ceph already supports it??
[13:48] <swami1> Gugge-47527: s/in/at
[13:50] * penguinRaider (~KiKo@172.87.224.66) has joined #ceph
[13:51] * Miouge_ (~Miouge@91.177.5.255) has joined #ceph
[13:53] <BranchPredictor> swami1: no, you need to put dm-crypt between ceph and drive(s), as Radoslaw wrote.
[13:55] * Miouge (~Miouge@188.189.73.154) Quit (Ping timeout: 480 seconds)
[13:55] * Miouge_ is now known as Miouge
[13:56] <swami1> BranchPredictor: Ok...if I add dm-crypt, will that do it... or is some more configuration needed?
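(ceph-disk and ceph-deploy can set up the dm-crypt layer at OSD creation time; a hedged sketch, with device and host names hypothetical.)

    ceph-disk prepare --dmcrypt /dev/sdb
    # or, via ceph-deploy:
    ceph-deploy osd prepare --dmcrypt node1:/dev/sdb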
[13:59] * huangjun (~kvirc@117.151.54.244) has joined #ceph
[14:00] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[14:01] * haomaiwang (~haomaiwan@106.187.51.170) Quit (Remote host closed the connection)
[14:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[14:02] * Jesse (~oftc-webi@175.100.202.254) has joined #ceph
[14:02] * starcoder (~Revo84@4MJAAFVGM.tor-irc.dnsbl.oftc.net) Quit ()
[14:02] * OODavo (~Linkshot@tor-exit.boingboing.net) has joined #ceph
[14:02] <Jesse> hello
[14:02] <Jesse> all
[14:03] * Jesse is now known as Guest2841
[14:03] <Guest2841> Where is "ceph daemon" command in ceph 10.2.1 ?
[14:04] <Guest2841> I get this
[14:04] <Guest2841> admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[14:04] <Guest2841> when I run " ceph daemon osd.0 config show"
[14:04] <The_Ball> Is there any timetable for adding namespaces to Ceph?
[14:07] <etienneme> Guest2841: Are you sure you have the right ID?
[14:08] <Guest2841> Yes,that's right
[14:08] <Guest2841> osd.1 osd.2 osd.3 ......
[14:08] <Guest2841> I run admin on mds server
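(Worth noting for the exchange above: ceph daemon talks to the daemon's local unix admin socket, so it only works on the host actually running that daemon; run from an mds server it cannot reach osd.0's socket. The socket path below is the default one.)

    # on the host where osd.0 runs:
    ls /var/run/ceph/ceph-osd.0.asok
    ceph daemon osd.0 config show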
[14:15] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Ping timeout: 480 seconds)
[14:23] <PoRNo-MoRoZ> can i use consumer-grade ssds for journals ?
[14:24] <PoRNo-MoRoZ> like samsung evo 850
[14:25] <PoRNo-MoRoZ> i'm gonna use it in front of 1tb 10k 2.5"
[14:25] <PoRNo-MoRoZ> like 1 ssd per 3 hdds
[14:25] <PoRNo-MoRoZ> is that okay ?
[14:25] <T1w> no
[14:26] <PoRNo-MoRoZ> why no ?
[14:26] <T1w> do not even attempt it
[14:26] <T1w> you WILL be sorry
[14:26] * deepthi (~deepthi@122.172.171.121) Quit (Quit: Leaving)
[14:26] <T1w> poor performance
[14:26] <T1w> and most important: they die
[14:27] <T1w> within a very very short time (depending on your cluster load, but it's bad!)
[14:28] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[14:29] <PoRNo-MoRoZ> well i was thinking about the same
[14:29] <PoRNo-MoRoZ> but boss said 'ask those guys on that channel' :D
[14:29] <The_Ball> I'm using these KC400 in a home cluster, http://www.kingston.com/en/ssd/business
[14:29] <The_Ball> They should be ok shouldn't they?
[14:31] <PoRNo-MoRoZ> what about Intel SSD DC P3500 Series ?
[14:32] <PoRNo-MoRoZ> or should i stick to 2.5" form-factor ?
[14:32] * OODavo (~Linkshot@7V7AAFLOA.tor-irc.dnsbl.oftc.net) Quit ()
[14:33] <Be-El> PoRNo-MoRoZ: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
[14:34] <PoRNo-MoRoZ> yep i can use google )
[14:34] <PoRNo-MoRoZ> i thought someone would give some kinda 'live story' :D
[14:34] <PoRNo-MoRoZ> anyway thanks
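(The test in the post Be-El linked boils down to a single-job synchronous 4k write run, roughly as below; the device name is a placeholder, and the run destroys data on it.)

    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test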
[14:35] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[14:37] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) Quit (Quit: Leaving.)
[14:37] * pepzi (~arsenaali@static-83-41-68-212.sadecehosting.net) has joined #ceph
[14:41] * kefu (~kefu@183.193.187.174) has joined #ceph
[14:42] * ibravo (~ibravo@72.198.142.104) Quit (Ping timeout: 480 seconds)
[14:42] * scg (~zscg@c-50-169-194-217.hsd1.ma.comcast.net) has joined #ceph
[14:47] <BranchPredictor> PoRNo-MoRoZ: evo 850 is a tlc ssd, with rated tbw = 150, you'll get around 550MB/s, and after writing 150TB of data they'll be gone.
[14:50] <BranchPredictor> in other words, it'll take around 4 days of non-stop writes to kill them.
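(The back-of-the-envelope behind that: 150 TB / 550 MB/s ≈ 2.7 × 10^5 s, i.e. a little over 3 days of sustained sequential writes, the same order as the figure quoted.)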
[14:51] * gauravbafna (~gauravbaf@49.32.0.108) Quit (Remote host closed the connection)
[14:51] <Be-El> the 850 pro might survive somewhat longer, but its performance is also bad
[14:51] <Be-El> the dc-pro series on the other hand might be an option
[14:51] <BranchPredictor> Be-El: aren't they limited by sata bus speed?
[14:51] <sugoruyo> hey folks, I was wondering if someone can help me figure out what is going on: I have a host in my cluster where all the OSDs are down+out even though the OSD processes are running. Based on logging, those OSDs seem to have lost contact with the rest of the cluster
[14:52] <Gugge-47527> ive killed dc s3510 in under 200 days :)
[14:52] <BranchPredictor> 6gb/s = ~768MB/s
[14:53] <Be-El> BranchPredictor: sure, and nvme ssds also have a lower latency in most cases
[14:57] * gauravbafna (~gauravbaf@49.32.0.108) has joined #ceph
[14:58] * icey_ (~Chris@pool-74-103-175-25.phlapa.fios.verizon.net) has joined #ceph
[14:59] * goretoxo (~psilva@84.124.11.230.static.user.ono.com) has joined #ceph
[14:59] * goretoxo (~psilva@84.124.11.230.static.user.ono.com) Quit ()
[15:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[15:01] * TMM (~hp@185.5.121.201) Quit (Ping timeout: 480 seconds)
[15:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[15:01] <sugoruyo> anyone have any idea why all OSDs on a host would be seen as down even though the processes are running and the host seems to work properly?
[15:05] <The_Ball> sugoruyo, keyrings?
[15:05] * gauravbafna (~gauravbaf@49.32.0.108) Quit (Ping timeout: 480 seconds)
[15:05] * mohmultihouse (~mohmultih@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[15:05] * vikhyat is now known as vikhyat|brb
[15:05] * icey (~Chris@0001bbad.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:07] * pepzi (~arsenaali@4MJAAFVJG.tor-irc.dnsbl.oftc.net) Quit ()
[15:07] * gauravbafna (~gauravbaf@49.32.0.108) has joined #ceph
[15:10] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:11] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[15:11] <sugoruyo> The_Ball: I don't think so, no evidence of anything changing regarding keyrings
[15:12] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) has joined #ceph
[15:12] <sugoruyo> there was a reset of network interfaces triggered by configuration management on the machine about 10 seconds before the OSDs started complaining about not getting HB messages from other OSDs
[15:12] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) Quit ()
[15:12] * georgem (~Adium@206.108.127.16) has joined #ceph
[15:15] * vikhyat|brb is now known as vikhyat
[15:15] * gauravbafna (~gauravbaf@49.32.0.108) Quit (Ping timeout: 480 seconds)
[15:17] * ibravo (~ibravo@72.83.69.64) has joined #ceph
[15:19] * mattbenjamin1 (~mbenjamin@12.118.3.106) has joined #ceph
[15:19] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) has joined #ceph
[15:19] <BranchPredictor> sugoruyo: try restarting those OSDs
[15:22] <sugoruyo> BranchPredictor: I have, nothing changes
[15:22] <BranchPredictor> sugoruyo: all interfaces are up and working?
[15:23] <sugoruyo> BranchPredictor: they are all up but I've just tried pinging from one host to the other
[15:24] <sugoruyo> public net works, cluster net doesn't
[15:24] <sugoruyo> between other hosts both nets work
[15:24] <BranchPredictor> sugoruyo: well, you just answered yourself. you need to fix cluster net.
[15:26] <sugoruyo> hmmm, ethtool says "Link Detected: no", I guess layer 1 network problem...
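For anyone hitting the same symptom, a quick check sequence along the lines of the diagnosis above (interface name and peer address are placeholders):

    # Physical link state on the cluster-network interface
    ethtool eth1 | grep 'Link detected'
    # Can we reach a peer's cluster-network address at all?
    ping -c 3 -I eth1 <peer-cluster-ip>
    # Which OSDs do the monitors currently consider down?
    ceph osd tree | grep down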
[15:26] * TMM (~hp@185.5.121.201) has joined #ceph
[15:26] * Drankis (~drankis__@mikrotik.hostnet.lv) has joined #ceph
[15:33] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) Quit (Quit: Leaving)
[15:37] * Solvius (~djidis__@edwardsnowden0.torservers.net) has joined #ceph
[15:37] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[15:39] * rendar (~I@host33-183-dynamic.46-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[15:41] * kawa2014 (~kawa@212.110.41.244) Quit (Quit: Leaving)
[15:42] * Racpatel (~Racpatel@2601:87:3:3601::4edb) has joined #ceph
[15:43] * haomaiwang (~haomaiwan@106.187.51.170) has joined #ceph
[15:49] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) Quit (Quit: smokedmeets)
[15:50] * haomaiwang (~haomaiwan@106.187.51.170) Quit (Remote host closed the connection)
[15:50] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[15:51] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[15:51] * haomaiwang (~haomaiwan@106.187.51.170) has joined #ceph
[15:51] * TMM_ (~hp@185.5.121.201) has joined #ceph
[15:51] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[15:51] * ceph_chengpeng (~ceph_chen@180.174.254.217) has joined #ceph
[15:51] * TMM_ (~hp@185.5.121.201) Quit ()
[15:52] <ceph_chengpeng> is anyone free?
[15:52] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[15:53] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[15:54] <ceph_chengpeng> I have a question!
[15:54] <etienneme> just ask :)
[15:54] <ceph_chengpeng> ceph-osd.log :cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
[15:55] <ceph_chengpeng> is it ntp?
[15:55] * TMM (~hp@185.5.121.201) Quit (Ping timeout: 480 seconds)
[15:55] <ceph_chengpeng> I searched the source code and found it in cephx
[15:56] * flisky (~Thunderbi@36.110.40.28) Quit (Ping timeout: 480 seconds)
[15:58] * bene (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[15:58] * dgurtner (~dgurtner@178.197.233.142) Quit (Ping timeout: 480 seconds)
[15:59] <etienneme> It's a cryptographic protocol used for auth, maybe there is an issue with your config?
[15:59] * hellertime (~Adium@72.246.3.14) has left #ceph
[15:59] <ceph_chengpeng> I don't think it's a config issue
[15:59] * dvanders_ (~dvanders@pb-d-128-141-246-104.cern.ch) has joined #ceph
[16:00] <ceph_chengpeng> because the cluster has been running for a long time
[16:00] <ceph_chengpeng> auth/cephx/CephxProtocol.cc:478: ldout(cct, 0) << "verify_reply couldn't decrypt with error: " << error << dendl;
[16:00] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:00] <ceph_chengpeng> this is source code
[16:01] * haomaiwang (~haomaiwan@106.187.51.170) Quit (Remote host closed the connection)
[16:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[16:02] * TMM (~hp@185.5.121.201) has joined #ceph
[16:02] <etienneme> Have you upgraded something?
[16:02] <ceph_chengpeng> no
[16:02] * dvanders (~dvanders@2001:1458:202:225::101:124a) Quit (Ping timeout: 480 seconds)
[16:02] <ceph_chengpeng> I changed nothing about the cluster
[16:03] <vikhyat> ceph_chengpeng: mostly it is time sync issue
[16:03] <vikhyat> are all of your nodes synced with time?
[16:04] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[16:04] <vikhyat> including Mon and OSDs
[16:04] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[16:04] <ceph_chengpeng> yes, I agree with vikhyat
[16:04] <vikhyat> once you sync the time
[16:04] <ceph_chengpeng> the client blocked ops for some minutes,
[16:05] <vikhyat> restart the daemons which are facing this issue
[16:05] * EinstCrazy (~EinstCraz@116.224.225.85) has joined #ceph
[16:05] <vikhyat> it could be mon or osd
[16:05] <ceph_chengpeng> yes
[16:05] <ceph_chengpeng> and I updated the time from the ntpd server
[16:06] <ceph_chengpeng> all node include mon and osd
[16:06] <vikhyat> yes you can take care of that with ntpserver running in your environment
[16:06] <vikhyat> maybe ntpdate -u <ntpserver hostname>
[16:06] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[16:06] <ceph_chengpeng> I want to confirm this issue
[16:07] * Solvius (~djidis__@4MJAAFVMO.tor-irc.dnsbl.oftc.net) Quit ()
[16:09] <ceph_chengpeng> thank you vikhyat, etienneme
[16:09] <vikhyat> \o/
[16:10] <ceph_chengpeng> I will follow the tracks of this issue
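A rough sequence for the clock-skew case discussed above (the NTP server name and daemon ids are placeholders; the unit names assume a Jewel-era systemd install):

    # How far has this node drifted from its NTP peers?
    ntpq -p
    # One-off resync against your NTP server
    ntpdate -u <ntpserver hostname>
    # Then restart whichever daemons were failing cephx decryption
    systemctl restart ceph-mon@<host>
    systemctl restart ceph-osd@<id>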
[16:10] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[16:10] * bene (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:11] <etienneme> :)
[16:11] * dgurtner (~dgurtner@194.230.159.70) has joined #ceph
[16:12] * ceph_ (~chris@180.168.197.82) has joined #ceph
[16:13] * rotbeard (~redbeard@aftr-109-90-233-106.unity-media.net) Quit (Quit: Leaving)
[16:15] * andreww (~xarses@64.124.158.100) has joined #ceph
[16:15] * matj345314 (~matejz@141.255.254.208) Quit (Quit: matj345314)
[16:18] * swami1 (~swami@49.32.0.112) Quit (Quit: Leaving.)
[16:18] * ceph_devel (~chris@180.168.197.82) Quit (Ping timeout: 480 seconds)
[16:20] <analbeard> hi guys, we had some issues with a pool's metadata a while back which we could only solve by creating a new pool and copying all the rbds over to the new pool. i'd like to remove the old pool now and i'm wondering if there's any reliance on the old pool which I may not be able to see
[16:20] <analbeard> basically, if i remove the old pool is it going to bork my current pool?
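One way to sanity-check that nothing still depends on the old pool before deleting it -- a sketch with placeholder pool names; the delete is irreversible:

    # Anything still living in the old pool?
    rbd -p oldpool ls
    rados -p oldpool ls | head
    ceph df
    # Do any images in the new pool have clone parents in the old one?
    rbd -p newpool ls | while read img; do rbd info newpool/$img | grep parent; done
    # Only once all of the above comes back clean:
    ceph osd pool delete oldpool oldpool --yes-i-really-really-mean-it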
[16:20] * branto (~branto@nat-pool-brq-t.redhat.com) has left #ceph
[16:21] * NTTEC (~nttec@122.53.162.158) Quit (Remote host closed the connection)
[16:24] <PoRNo-MoRoZ> BranchPredictor thanks dude :)
[16:25] <PoRNo-MoRoZ> i'm not sure about ssd journals at all at this moment
[16:26] * vata (~vata@207.96.182.162) has joined #ceph
[16:26] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[16:27] * joelc (~joelc@rrcs-71-41-248-34.sw.biz.rr.com) has joined #ceph
[16:28] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[16:29] * yatin (~yatin@203.212.245.90) has joined #ceph
[16:30] * dgurtner_ (~dgurtner@178.197.233.142) has joined #ceph
[16:31] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[16:31] * Guest2841 (~oftc-webi@175.100.202.254) Quit (Ping timeout: 480 seconds)
[16:32] * dgurtner (~dgurtner@194.230.159.70) Quit (Ping timeout: 480 seconds)
[16:32] * kefu (~kefu@183.193.187.174) Quit (Ping timeout: 480 seconds)
[16:33] * amj-47292 (~Anders@91.193.136.10) has joined #ceph
[16:34] * amj-47292 (~Anders@91.193.136.10) has left #ceph
[16:36] * wushudoin (~wushudoin@2601:646:9501:d2b2:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:41] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[16:41] * xolotl (~Xylios@atlantic850.dedicatedpanel.com) has joined #ceph
[16:41] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[16:44] * icey_ (~Chris@pool-74-103-175-25.phlapa.fios.verizon.net) Quit (Read error: No route to host)
[16:45] * icey (~Chris@pool-74-103-175-25.phlapa.fios.verizon.net) has joined #ceph
[16:45] * yatin (~yatin@203.212.245.90) Quit (Ping timeout: 480 seconds)
[16:47] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) has joined #ceph
[16:53] * kutija (~kutija@89.216.27.139) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:54] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:54] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:54] * kefu_ is now known as kefu
[16:54] * Wahmed (~wahmed@206.174.203.195) has joined #ceph
[16:55] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[16:55] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[16:58] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[17:01] * haomaiwang (~haomaiwan@li401-170.members.linode.com) has joined #ceph
[17:01] * chardan (~chardan@173.240.241.94) has joined #ceph
[17:02] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[17:02] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[17:03] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[17:03] * georgem (~Adium@206.108.127.16) has joined #ceph
[17:07] * georgem1 (~Adium@206.108.127.16) has joined #ceph
[17:07] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[17:09] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[17:11] * xolotl (~Xylios@4MJAAFVP7.tor-irc.dnsbl.oftc.net) Quit ()
[17:12] * dvanders (~dvanders@pb-d-128-141-3-250.cern.ch) has joined #ceph
[17:13] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[17:13] * dvanders_ (~dvanders@pb-d-128-141-246-104.cern.ch) Quit (Ping timeout: 480 seconds)
[17:17] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[17:19] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving.)
[17:19] * kefu (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[17:20] * kefu (~kefu@205.147.105.112) has joined #ceph
[17:24] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) Quit (Ping timeout: 480 seconds)
[17:29] * vanham (~vanham@12.199.84.146) has joined #ceph
[17:33] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) Quit (Quit: m0zes__)
[17:35] * EinstCrazy (~EinstCraz@116.224.225.85) Quit (Remote host closed the connection)
[17:37] * Miouge_ (~Miouge@91.177.58.174) has joined #ceph
[17:37] * rendar (~I@host11-179-dynamic.27-79-r.retail.telecomitalia.it) has joined #ceph
[17:37] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[17:38] * Miouge_ (~Miouge@91.177.58.174) Quit ()
[17:39] * Miouge (~Miouge@91.177.5.255) Quit (Ping timeout: 480 seconds)
[17:40] * Miouge (~Miouge@91.177.58.174) has joined #ceph
[17:41] * Rehevkor (~Swompie`@192.42.115.101) has joined #ceph
[17:45] * Miouge_ (~Miouge@91.177.58.174) has joined #ceph
[17:45] * Miouge (~Miouge@91.177.58.174) Quit (Read error: Connection reset by peer)
[17:45] * Miouge_ is now known as Miouge
[17:46] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:50] * huangjun (~kvirc@117.151.54.244) Quit (Ping timeout: 480 seconds)
[17:57] * ivancich (~ivancich@12.118.3.106) has joined #ceph
[17:58] * ade (~abradshaw@nat-pool-str-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:00] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[18:00] * haomaiwang (~haomaiwan@li401-170.members.linode.com) Quit (Remote host closed the connection)
[18:00] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) has joined #ceph
[18:02] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[18:04] * chardan (~chardan@173.240.241.94) has left #ceph
[18:07] * joelc (~joelc@rrcs-71-41-248-34.sw.biz.rr.com) Quit (Remote host closed the connection)
[18:11] * Rehevkor (~Swompie`@06SAADEI5.tor-irc.dnsbl.oftc.net) Quit ()
[18:11] * cryptk (~Frostshif@193.90.12.90) has joined #ceph
[18:11] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[18:12] * Brochacho (~alberto@2601:243:504:6aa:871:4052:ae6b:5f1c) has joined #ceph
[18:17] * georgem1 (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[18:17] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[18:17] * flisky (~Thunderbi@106.37.236.188) has joined #ceph
[18:18] * flisky (~Thunderbi@106.37.236.188) Quit ()
[18:18] * ceph_chengpeng (~ceph_chen@180.174.254.217) Quit (Read error: Connection reset by peer)
[18:18] * ceph_chengpeng (~ceph_chen@218.83.112.81) has joined #ceph
[18:18] * debian112 (~bcolbert@24.126.201.64) Quit ()
[18:22] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[18:25] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:28] * gauravbafna (~gauravbaf@122.167.103.2) has joined #ceph
[18:28] * kefu (~kefu@205.147.105.112) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:30] * kefu (~kefu@183.193.187.174) has joined #ceph
[18:41] <The_Ball> Can I change the default name from client.admin? I tried setting name = client.foo in /etc/ceph/ceph.conf's global section, but it doesn't seem to apply
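As far as I know there is no global name = option; the client identity is normally chosen per invocation instead. A sketch, assuming you want a client.foo identity (the caps shown are illustrative):

    # Create a key for the new identity and store its keyring
    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw' \
        -o /etc/ceph/ceph.client.foo.keyring
    # Use it explicitly...
    ceph --id foo status
    # ...or make it the default for the current shell
    export CEPH_ARGS="--id foo"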
[18:41] * cryptk (~Frostshif@06SAADEKH.tor-irc.dnsbl.oftc.net) Quit ()
[18:42] * secate (~Secate@196-210-55-165.dynamic.isadsl.co.za) has joined #ceph
[18:43] * shylesh (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[18:43] * swami1 (~swami@27.7.169.107) has joined #ceph
[18:45] * Drankis (~drankis__@mikrotik.hostnet.lv) Quit (Ping timeout: 480 seconds)
[18:46] * Arcturus (~Spessu@chomsky.torservers.net) has joined #ceph
[18:48] * swami2 (~swami@106.216.160.79) has joined #ceph
[18:51] * swami1 (~swami@27.7.169.107) Quit (Ping timeout: 480 seconds)
[18:54] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[18:55] * gauravbafna (~gauravbaf@122.167.103.2) Quit (Remote host closed the connection)
[18:58] * Drankis (~drankis__@95.68.39.16) has joined #ceph
[18:58] * vanham (~vanham@12.199.84.146) Quit (Ping timeout: 480 seconds)
[18:59] * joshd (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[19:03] * pabluk is now known as pabluk_
[19:03] * smokedmeets (~smokedmee@c-73-158-201-226.hsd1.ca.comcast.net) has joined #ceph
[19:04] * swami1 (~swami@27.7.169.107) has joined #ceph
[19:05] * kefu (~kefu@183.193.187.174) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:07] * joelc (~joelc@rrcs-71-41-248-34.sw.biz.rr.com) has joined #ceph
[19:09] * secate (~Secate@196-210-55-165.dynamic.isadsl.co.za) Quit (Remote host closed the connection)
[19:10] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[19:10] * dgurtner_ (~dgurtner@178.197.233.142) Quit (Ping timeout: 480 seconds)
[19:10] * swami2 (~swami@106.216.160.79) Quit (Ping timeout: 480 seconds)
[19:15] * overclk (~quassel@117.202.96.124) Quit (Read error: Connection reset by peer)
[19:15] * joelc (~joelc@rrcs-71-41-248-34.sw.biz.rr.com) Quit (Ping timeout: 480 seconds)
[19:16] * Arcturus (~Spessu@4MJAAFVWP.tor-irc.dnsbl.oftc.net) Quit ()
[19:16] * Jones (~Zombiekil@5.135.65.145) has joined #ceph
[19:16] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (Quit: WeeChat 1.5)
[19:16] * swami1 (~swami@27.7.169.107) Quit (Ping timeout: 480 seconds)
[19:17] * swami1 (~swami@106.216.166.116) has joined #ceph
[19:20] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[19:21] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[19:22] * NTTEC (~nttec@122.53.162.158) has joined #ceph
[19:25] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:28] * swami1 (~swami@106.216.166.116) Quit (Ping timeout: 480 seconds)
[19:30] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[19:30] * NTTEC (~nttec@122.53.162.158) Quit (Ping timeout: 480 seconds)
[19:30] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) Quit (Quit: Bye guys!)
[19:31] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[19:33] * Drankis (~drankis__@95.68.39.16) Quit (Ping timeout: 480 seconds)
[19:41] * mykola (~Mikolaj@91.245.76.80) has joined #ceph
[19:41] * mgolub (~Mikolaj@91.245.76.80) has joined #ceph
[19:42] * mgolub (~Mikolaj@91.245.76.80) Quit ()
[19:42] * Drankis (~drankis__@mikrotik.hostnet.lv) has joined #ceph
[19:43] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Remote host closed the connection)
[19:43] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:44] <chrome0> How can I configure the filestore merge threshold (and likewise the split multiple) for an existing cluster?
[19:46] * Jones (~Zombiekil@06SAADENP.tor-irc.dnsbl.oftc.net) Quit ()
[19:46] * jakekosberg (~mollstam@politkovskaja.torservers.net) has joined #ceph
[19:49] <chrome0> I was thinking I'd just need to recreate OSDs but somehow that doesn't work out
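For reference, these are ordinary ceph.conf options, but they only steer future split/merge decisions; existing filestore directory trees are not reshuffled until a PG next splits or merges, which may be why recreating OSDs seemed necessary. A sketch with illustrative values:

    [osd]
    # files per subdirectory before filestore splits it further
    filestore split multiple = 8
    # merge threshold; a negative value disables merging entirely
    filestore merge threshold = 40

    # can also be injected at runtime, again only affecting new decisions:
    ceph tell osd.* injectargs '--filestore-split-multiple 8 --filestore-merge-threshold 40'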
[19:58] * Drankis (~drankis__@mikrotik.hostnet.lv) Quit (Ping timeout: 480 seconds)
[19:58] * karnan (~karnan@106.51.130.73) Quit (Quit: Leaving)
[20:00] * joelc (~joelc@rrcs-71-41-248-34.sw.biz.rr.com) has joined #ceph
[20:01] * rakeshgm (~rakesh@121.244.87.117) Quit (Remote host closed the connection)
[20:03] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[20:10] * squizzi (~squizzi@107.13.31.195) Quit (Remote host closed the connection)
[20:16] * jakekosberg (~mollstam@7V7AAFL76.tor-irc.dnsbl.oftc.net) Quit ()
[20:16] * allenmelon1 (~anadrom@4MJAAFV0P.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:16] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e4fc:ae1f:ea84:d75) Quit (Ping timeout: 480 seconds)
[20:18] * shylesh__ (~shylesh@45.124.225.104) has joined #ceph
[20:21] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[20:32] * gauravbafna (~gauravbaf@122.172.242.111) has joined #ceph
[20:39] * gauravba_ (~gauravbaf@122.172.201.110) has joined #ceph
[20:40] * gauravbafna (~gauravbaf@122.172.242.111) Quit (Ping timeout: 480 seconds)
[20:42] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) Quit (Quit: m0zes__)
[20:46] * allenmelon1 (~anadrom@4MJAAFV0P.tor-irc.dnsbl.oftc.net) Quit ()
[20:46] * PappI (~utugi____@tor-exit-node--proxy.scalaire.com) has joined #ceph
[20:47] * gauravba_ (~gauravbaf@122.172.201.110) Quit (Ping timeout: 480 seconds)
[20:49] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[20:51] * matj345314 (~matj34531@element.planetq.org) has joined #ceph
[20:51] <devicenull> so, I'm having an issue where some part of the OS/ceph is chowing my journals back to root:disk
[20:51] <devicenull> which prevents OSDs from starting up
[20:52] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) has joined #ceph
[20:52] <devicenull> is there some part of ceph that would fix this?
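On Jewel this is commonly the udev rules re-chowning devices at boot: if a journal partition lacks the Ceph journal partition type GUID, the packaged rules never set it to ceph:ceph and it falls back to root:disk. A hedged sketch of the usual fix (device and partition number are placeholders; verify the GUID against your 95-ceph-osd.rules):

    # Tag partition 1 of /dev/sdX with the Ceph journal type GUID
    sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdX
    # Re-run the udev rules so ownership is applied
    udevadm trigger
    # Stopgap until then: chown by hand before starting the OSD
    chown ceph:ceph /dev/sdX1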
[20:55] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:01] * debian112 (~bcolbert@24.126.201.64) Quit (Ping timeout: 480 seconds)
[21:03] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[21:09] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[21:10] * georgem (~Adium@104-222-119-175.cpe.teksavvy.com) Quit (Quit: Leaving.)
[21:10] * georgem (~Adium@206.108.127.16) has joined #ceph
[21:15] * jmp242 (~kvirc@lnx6187.classe.cornell.edu) has joined #ceph
[21:16] * antongribok (~antongrib@216.207.42.140) has joined #ceph
[21:16] * PappI (~utugi____@4MJAAFV16.tor-irc.dnsbl.oftc.net) Quit ()
[21:25] * mykola (~Mikolaj@91.245.76.80) Quit (Quit: away)
[21:38] * Wahmed (~wahmed@206.174.203.195) Quit (Quit: Nettalk6 - www.ntalk.de)
[21:41] * cathode (~cathode@50.232.215.114) has joined #ceph
[21:42] * Wahmed (~wahmed@206.174.203.195) has joined #ceph
[21:46] * inf_b (~o_O@2a02:908:c30:31c0:e119:fa97:f6d6:8a7) has joined #ceph
[21:47] * inf_b (~o_O@2a02:908:c30:31c0:e119:fa97:f6d6:8a7) Quit (Remote host closed the connection)
[21:47] * inf_b (~o_O@2a02:908:c30:31c0:e119:fa97:f6d6:8a7) has joined #ceph
[21:48] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) has joined #ceph
[21:50] * matj345314 (~matj34531@element.planetq.org) Quit (Quit: matj345314)
[22:00] * dvanders_ (~dvanders@2001:1458:202:225::101:124a) has joined #ceph
[22:01] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[22:04] * Miouge (~Miouge@91.177.58.174) Quit (Quit: Miouge)
[22:07] * dvanders (~dvanders@pb-d-128-141-3-250.cern.ch) Quit (Ping timeout: 480 seconds)
[22:07] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[22:07] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[22:15] <T1> PoRNo-MoRoZ: regarding samsung / desktop ssds - take a look at the mailing list - there are quite a few threads where people talk about their experience with the samsung 840/850 evo and pro in particular
[22:16] <T1> after reading (and talking to others in this channel) I quickly changed my mind about using a bunch of samsung 840 pros I had lying around from a bunch of laptops that were scrapped
[22:17] <T1> one person had 30+ 840 pros die on him within 6 months - out of 80 or so
[22:17] <T1> so after those 6 months he changed every remaining samsung to something else
[22:18] <T1> others have had equally terrible experiences
[22:18] <T1> .. and do keep in mind that if you lose a journal disk, you lose every OSD that uses that disk for its journal
[22:28] * Miouge (~Miouge@91.177.58.174) has joined #ceph
[22:28] * vata (~vata@207.96.182.162) has joined #ceph
[22:29] <darkfader> oh, they're fine if you accept being held liable by your employer when things go really bad
[22:29] <darkfader> and wants things to never work well
[22:30] <darkfader> scnr :)
[22:30] * KindOne_ (~KindOne@h90.31.141.67.dynamic.ip.windstream.net) has joined #ceph
[22:34] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:34] * KindOne_ is now known as KindOne
[22:35] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[22:36] * inf_b (~o_O@2a02:908:c30:31c0:e119:fa97:f6d6:8a7) Quit (Remote host closed the connection)
[22:36] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[22:37] * Miouge (~Miouge@91.177.58.174) Quit (Quit: Miouge)
[22:41] * antongribok (~antongrib@216.207.42.140) Quit (Quit: Leaving...)
[22:42] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[22:42] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[22:46] * rapedex (~TomyLobo@customer-46-39-102-250.stosn.net) has joined #ceph
[22:46] * allaok (~allaok@ARennes-658-1-231-16.w2-13.abo.wanadoo.fr) has joined #ceph
[22:47] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Remote host closed the connection)
[22:48] * gregmark (~Adium@68.87.42.115) has joined #ceph
[22:58] * penguinRaider (~KiKo@172.87.224.66) Quit (Ping timeout: 480 seconds)
[23:01] * joelc (~joelc@rrcs-71-41-248-34.sw.biz.rr.com) Quit (Remote host closed the connection)
[23:10] * ade (~abradshaw@tmo-102-196.customers.d1-online.com) has joined #ceph
[23:10] * ade (~abradshaw@tmo-102-196.customers.d1-online.com) Quit ()
[23:11] * penguinRaider (~KiKo@69.163.33.182) has joined #ceph
[23:16] * rapedex (~TomyLobo@7V7AAFMG0.tor-irc.dnsbl.oftc.net) Quit ()
[23:19] * mattbenjamin1 (~mbenjamin@12.118.3.106) Quit (Quit: Leaving.)
[23:23] * dgurtner (~dgurtner@178.197.239.239) has joined #ceph
[23:28] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) Quit (Quit: m0zes__)
[23:29] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) has joined #ceph
[23:29] * scg (~zscg@c-50-169-194-217.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[23:29] * scg (~zscg@c-50-169-194-217.hsd1.ma.comcast.net) has joined #ceph
[23:31] * shylesh__ (~shylesh@45.124.225.104) Quit (Remote host closed the connection)
[23:31] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[23:33] * bene (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[23:34] * bene (~bene@nat-pool-bos-t.redhat.com) Quit ()
[23:36] <scg> will bluestore be production ready in the next Ceph release?
[23:37] * m0zes__ (~mozes@ip70-179-131-225.fv.ks.cox.net) Quit (Ping timeout: 480 seconds)
[23:40] <MentalRay> quick question
[23:42] <MentalRay> has anyone played with osd_recovery_delay_start so far?
[23:46] <bla> when trying to mount my cephfs (first time) i get a 'mount error 5 = Input/output error'
[23:46] * chrisinajar (~TheDoudou@7V7AAFMJU.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:50] <bla> oh it looks like there is no mds daemon running
[23:55] <bla> the systemd entry is missing
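Mount error 5 on a first CephFS mount with no running MDS matches that diagnosis; a quick check sequence (host name is a placeholder):

    # Does the cluster see any MDS at all?
    ceph mds stat
    # Is the daemon installed/enabled on the intended host?
    systemctl status ceph-mds@<host>
    # If it was never deployed, create one, e.g. with ceph-deploy:
    ceph-deploy mds create <host>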

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.