#ceph IRC Log

Index

IRC Log for 2016-10-01

Timestamps are in GMT/BST.

[0:02] <doppelgrau> more PGs = more equal data distribution
[0:03] * scuttlemonkey is now known as scuttle|afk
[0:03] <doppelgrau> but more PGs = (a bit) more cpu and memory consumption at the osds
[0:03] <blizzow> It's about as clear as mud. The calculator says 2048, the placement-groups page in the doc recommends 4096. And the documentation makes it sound like recovery is there are less PGs per OSD. I have 48 OSDs
[0:04] <blizzow> *sound like recovery is faster if there are less PGs per OSD
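[note] The figures being compared roughly follow the usual sizing rule of ~100 PGs per OSD divided by the replica count, rounded up to a power of two. Assuming 3 replicas (not stated here), the calculator's number works out as:

    (48 OSDs * 100) / 3 replicas ~= 1600  ->  next power of two = 2048

The placement-groups doc page of that era instead gave bracketed values by OSD count (4096 for roughly 10-50 OSDs), which is likely where the larger recommendation comes from.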
[0:04] * wak-work (~wak-work@2620:15c:2c5:3:2497:7a21:8815:f0e7) Quit (Remote host closed the connection)
[0:05] * wak-work (~wak-work@2620:15c:2c5:3:2497:7a21:8815:f0e7) has joined #ceph
[0:06] <T1> well.. do you have enough memory to give each OSD at least 1GB on each node?
[0:06] <T1> or possibly 2GB per OSD per node?
[0:06] <T1> do you use erasure coding pools?
[0:08] * davidz (~davidz@2605:e000:1313:8003:142:1aeb:1be8:2a10) has joined #ceph
[0:09] <blizzow> Yes, I have the spare RAM. Each OSD node is currently running 12GB RAM. Except for my 3 large capacity OSD nodes, which run 48GB RAM for the node with 32TB of total storage and 24GB RAM for the nodes with 16TB storage.
[0:09] <T1> then go with 4096
[0:11] <T1> a higher number of PGs means CRUSH must calculate placement a bit more often, but it's better to have enough than to be undersized
[0:12] <blizzow> crap, I just resized to 2048 per the calculator and am 80% misplaced now. Guess I'll wait for the rebalance to finish and up it again.
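[note] A minimal sketch of the resize blizzow describes, assuming a pool named "rbd" (the pool name is hypothetical). pg_num is raised first and pgp_num must follow before data actually rebalances; at this point in Ceph's history pg_num could only be increased, never decreased:

    ceph osd pool get rbd pg_num          # shows the current value (512 before the change)
    ceph osd pool set rbd pg_num 2048     # create the new placement groups
    ceph osd pool set rbd pgp_num 2048    # let CRUSH start moving data onto them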
[0:12] * Nanobot (~mason@46.166.138.136) Quit ()
[0:12] <T1> what did you have before the change?
[0:14] <doppelgrau> in real life, I've seen no problems with a high number of pgs, if you have enough memory - with about 600 PGs/OSD and some failures during backfill, each OSD used about 4GB memory
[0:14] <T1> .. nor have I
[0:19] * j3roen (~j3roen@93.188.248.149) Quit (Ping timeout: 480 seconds)
[0:20] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Ping timeout: 480 seconds)
[0:23] <blizzow> T1 I set up my pools using the default. I assume that's not erasure code.
[0:23] <T1> no, most definitely not
[0:23] <T1> but how many PGs before chanign to 2048?
[0:23] <T1> changing even
[0:23] <blizzow> 512
[0:24] <T1> oh well.. :)
[0:24] * j3roen (~j3roen@93.188.248.149) has joined #ceph
[0:25] <blizzow> oh well what?
[0:35] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[0:36] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) Quit (Quit: Leaving)
[0:38] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[0:42] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[0:43] * badone (~badone@66.187.239.16) Quit (Quit: k?thxbyebyenow)
[0:53] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[0:53] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[0:56] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[1:03] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) has joined #ceph
[1:10] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:14] * jermudgeon (~jhaustin@31.207.56.59) has joined #ceph
[1:22] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[1:29] * oms101 (~oms101@p20030057EA007900C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:32] * [0x4A6F]_ (~ident@p4FC26EC2.dip0.t-ipconnect.de) has joined #ceph
[1:35] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:35] * [0x4A6F]_ is now known as [0x4A6F]
[1:38] * oms101 (~oms101@p20030057EA6F0200C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:51] <evilrob> so it was suggested that we have too many OSDs per server and that may be the cause of our OSDs dying on us under load.
[1:52] <evilrob> we've got 30 OSDs on each of our 5 storage nodes
[1:52] <jermudgeon> is bandwidth between nodes constrained?
[1:52] <jermudgeon> what is process/cpu load like on the nodes? RAM?
[1:53] * xarses (~xarses@64.124.158.3) Quit (Ping timeout: 480 seconds)
[1:54] <evilrob> we've got 40Gb between nodes.
[1:54] * wushudoin (~wushudoin@2601:646:8200:c9f0:2ab2:bdff:fe0b:a6ee) Quit (Quit: Leaving)
[1:54] <evilrob> we're not seeing near that in IO from clients or recovery at the moment
[1:54] <evilrob> system load today was 800+ when I reweighted some OSDs to try to move data off them.
[1:54] <jermudgeon> ouch!
[1:54] <evilrob> but typically runs 20ish
[1:55] <evilrob> ouch is right.
[1:55] <jermudgeon> how many cores?
[1:55] <jermudgeon> have you run atop on the nodes?
[1:55] <evilrob> 48cores each, 512GB RAM
[1:55] <evilrob> installing it now
[1:55] <jermudgeon> that seems like tons of ram per osd
[1:58] <evilrob> the storage we inherited on these is all SAN based. it's looking in atop like those are all really busy
[1:58] <jermudgeon> slow IO?
[1:58] <jermudgeon> atop is pretty handy for showing how CPU time/wait states correlate to specific drives/processes
[1:59] <jermudgeon> wait, OSDs are on a SAN? I'm confused
[1:59] <evilrob> https://i.imgur.com/hv0oJaa.png
[1:59] <evilrob> yeah... we got some hardware handed to us without our input. so we carved out what's essentially JBOD to OSD nodes
[2:00] <jermudgeon> Gotcha.
[2:00] <jermudgeon> I don't think this is a... supported ceph architecture :)
[2:00] <evilrob> we're bringing in 16 2U boxen with 10 8TB disks each
[2:00] <jermudgeon> that sounds more like it!
[2:01] <jermudgeon> what interface to the SAN?
[2:01] <evilrob> 10Gb fabric
[2:01] <jermudgeon> so it should be able to do more than the ~80 MB/sec you're seeing on writes, despite the JBOD issue
[2:02] <evilrob> there are 4 bonded together for each blade chassis, but you know that never works out linearly
[2:02] <jermudgeon> no kidding
[2:03] <evilrob> even contemplating some 1U boxen to map these luns to instead of blades. the blades we have are awfully beefy
[2:04] <jermudgeon> so it's one blade per JBOD platter?
[2:04] <evilrob> though the new boxen we're bringing in should get us another 350TB usable... might just use those SAN LUNs for elasticsearch
[2:04] <evilrob> no, one blade is 30 OSDs
[2:05] <evilrob> each OSD is actually 2 drives. it's the closest we could come to JBOD with this hardware
[2:05] <jermudgeon> gotcha. ceph usually has a 1:1 ratio of drives to osd
[2:05] <evilrob> the system thinks it's a single drive
[2:05] <evilrob> it's presented as a single lun
[2:05] <jermudgeon> so you have no idea which platter gets hit for a specific carveout?
[2:06] <evilrob> oh yes. we know. we didn't do something silly like stripe across them all
[2:06] <jermudgeon> ok
[2:06] <jermudgeon> well then it still doesn't quite explain why performance is abysmal
[2:06] <jermudgeon> have you watched the osd logs?
[2:06] <evilrob> a pair of 4TB drives presents a single 8TB LUN which is mounted and run as one OSD
[2:06] <jermudgeon> yeah, that sounds a little more sane than what I imagined at first :)
[2:08] <evilrob> stuff like https://paste.ee/p/QNzK5 is pretty typical
[2:08] <evilrob> (osd logs)
[2:09] * squizzi (~squizzi@107.13.237.240) Quit (Ping timeout: 480 seconds)
[2:10] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:10] <jermudgeon> so ceph health is OK, but server load is still high?
[2:10] <jermudgeon> (or was)
[2:10] <evilrob> generally yeah. right now it's moving stuff around after a reweight
[2:10] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[2:10] <jermudgeon> yeah, that can be thrashy
[2:11] <evilrob> the reason we brought someone from another group in to look at it is whenever we had some recovery kick off, or tons of IO somewhere, we'd get VMs pausing in openstack.
[2:12] <jermudgeon> did you do any rados benchmarking before deployment, or after?
[2:12] <evilrob> changed the max recovery to 2 to solve that problem
[2:12] <evilrob> no
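[note] The "max recovery to 2" change evilrob mentions just above usually maps onto the OSD backfill/recovery throttles; a sketch of how those are commonly adjusted (the exact options evilrob changed are not shown in the log):

    ceph tell osd.* injectargs '--osd-max-backfills 2 --osd-recovery-max-active 2'

    # persisted in ceph.conf
    [osd]
    osd max backfills = 2
    osd recovery max active = 2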
[2:12] <jermudgeon> I found it helpful when troubleshooting ... to establish a baseline for cluster performance especially
[2:13] <jermudgeon> I also found a bad 10G DAC by watching ceph health detail every 2s
[2:13] <jermudgeon> and correlating with hardware errors
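[note] The two-second health watch jermudgeon describes is typically just:

    watch -n 2 ceph health detail    # on a monitor/admin node
    dmesg -w                         # on the suspect node, to correlate hardware errors as they appear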
[2:13] <evilrob> something like http://tracker.ceph.com/projects/ceph/wiki/Benchmark_Ceph_Cluster_Performance
[2:13] <evilrob> ?
[2:13] <jermudgeon> lot of delayed writes
[2:13] <jermudgeon> yep, that's what I used, rados bench (about halfway down)
[2:13] <evilrob> ok... I can do that later tonight.
[2:14] <jermudgeon> something like a 60-second write will show you interesting things about performance ... expose journal delays, etc
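[note] The 60-second write test refers to rados bench from the wiki page linked above; a minimal sketch, assuming a throwaway pool named "bench" (hypothetical name):

    rados bench -p bench 60 write --no-cleanup   # 60 s of 4 MB object writes; keep the objects around
    rados bench -p bench 60 seq                  # sequential reads of what was just written
    rados -p bench cleanup                       # remove the benchmark objects afterwards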
[2:14] <evilrob> I'll wait for recovery to finish
[2:14] <jermudgeon> I assume you're using on-disk journals?
[2:14] <evilrob> yes
[2:14] <jermudgeon> k
[2:14] <evilrob> it's a pretty default setup
[2:14] <evilrob> we've got new boxen coming in with some SSDs to use for journals
[2:14] <jermudgeon> that definitely helps. I'm running some SSD-only arrays right now
[2:15] <jermudgeon> and I did a bunch of testing with rados bench to compare the impact of SSD vs platter, vs. platter+SSD journal
[2:16] <evilrob> I'll do some in-VM dd testing too. I was getting close to 1GB/s writes on 20MB files when I first set this up. Actually faster than native disk.
[2:17] <evilrob> (in house app uses 2-32MB files, so I benched with 20)
[2:18] <jermudgeon> what is your min and max replicas?
[2:18] <evilrob> 2 and 3
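[note] "Min and max replicas" here are the pool's min_size and size settings; a sketch of inspecting and setting them, again with a hypothetical pool name:

    ceph osd pool get rbd size        # replica count (3)
    ceph osd pool get rbd min_size    # replicas required before the pool serves I/O (2)
    ceph osd pool set rbd min_size 2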
[2:18] <jermudgeon> 20MB? that seems kind of small for accurate testing
[2:19] <jermudgeon> I'm usually doing 4 to 10 GB
[2:19] <evilrob> yeah... it probably didn't get outside of the cache.
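[note] To keep a dd test from measuring the cache evilrob suspects, the usual approach is direct, synced I/O over a dataset much larger than any cache; a sketch (path and sizes are illustrative):

    dd if=/dev/zero of=/mnt/test/ddfile bs=4M count=2048 oflag=direct conv=fdatasync

oflag=direct bypasses the client-side page cache and conv=fdatasync forces the data out before dd reports a rate; neither can bypass a battery-backed write cache on the SAN itself, which is exactly the caveat discussed next.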
[2:19] <jermudgeon> well, depends on cache mode
[2:19] <jermudgeon> is the SAN battery backed?
[2:19] <evilrob> yes
[2:19] <jermudgeon> I think your'e right about the cache then
[2:19] <jermudgeon> can't type
[2:21] <evilrob> well, pizza is here.
[2:22] <jermudgeon> pizza on
[2:23] * Sue_ (~sue@2601:204:c600:d638:6600:6aff:fe4e:4542) Quit (Ping timeout: 480 seconds)
[2:23] <evilrob> I'll do some benchmarking after things settle. It annoyed me that we had OSDs randomly dying. Though that's at least stopped happening. (bumped pid_max to something insane and found 2 dead drives)
[2:24] <jermudgeon> like the osd process was quitting, or the drives themselves were bad?
[2:24] * salwasser (~Adium@2601:197:101:5cc1:cae0:ebff:fe18:8237) has joined #ceph
[2:40] * cathode (~cathode@50-232-215-114-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[2:46] * salwasser (~Adium@2601:197:101:5cc1:cae0:ebff:fe18:8237) Quit (Quit: Leaving.)
[2:47] * wak-work (~wak-work@2620:15c:2c5:3:2497:7a21:8815:f0e7) Quit (Remote host closed the connection)
[2:47] * wak-work (~wak-work@2620:15c:2c5:3:2497:7a21:8815:f0e7) has joined #ceph
[3:09] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[3:14] <evilrob> OSDs were quitting. 2 were consistent. found bad drives.
[3:16] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[3:17] <jermudgeon> evilrob: ceph seems to handle that fairly well in general, from what I've heard
[3:18] <evilrob> bad drives? yeah. the IO errors would make the OSD die. that would cause PGs to backfill elsewhere.
[3:18] <evilrob> the random quitting of OSD processes though kind of killed us
[3:19] <jermudgeon> yeah, sounds like maybe you're hitting some process/kernel limits?
[3:19] <jermudgeon> I have kernel.pid_max = 4194303
[3:19] <jermudgeon> ^ something insane
[3:19] <evilrob> probably. couldn't find much of a performance tuning guide or "making your ceph scale" guide. didn't read whitepapers and such but was hoping for a checklist
[3:20] <jermudgeon> yea!
[3:20] <evilrob> yeah, I think mine is 1M or something
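[note] The pid_max bump both of them describe is a plain sysctl; a sketch (the sysctl.d file name is arbitrary):

    sysctl kernel.pid_max                                            # show the current limit
    sysctl -w kernel.pid_max=4194303                                 # raise it immediately
    echo 'kernel.pid_max = 4194303' > /etc/sysctl.d/99-pidmax.conf   # persist across reboots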
[3:20] <jermudgeon> I found a web ui for doing performance tuning, but it's meant to be deployed on bare metal, when you're just getting going
[3:21] <evilrob> I work for cisco. we found a guy internal who worked for one of the cloud providers we bought.
[3:21] <evilrob> so he's going to give it the once-over.
[3:21] <jermudgeon> it seems like enough people are using it that it ought to work for most of the advertised use cases
[3:21] <jermudgeon> talked to a guy with a quarter peta, 100 osds
[3:21] <jermudgeon> he was wanting to migrate to tiering + EC pool
[3:22] <jermudgeon> cool, but for backups ... not good for VMs (what you and I do)
[3:22] <evilrob> I'm 1057T raw
[3:22] <evilrob> about to double that
[3:22] <jermudgeon> awesome
[3:23] <evilrob> it's nice working for a company that makes hardware. we pay 10% of the msrp :)
[3:23] <jermudgeon> stop it right now :)
[3:25] <evilrob> it's not cheap stuff though. those 16 2U boxen with 12 8TB drives in them (2 for OS, + 10 OSDs) are going to cost us almost 150K :)
[3:26] <evilrob> getting better
[3:26] <evilrob> HEALTH_WARN 20 pgs backfill; 88 pgs backfilling; 108 pgs stuck unclean; recovery 38/407674124 objects degraded (0.000%); recovery 25094144/407674124 objects misplaced (6.155%)
[3:26] <jermudgeon> so the stuck unclean are entirely because of backfill and min 2?
[3:27] <evilrob> I bet we had an OSD die on us before I started this all. I didn't look (/me stupid)
[3:27] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) Quit (Remote host closed the connection)
[3:27] <jermudgeon> what are you using for your pane of glass/monitoring?
[3:27] <evilrob> min 2 is a good setting right?
[3:27] <evilrob> pane of glass? what's that? :)
[3:27] <jermudgeon> your master view of health
[3:27] <jermudgeon> whatever system you're using
[3:28] <jermudgeon> "single pane of glass"
[3:28] <jermudgeon> it's a bit of a cliche right now
[3:28] <evilrob> (I know... we're just a bit slow)
[3:28] <jermudgeon> I've been exploring various ceph monitoring plugins for miscellaneous NMS
[3:28] <evilrob> ceph_prometheus is our goal. we're just overloaded at the moment
[3:28] <jermudgeon> as I'm kind of... between NMS for IT purposes
[3:29] <evilrob> oh cool... there is swift benchmarking too....
[3:30] <evilrob> nice. one of our devs is worried about throughput on rgw
[3:30] <jermudgeon> I saw that, but didn't try it yet
[3:30] <evilrob> ok... 20:30 here.... I'm off to find some alcoholic beverage and a cigar.
[3:30] <jermudgeon> after initial dd work, I saw that at least I could get more granular data out of rados bench... started spreadsheeting it and highlighting color by range
[3:30] <jermudgeon> evilrob: you do that, well met
[3:30] <evilrob> thanks for the pointers.
[3:30] <evilrob> well met indeed
[3:31] <jermudgeon> nah, i didn't do nothing, just bounced ideas
[3:32] * TMM (~hp@dhcp-077-248-009-229.chello.nl) has joined #ceph
[3:38] <jermudgeon> does anyone know whether xfs or ext4 is likely to perform better on a rbd?
[3:38] * baotiao (~baotiao@43.255.178.184) has joined #ceph
[3:42] * jermudgeon (~jhaustin@31.207.56.59) Quit (Quit: jermudgeon)
[3:48] * davidz (~davidz@2605:e000:1313:8003:142:1aeb:1be8:2a10) Quit (Quit: Leaving.)
[3:58] * baotiao (~baotiao@43.255.178.184) Quit (Quit: baotiao)
[4:12] * baotiao (~baotiao@43.255.178.184) has joined #ceph
[4:26] * kefu (~kefu@114.92.125.128) has joined #ceph
[4:35] * squizzi (~squizzi@107.13.237.240) Quit (Ping timeout: 480 seconds)
[4:40] * vicente (~vicente@111-241-36-95.dynamic.hinet.net) has joined #ceph
[4:42] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[4:51] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Ping timeout: 480 seconds)
[5:01] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[5:04] * squizzi (~squizzi@107.13.237.240) Quit (Ping timeout: 480 seconds)
[5:09] * Jeffrey4l (~Jeffrey@110.252.103.185) has joined #ceph
[5:13] * vicente (~vicente@111-241-36-95.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[5:13] * baotiao (~baotiao@43.255.178.184) Quit (Quit: baotiao)
[5:17] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:19] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) has joined #ceph
[5:24] * rotbeard (~redbeard@2a02:908:df13:bb00:64f8:e460:2624:f79f) has joined #ceph
[5:32] * rotbeard (~redbeard@2a02:908:df13:bb00:64f8:e460:2624:f79f) Quit (Ping timeout: 480 seconds)
[5:42] * yanzheng (~zhyan@125.70.23.147) has joined #ceph
[5:43] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) has joined #ceph
[5:53] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[5:59] * Vacuum_ (~Vacuum@88.130.203.33) has joined #ceph
[6:01] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:03] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) has joined #ceph
[6:05] * Vacuum__ (~Vacuum@88.130.209.236) Quit (Ping timeout: 480 seconds)
[6:08] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) has left #ceph
[6:11] * walcubi (~walcubi@p5795B96B.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:11] * walcubi (~walcubi@p5795B0C5.dip0.t-ipconnect.de) has joined #ceph
[6:16] * vicente (~vicente@111-241-36-95.dynamic.hinet.net) has joined #ceph
[6:17] * BrianA1 (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) has joined #ceph
[6:21] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[6:24] * vicente (~vicente@111-241-36-95.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[6:39] * puvo (~Esvandiar@108.61.122.152) has joined #ceph
[6:40] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[6:46] * kefu (~kefu@114.92.125.128) has joined #ceph
[6:54] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[6:58] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[7:01] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) has joined #ceph
[7:03] * BrianA1 (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:09] * puvo (~Esvandiar@108.61.122.152) Quit ()
[7:35] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[7:36] * kefu (~kefu@114.92.125.128) has joined #ceph
[7:54] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) Quit (Ping timeout: 480 seconds)
[7:55] * John_ (~John@27.11.112.248) has joined #ceph
[7:55] <John_> ?
[7:55] <John_> ?
[7:55] <John_> ?
[7:55] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) has joined #ceph
[7:56] <Green> SUSE
[8:02] * Goodi (~Hannu@dsl-ktkbrasgw1-50dd4d-148.dhcp.inet.fi) has joined #ceph
[8:11] * spgriffinjr (~spgriffin@66.46.246.206) Quit (Read error: Connection reset by peer)
[8:13] * efirs (~firs@73.93.155.174) has joined #ceph
[8:20] * efirs (~firs@73.93.155.174) Quit (Remote host closed the connection)
[8:22] * baotiao (~baotiao@43.255.178.184) has joined #ceph
[8:28] * efirs (~firs@73.93.155.174) has joined #ceph
[8:29] * efirs (~firs@73.93.155.174) Quit ()
[8:36] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) Quit (Quit: Leaving)
[8:36] * efirs (~firs@73.93.155.174) has joined #ceph
[8:37] * mykola (~Mikolaj@91.245.72.48) has joined #ceph
[8:41] * martikka (~Hannu@dsl-ktkbrasgw1-50dd4d-148.dhcp.inet.fi) has joined #ceph
[8:43] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[8:43] * Goodi (~Hannu@dsl-ktkbrasgw1-50dd4d-148.dhcp.inet.fi) Quit (Ping timeout: 480 seconds)
[8:44] * efirs (~firs@73.93.155.174) Quit (Ping timeout: 480 seconds)
[8:47] * John (~John@27.11.112.248) has joined #ceph
[8:50] * baotiao (~baotiao@43.255.178.184) Quit (Quit: baotiao)
[8:51] * baotiao (~baotiao@43.255.178.184) has joined #ceph
[8:53] * John_ (~John@27.11.112.248) Quit (Ping timeout: 480 seconds)
[9:02] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:10] * derjohn_mobi (~aj@2001:4c50:37f:2400:ecff:e0bf:fcc2:5eb4) has joined #ceph
[9:11] * fridim (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:13] * raso (~raso@ns.deb-multimedia.org) Quit (Read error: Connection reset by peer)
[9:14] * raso (~raso@ns.deb-multimedia.org) has joined #ceph
[9:16] * yanzheng1 (~zhyan@125.70.23.147) has joined #ceph
[9:17] * yanzheng (~zhyan@125.70.23.147) Quit (Read error: Connection reset by peer)
[9:20] * martikka (~Hannu@dsl-ktkbrasgw1-50dd4d-148.dhcp.inet.fi) Quit (Ping timeout: 480 seconds)
[9:26] * martikka (~Hannu@dsl-ktkbrasgw1-50dd4d-148.dhcp.inet.fi) has joined #ceph
[9:35] * martikka (~Hannu@dsl-ktkbrasgw1-50dd4d-148.dhcp.inet.fi) Quit (Quit: Leaving)
[9:49] * nardial (~ls@p54894AC2.dip0.t-ipconnect.de) has joined #ceph
[9:50] * imcsk8 (~ichavero@189.231.6.100) has joined #ceph
[9:51] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[9:55] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[9:56] * kefu (~kefu@114.92.125.128) has joined #ceph
[10:02] * raphaelsc (~raphaelsc@177.157.175.32) Quit (Ping timeout: 480 seconds)
[10:12] * raphaelsc (~raphaelsc@177.206.69.165.dynamic.adsl.gvt.net.br) has joined #ceph
[10:22] * fridim (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[10:28] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:4418:f044:c01f:37d4) has joined #ceph
[10:33] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[10:37] * derjohn_mobi (~aj@2001:4c50:37f:2400:ecff:e0bf:fcc2:5eb4) Quit (Ping timeout: 480 seconds)
[10:41] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Quit: Ex-Chat)
[10:49] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[11:24] * fridim (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[11:39] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:4418:f044:c01f:37d4) Quit (Ping timeout: 480 seconds)
[11:48] * peetaur (~peter@p200300E10BCB030020164AFFFEF30905.dip0.t-ipconnect.de) has joined #ceph
[12:04] * baotiao (~baotiao@43.255.178.184) Quit (Quit: baotiao)
[12:15] * nardial (~ls@p54894AC2.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[12:33] * [0x4A6F]_ (~ident@p508CD539.dip0.t-ipconnect.de) has joined #ceph
[12:36] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:36] * [0x4A6F]_ is now known as [0x4A6F]
[12:46] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[12:47] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[12:47] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[12:48] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[12:53] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:06] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[13:07] * yanzheng1 (~zhyan@125.70.23.147) Quit (Quit: This computer has gone to sleep)
[13:29] * ivve (~zed@c83-254-7-92.bredband.comhem.se) has joined #ceph
[13:50] * JamesHarrison (~GuntherDW@tor-2.armbrust.me) has joined #ceph
[13:55] * tobiash (~quassel@212.118.206.70) has joined #ceph
[14:02] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) has joined #ceph
[14:14] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[14:15] * rwheeler (~rwheeler@pool-108-7-196-31.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[14:18] <peetaur> how do you create a bluestore osd? I keep getting errors like here https://bpaste.net/show/47a667e2c980
[14:20] * JamesHarrison (~GuntherDW@tor-2.armbrust.me) Quit ()
[14:21] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:47] * peetaur (~peter@p200300E10BCB030020164AFFFEF30905.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[14:47] * kefu (~kefu@114.92.125.128) has joined #ceph
[14:47] * peetaur (~peter@p200300E10BCB0300187F5DFFFE23DA99.dip0.t-ipconnect.de) has joined #ceph
[14:57] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[15:07] * salwasser (~Adium@2601:197:101:5cc1:bcf3:71ea:1583:cedd) has joined #ceph
[15:14] * salwasser (~Adium@2601:197:101:5cc1:bcf3:71ea:1583:cedd) Quit (Quit: Leaving.)
[15:19] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[15:21] * Concubidated (~cube@c-73-12-218-131.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[15:26] <limebyte> well seems like i found a Solution
[15:26] <limebyte> a meshed VPN
[15:26] <limebyte> which uses failover and shortest routes
[15:26] <limebyte> a project did that already before
[15:27] * Racpatel (~Racpatel@2601:87:3:31e3::4d2a) Quit (Ping timeout: 480 seconds)
[15:32] * mason (~phyphor@108.61.123.70) has joined #ceph
[15:50] * ivve (~zed@c83-254-7-92.bredband.comhem.se) Quit (Ping timeout: 480 seconds)
[15:52] * ivve (~zed@c83-254-7-92.bredband.comhem.se) has joined #ceph
[15:56] <darkfader> limebyte: so like what i said yesterday? before explaining that's far too complex if you just have 4 nodes
[15:57] <limebyte> well found some howto's
[15:57] <limebyte> lets see
[15:57] <limebyte> but seems to be easy
[15:58] <limebyte> "Tinc Mesh VPN"
[16:01] <limebyte> well I bought another 2TB yesterday for 20 bucks with Setup fee, not gonna waste it
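[note] For context, tinc builds its mesh from a per-network directory under /etc/tinc/<netname>/; a heavily simplified sketch of one node's configuration (netname, node names and addresses are all made up):

    # /etc/tinc/ceph0/tinc.conf
    Name = node1
    Mode = switch
    ConnectTo = node2
    ConnectTo = node3

    # /etc/tinc/ceph0/hosts/node1  (public key appended by: tincd -n ceph0 -K)
    Address = 203.0.113.11

    # /etc/tinc/ceph0/tinc-up  (assigns the VPN address on the tunnel interface)
    #!/bin/sh
    ip addr add 10.10.0.1/24 dev $INTERFACE
    ip link set $INTERFACE up

Each node keeps a hosts/ file for every peer it should know about; tinc then meshes the nodes and routes around failed links on its own, which is the failover behaviour limebyte mentions.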
[16:02] * mason (~phyphor@108.61.123.70) Quit ()
[16:06] <peetaur> I figured out my issue (how to make a bluestore osd)... I just needed "enable experimental unrecoverable data corrupting features = bluestore rocksdb" and only had bluestore there before
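[note] For anyone hitting the same errors, peetaur's fix corresponds to a ceph.conf setting; at that point bluestore was still gated behind the experimental-features option, e.g.:

    [global]
    enable experimental unrecoverable data corrupting features = bluestore rocksdb

    # one common Jewel-era way to then create the OSD (may differ from peetaur's exact workflow):
    ceph-disk prepare --bluestore /dev/sdb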
[16:12] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[16:18] * slaweq (~oftc-webi@vpn-out.ovh.net) has joined #ceph
[16:18] * peetaur (~peter@p200300E10BCB0300187F5DFFFE23DA99.dip0.t-ipconnect.de) Quit (Quit: Konversation terminated!)
[16:18] * ivve (~zed@c83-254-7-92.bredband.comhem.se) Quit (Ping timeout: 480 seconds)
[16:22] * squizzi (~squizzi@107.13.237.240) Quit (Ping timeout: 480 seconds)
[16:22] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) has joined #ceph
[16:22] <slaweq> hello, we have problem with unfound objects blocking cluster - can anyone help us solve this problem?
[16:25] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[16:26] * tkuzemko (~oftc-webi@vpn-out.ovh.net) has joined #ceph
[16:44] * fridim (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[16:48] * John_ (~John@27.11.112.248) has joined #ceph
[16:55] * John (~John@27.11.112.248) Quit (Ping timeout: 480 seconds)
[16:59] * kysse (kysse@empty.zpm.fi) has joined #ceph
[17:01] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) Quit (Ping timeout: 480 seconds)
[17:13] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[17:21] * tkuzemko (~oftc-webi@vpn-out.ovh.net) Quit (Quit: Page closed)
[17:22] * vicente (~vicente@111-241-36-95.dynamic.hinet.net) has joined #ceph
[17:24] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[17:32] * Racpatel (~Racpatel@c-73-194-155-223.hsd1.nj.comcast.net) has joined #ceph
[17:35] * lmb (~Lars@2a02:8109:8100:1d2c:2ad2:44ff:fedf:3318) Quit (Ping timeout: 480 seconds)
[17:37] * Grimmer (~Joppe4899@46.166.138.130) has joined #ceph
[17:46] * lmb (~Lars@ip5b404bab.dynamic.kabel-deutschland.de) has joined #ceph
[17:47] * sudocat (~dibarra@2602:306:8bc7:4c50:f479:1bad:a78f:3bb9) has left #ceph
[17:48] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[17:48] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[17:51] * squizzi (~squizzi@107.13.237.240) Quit (Ping timeout: 480 seconds)
[18:06] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[18:06] * Grimmer (~Joppe4899@46.166.138.130) Quit ()
[18:11] * bauruine (~bauruine@mail.tuxli.ch) Quit (Ping timeout: 480 seconds)
[18:14] * rikai (~Tralin|Sl@62.102.148.67) has joined #ceph
[18:18] * linuxkidd (~linuxkidd@ip70-189-214-97.lv.lv.cox.net) Quit (Quit: Leaving)
[18:21] * minnesotags (~herbgarci@c-50-137-242-97.hsd1.mn.comcast.net) has joined #ceph
[18:24] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[18:35] * slaweq (~oftc-webi@vpn-out.ovh.net) Quit (Remote host closed the connection)
[18:41] * kristen (~kristen@134.134.139.78) has joined #ceph
[18:44] * rikai (~Tralin|Sl@62.102.148.67) Quit ()
[18:47] * `Jin (~SquallSee@37.203.209.18) has joined #ceph
[18:48] * vicente (~vicente@111-241-36-95.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[18:51] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[18:59] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[19:07] * Tenk (~richardus@185.65.134.80) has joined #ceph
[19:17] * `Jin (~SquallSee@37.203.209.18) Quit ()
[19:18] * Concubidated (~cube@c-73-12-218-131.hsd1.ca.comcast.net) has joined #ceph
[19:22] * Throlkim (~Neon@exit0.radia.tor-relays.net) has joined #ceph
[19:27] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[19:37] * Tenk (~richardus@185.65.134.80) Quit ()
[19:41] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:51] * Throlkim (~Neon@exit0.radia.tor-relays.net) Quit ()
[20:02] * nathani1 (~nathani@frog.winvive.com) has joined #ceph
[20:03] * malevolent (~quassel@192.146.172.118) Quit (Quit: No Ping reply in 180 seconds.)
[20:04] * malevolent (~quassel@192.146.172.118) has joined #ceph
[20:06] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) has joined #ceph
[20:07] * nathani (~nathani@2607:f2f8:ac88::) Quit (Ping timeout: 480 seconds)
[20:10] * fridim (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[20:10] * TMM (~hp@dhcp-077-248-009-229.chello.nl) Quit (Quit: Ex-Chat)
[20:19] * Revo84 (~Keiya@exit0.radia.tor-relays.net) has joined #ceph
[20:42] * salwasser (~Adium@2601:193:8201:8f30:793c:ec09:50a7:5674) has joined #ceph
[20:48] * Revo84 (~Keiya@exit0.radia.tor-relays.net) Quit ()
[20:52] * salwasser (~Adium@2601:193:8201:8f30:793c:ec09:50a7:5674) Quit (Quit: Leaving.)
[20:54] <limebyte> https://github.com/debops/ansible-tinc premium
[20:56] * derjohn_mobi (~aj@2001:4c50:37f:2400:e508:31e7:ebd2:5898) has joined #ceph
[20:58] * rhonabwy (~Tralin|Sl@46.166.138.130) has joined #ceph
[21:01] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:4418:f044:c01f:37d4) has joined #ceph
[21:07] * click1 (~Guest1390@exit1.radia.tor-relays.net) has joined #ceph
[21:07] <darkfader> limebyte: +1 for anything made by the debops guy
[21:09] <minnesotags> I'm getting "unable to locate package ceph-deploy", despite following the instructions here: http://docs.ceph.com/docs/firefly/start/quick-start-preflight/#ceph-deploy-setup
[21:09] <minnesotags> And yes, I need to use firefly.
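[note] "Unable to locate package ceph-deploy" usually just means the Ceph apt repo has not been added (or does not carry packages for the distro codename in use); the preflight page linked above boils down to roughly the following on a Debian/Ubuntu admin node (repo host as used by the docs of that era):

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    echo deb https://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install ceph-deploy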
[21:11] * fridim (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[21:19] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:4418:f044:c01f:37d4) Quit (Ping timeout: 480 seconds)
[21:28] * rhonabwy (~Tralin|Sl@46.166.138.130) Quit ()
[21:36] * click1 (~Guest1390@exit1.radia.tor-relays.net) Quit ()
[21:41] * sardonyx (~hgjhgjh@exit0.radia.tor-relays.net) has joined #ceph
[21:52] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Quit: ...)
[22:01] * KindOne (kindone@h229.169.16.98.dynamic.ip.windstream.net) has joined #ceph
[22:03] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (Ping timeout: 480 seconds)
[22:09] * effractu1 is now known as effractur
[22:11] * sardonyx (~hgjhgjh@exit0.radia.tor-relays.net) Quit ()
[22:13] * mgolub (~Mikolaj@91.245.79.65) has joined #ceph
[22:17] * click1 (~Skyrider@static.82.149.243.136.clients.your-server.de) has joined #ceph
[22:18] * mykola (~Mikolaj@91.245.72.48) Quit (Ping timeout: 480 seconds)
[22:34] * Jeffrey4l_ (~Jeffrey@120.10.39.92) has joined #ceph
[22:38] * Jeffrey4l (~Jeffrey@110.252.103.185) Quit (Ping timeout: 480 seconds)
[22:44] * squizzi (~squizzi@172.56.26.19) has joined #ceph
[22:47] * click1 (~Skyrider@static.82.149.243.136.clients.your-server.de) Quit ()
[22:55] * fridim (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[23:01] * mgolub (~Mikolaj@91.245.79.65) Quit (Quit: away)
[23:06] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[23:12] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[23:28] * danielsj (~cyphase@185.3.135.2) has joined #ceph
[23:39] * squizzi (~squizzi@172.56.26.19) Quit (Ping timeout: 480 seconds)
[23:50] * squizzi (~squizzi@172.56.26.66) has joined #ceph
[23:58] * danielsj (~cyphase@185.3.135.2) Quit ()

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.