#ceph IRC Log

IRC Log for 2013-04-27

Timestamps are in GMT/BST.

[0:00] * tnt (~tnt@109.130.96.140) Quit (Ping timeout: 480 seconds)
[0:00] <sagewk> would touch /var/lib/ceph/tmp/suppress-udev-activate do the trick?
[0:00] <sagewk> (as an interface?)
[0:00] <paravoid> https://gerrit.wikimedia.org/r/#/c/60997/4/modules/ceph/files/ceph-add-disk
[0:01] <paravoid> (I said it was nasty)
[0:01] <paravoid> mv /usr/sbin/ceph-disk-activate /usr/sbin/ceph-disk-activate.off
[0:01] <paravoid> ceph-disk-prepare ${disk}
[0:01] <paravoid> sleep 2
[0:01] <paravoid> mv /usr/sbin/ceph-disk-activate.off /usr/sbin/ceph-disk-activate
[0:01] <paravoid> heh
[0:01] * TiCPU|Home (jerome@p4.i.ticpu.net) has joined #ceph
[0:02] <sagewk> right. yeah, ideally this would be per-exec of ceph-disk-prepare and not a host-wide switch.
[0:02] <paravoid> nod
[0:03] <paravoid> I didn't attempt to provide patches because I thought I was doing something wrong
[0:03] <paravoid> considering ceph-disk-prepare's selling feature is to pre-prepare spare disks :)
[0:03] <paravoid> but I might give it a stab now
[0:04] <paravoid> the problem is that udev is asynchronous, so the event might come even after prepare is terminated
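
A minimal sketch of the suppress-flag interface sagewk floats above, assuming the udev-triggered activate hook learns to check for the flag (the flag path is his suggestion; the settle step and the hook check are assumptions):

    flag=/var/lib/ceph/tmp/suppress-udev-activate
    touch "$flag"               # tell the udev activate hook to stand down
    ceph-disk-prepare "$disk"   # partitioning fires udev events asynchronously
    udevadm settle              # wait for the queued udev events to drain
    rm -f "$flag"

    # and in the udev-triggered activation path, something like:
    #   [ -e /var/lib/ceph/tmp/suppress-udev-activate ] && exit 0
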
[0:04] <pioto> so, if i understand things right, a possible part of ceph that may not scale super well w/o adding lots of hardware is... the number of pools. is that right?
[0:05] <pioto> since each pool has its own set of placement groups, and each of those requires cpu on each OSD?
[0:05] <sagewk> yes, yes, no -- they use up a bit of memory and there is communication overhead associated with pgs
[0:05] <pioto> hm. well, something told me "cpu"
[0:05] <pioto> http://ceph.com/docs/master/rados/operations/pools/
[0:06] <pioto> well "computationally expensive"
[0:06] <sagewk> i would start to get worried when you hit 1000+ pools
[0:06] <sagewk> right
[0:06] <nhm> pioto: also, with recent versions of ceph the distribution of PGs for each pool is different, so if you expect to be using lots of pools concurrently you may be able to get away with fewer PGs per pool.
[0:06] <pioto> well. 1000+ pools, with how many OSD hosts?
[0:06] <sagewk> sorry, i read that as 'each pg needs a cpu' instead of 'some cpu' :)
[0:07] <pioto> yes, sorry. "some cpu time"
[0:07] <sagewk> for most ppl the problem with 1000+ pools is the # of pgs, not the # of pools.
[0:07] <pioto> well. i could see my needs having many pools, but many of them not being particularly large
[0:07] <sagewk> lots of small pools keeps the total pg count low, but then each (small) pool is only using a few devices, so you need to be careful about when it grows etc.
[0:07] <pioto> the concern being mostly one of throughput?
[0:08] <pioto> since fewer PGs means fewer spindles to get data from?
[0:08] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:08] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit ()
[0:08] <pioto> but for a particularly large pool (e.g. something containing a lot of rbd images), i guess you'd want a large pg_num?
[0:08] <pioto> also, is the pg_num tunable later on?
[0:09] <pioto> if, say, you added more OSDs over time
[0:09] <sagewk> you can increase it
[0:09] <nhm> pioto: PGs just provide a mapping for groupings of OSDs. The more PGs you have, the better the distribution, but at the cost of more memory and monitor CPU usage.
[0:09] <sagewk> but can't decrease it yet
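
Concretely, bumping pg_num on an existing pool looks like this (pool name and counts are illustrative; pgp_num has to follow pg_num before data actually rebalances):

    ceph osd pool create mypool 256        # start with a modest pg_num
    ceph osd pool set mypool pg_num 512    # split PGs as the pool grows
    ceph osd pool set mypool pgp_num 512   # then move placements onto the new PGs
    # there is no equivalent command to lower pg_num again
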
[0:10] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:10] <nhm> pioto: With fewer PGs you may not spread data over your OSDs quite as evenly.
[0:11] <nhm> pioto: But if you are writing data to lots of pools concurrently, that can help because the PG mappings in one pool won't be the same as the PG mappings in another pool (with recent versions of Ceph)
[0:12] <pioto> ah, i see
[0:12] <pioto> before 0.00 would map to the same osd as 1.00 or whatever?
[0:12] <pioto> (i think that's how pg nums look? with the first part being the pool number?)
[0:13] <nhm> there was a bug previously where multiple pools on the same cluster would have very similar PG distributions.
[0:14] <nhm> So 16384 PGs in 1 pool provided a better distribution than 16384 PGs in 8 pools.
[0:15] * Havre (~Havre@2a01:e35:8a2c:b230:2cd5:a92f:87c0:a2d1) has joined #ceph
[0:17] <pioto> so, is there any rule of thumb for the scaling limits of things like: total number of pools/objects/placement groups/...?
[0:17] <pioto> or is that mainly limited by how much hardware you can throw at it?
[0:20] <pioto> for example, with ZFS, i've learned that once you get to about 4000 snapshots, performance suffers a lot
[0:20] <nhm> pioto: With 1 mon and 24 OSDs, the mon starts getting pretty bogged down for me with around 100,000 PGs. With more hardware, more mons, and some tweaking it should be possible to do more.
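
For a sense of scale, the rule of thumb from the Ceph docs of this era is roughly total PGs ≈ (OSDs × 100) / replicas, rounded up to a power of two (a guideline, not a hard limit):

    osds=24; replicas=3
    echo $(( osds * 100 / replicas ))   # 800 -> round up to 1024, shared across all pools
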
[0:20] <pioto> in addition to things like "don't run at near full", which i know is a possible issue for ceph too and other newer storage systems (because of the fact that more garbage collection is forced at once)
[0:21] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[0:21] <pioto> unlike ZFS, though, i hope that ceph can scale more easily... i hope that any of these problems can be solved by just throwing another osd or mon into the mix. is that the case?
[0:22] <pioto> or are there some possible "gotchas" i should be aware of?
[0:23] <pioto> (i saw a post about placement group sizing on ceph-devel, and got a bit scared...)
[0:23] <nhm> pioto: Ceph was designed with scalability in mind, but for any kind of really huge deployment there are always going to be gotchas.
[0:23] <nhm> pioto: that's why we sell support and consulting services. :)
[0:25] <pioto> yes, of course.
[0:26] <pioto> but i'm not sure i'm quite in the "really huge" department
[0:26] <pioto> at least not initially
[0:28] <pioto> but, ok. "start small, and pump up pg_num as needed" sounds like it'll work
[0:28] <pioto> so thanks
[0:29] <nhm> pioto: I think pg splitting is still experimental, so keep that in mind.
[0:29] <pioto> hm. yes, i see. that's what'd happen when you increase it
[0:29] <pioto> and pg "merging" just isn't there at all i guess?
[0:29] <sagewk> paravoid: please let us know if bobtail-deploy works for you guys
[0:29] <paravoid> I'm currently on a tight schedule so I won't be able to test it soon I'm afraid
[0:30] <sagewk> no worries
[0:30] <paravoid> good news is that we're finally putting ceph into initial steps of production on monday
[0:31] <nhm> paravoid: cool!
[0:32] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[0:32] <pioto> speaking of schedules, i figure you guys are on a tight schedule too, but... if anyone has a chance to help fill in some of the blanks on http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/Client_Security_for_CephFS, and maybe point me in the right direction, code-wise, i'd appreciate it. i think gregaf gave me some initial pointers about where such a hook could go, but i haven't gotten that far in digging into how the messages passe
[0:32] <sagewk> mikedawson: can you attach ceph-mon.b.log to #4815?
[0:33] <mikedawson> sagewk: will do
[0:33] <sagewk> tnx
[0:34] * iggy_ is now known as iggy
[0:38] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[0:41] <mikedawson> sagewk: i pushed a larger log covering that time period to cephdrop "issue4815-ceph-mon.b.log"
[0:41] <sagewk> thanks
[0:43] * BillK (~BillK@58-7-127-45.dyn.iinet.net.au) has joined #ceph
[0:44] * sjustlaptop (~sam@2607:f298:a:697:d426:5432:b06c:6147) Quit (Ping timeout: 480 seconds)
[0:46] <paravoid> btw, here's an idea: an apport-like tool to submit bugs
[0:46] * dpippenger (~riven@206-169-78-213.static.twtelecom.net) Quit (Quit: Leaving.)
[0:46] <paravoid> that collects pg map, mon map, osd tree etc.
[0:46] * gregaf1 (~Adium@2607:f298:a:607:b536:9b5f:7474:a0d1) Quit (Quit: Leaving.)
[0:46] <paravoid> plus selected logs, possibly by ssh'ing
[0:49] <sagewk> ceph-debugpack
[0:49] * kfox1111 (~kfox@96-41-208-2.dhcp.elbg.wa.charter.com) has joined #ceph
[0:49] <kfox1111> Question. Is there a reason ceph does not provide a pkg-config file?
[0:50] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:51] <paravoid> holy crap
[0:51] <paravoid> this actually exists?!
[0:51] <paravoid> oh my
[0:52] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[0:55] <kfox1111> It's very, very useful.
[0:59] <sagewk> not many people use it, so it may be rough around the edges, but there's an easy way to fix that :)
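
Basic usage of the tool sagewk names: it takes a tarball path as its argument and bundles up logs and cluster state for sharing with developers (exact contents vary by version):

    ceph-debugpack /tmp/ceph-debug.tar.gz
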
[1:03] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:05] <dmick> sagewk: I think kfox1111 was talking about pkg-config
[1:05] <sagewk> oh :)
[1:05] <dmick> and I think the answer is "just haven't gotten to it"
[1:05] <kfox1111> sagewk: If all you want's a file, have a look here: https://github.com/EMSL-MSC/service-poke/blob/master/src/libservice-poke/service-poke.pc.in. s/service-poke/rados/ and you're good to go. :)
[1:05] <sagewk> and/or "what is pkg-config"
[1:06] <kfox1111> pkg-config is the system a lot of libraries use to tell the developer what compile and link flags are needed to use their library.
[1:07] <kfox1111> http://en.wikipedia.org/wiki/Pkg-config
[1:07] * noob2 is now known as noob22
[1:08] <dmick> $ pkg-config --version libpcre
[1:08] <dmick> 0.26
[1:08] <dmick> $ pkg-config --libs libpcre
[1:08] <dmick> -lpcre
[1:09] * mech422 (~steve@ip68-98-107-102.ph.ph.cox.net) has joined #ceph
[1:09] <kfox1111> The short of it is, in autoconf, I could then just do "PKG_CHECK_MODULES([RADOS],[librados])" and know everything I need to build against it, whether it's installed, and what dependencies it has.
[1:09] <dmick> etc. The more complicated the output, the more useful.
[1:10] <kfox1111> yeah. See pkg-config --cflags --libs gtk+-2.0 :)
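
Following kfox1111's template, a rados.pc could look roughly like this (paths and version are placeholders; written as a heredoc for concreteness, and the quoted 'EOF' keeps the ${prefix} variables literal as pkg-config expects):

    cat > /usr/lib/pkgconfig/rados.pc <<'EOF'
    prefix=/usr
    exec_prefix=${prefix}
    libdir=${exec_prefix}/lib
    includedir=${prefix}/include

    Name: rados
    Description: RADOS distributed object store client library
    Version: 0.60.0
    Libs: -L${libdir} -lrados
    Cflags: -I${includedir}
    EOF

    pkg-config --cflags --libs rados   # what PKG_CHECK_MODULES runs under the hood
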
[1:11] <mech422> Hi all - I'm going to be setting up a new ceph cluster, and was wondering how to setup replication? I'll have 5 machines running a mon and 2 OSD's each. I'll be using this for RBD storage of VM images. I'd like to setup replication so I can withstand a 2 node failure without losing data. Unfortunately, I'm not sure how to setup the crush map to accomplish this ?
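
For reference, one setup that tolerates two node failures here is 3-way replication with one replica per host (a sketch only; bobtail-era syntax, default pool name assumed):

    ceph osd pool set rbd size 3   # three copies; any two of the five hosts can fail
    # and the crush rule (inspect with crushtool -d) should separate replicas by host:
    #   step chooseleaf firstn 0 type host
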
[1:11] <kfox1111> It does good stuff like collapsing common dependencies so the compiler goes faster too.
[1:11] * mikedawson_ (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[1:13] * cjh_ (~cjh@ps123903.dreamhost.com) has joined #ceph
[1:13] <cjh_> would it be possible to graft nfs onto ceph like how gluster has their nfs -> glusterfs client?
[1:14] <sagewk> mikedawson: any luck with that package?
[1:14] <cjh_> i think they wrote a new nfs server and handle the calls with gluster calls
[1:16] <sagewk> cjh_: you probably want to look at ganesha and the ceph FSAL module for it
[1:17] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[1:17] * mikedawson_ is now known as mikedawson
[1:19] <sagewk> mikedawson_: any luck with that package?
[1:20] * loicd (~loic@2a01:e35:2eba:db10:f85d:deb2:da97:ec06) Quit (Quit: Leaving.)
[1:21] * noahmehl_ (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) has joined #ceph
[1:22] <sagewk> mikedawson_: actually, what would be helpful is generating a fresh set of logs with the latest next package. having a hard time debugging things on an old version with all these changes. also, that mon.a log is strange in that the thread just hangs...
[1:22] <sagewk> if you can reproduce and then attach with gdb to see what the threads are doing that would be helpful
[1:24] <mikedawson> sagewk: haven't been able to get those precise packages to install yet
[1:24] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:24] * noahmehl_ is now known as noahmehl
[1:24] <sagewk> well the good news is the raring gitbuilder is up and running and has a package for you :)
[1:24] <mikedawson> nice!
[1:25] * eschnou (~eschnou@95.88-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[1:25] <sagewk> oh, hmm, it didn't rsync.
[1:25] <sagewk> hold on a sec
[1:29] <sagewk> building now
[1:32] <cjh_> sagewk: thanks i'll check that out
[1:32] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) Quit (Quit: noahmehl)
[1:32] <cjh_> i'm also thinking of setting up the radosgw and giving the key to multiple users so they all see the same namespace
[1:36] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:39] * rustam_ (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[1:40] * rustam (~rustam@94.15.91.30) has joined #ceph
[1:41] <cjh_> sagewk: have you tried ganesha out?
[1:41] <sagewk> not personally
[1:42] <sagewk> the linuxbox guys are probably the ones working on the ceph fsal for it
[1:42] <sagewk> er, *are* the ones, and probably who you should talk to about it :)
[1:43] <cjh_> yeah i'll check it out. it looks interesting
[1:43] <cjh_> we're mostly an nfs shop so i'm looking for solutions
[1:46] <cjh_> from a high lvl perspective do you guys think users would experience higher performance just going to the radosgw instead of the native kernel client? with the kernel client you have to layer a file system on top of it again which creates more layers. With the radosgw i'd have to imagine the call stack is smaller to transform calls into rados and forward on
[1:48] * slang (~slang@72.28.162.16) Quit (Read error: Connection reset by peer)
[1:54] * xmltok (~xmltok@pool101.bizrate.com) Quit (Ping timeout: 480 seconds)
[1:57] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[2:01] * john_barbee1 (~nobody@c-98-226-73-253.hsd1.in.comcast.net) has joined #ceph
[2:01] * john_barbee1 (~nobody@c-98-226-73-253.hsd1.in.comcast.net) Quit ()
[2:04] * dpippenger (~riven@206-169-78-213.static.twtelecom.net) has joined #ceph
[2:13] * eschnou (~eschnou@95.88-201-80.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[2:13] * dwt (~dwt@128-107-239-233.cisco.com) Quit (Read error: Connection reset by peer)
[2:15] <mikedawson> sagewk: new logs are cephdrop'ed. mikedawson/ceph-mon.*.log with ceph version 0.60-669-ga2a23cc.
[2:25] <sagewk> k thanks
[2:27] <SpamapS> sagewk: hey, I was in the office for the openstack meetup last night. Really cool stuff on the way (cross-region replication for radosgw .. wicked)
[2:30] * sagelap1 (~sage@2600:1012:b011:fcb4:7896:aa93:7771:c315) has joined #ceph
[2:36] <sagelap1> mikedawson: still there?
[2:36] * sagelap1 is now known as sagelap
[2:36] <mikedawson> yessir
[2:36] <sagelap> can you reproduce it again, but with 'debug ms = 20' on mon.a and mon.b?
[2:37] <sagelap> it looks like a message is disappearing into the ether
[2:37] <mikedawson> sagelap: sure. greg had me turn ms down. too noisy
[2:37] <sagelap> heh
[2:48] <mikedawson> sagelap: new logs are cephdrop'ed under mikedawson2
[2:49] <sagelap> tnx
[2:49] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[2:49] <mikedawson> sagelap: I also noticed just now that on mon.a's init script I had ceph-create-keys commented out from the fallout of the caps and return value bugs. It is back on for this run (which also has debug ms = 20)
[2:50] <sagelap> no ceph-mon.a ?
[2:50] <sagelap> k
[2:51] <mikedawson> whoops, its there now
[2:53] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[2:55] * dpippenger (~riven@206-169-78-213.static.twtelecom.net) Quit (Quit: Leaving.)
[2:57] <sagelap> can you attach to the mon.a process with gdb and do 'thread apply all bt'?
[2:57] <sagelap> mikedawson: ^
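
What sagelap asks for, as a one-liner (assuming a single ceph-mon process on the host):

    gdb -p "$(pidof ceph-mon)" -batch -ex 'thread apply all bt' > mon-threads.txt
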
[2:59] <mikedawson> sagelap: uploaded one before this round of logs to the tracker for 4815.. http://tracker.ceph.com/attachments/download/801/ceph-mon-backtrace.txt
[3:00] <sagelap> is it using cpu or disk?
[3:00] <sagelap> looks to be stuck in a write in leveldb
[3:00] <sagelap> (sorry i missed that before)
[3:01] <mikedawson> 6017 root 20 0 9842m 129m 10m S 0 0.3 0:00.94 ceph-mon
[3:01] <mikedawson> 6054 root 20 0 36772 7188 2356 S 0 0.0 0:00.02 ceph-create-key
[3:03] <mikedawson> sagelap: not sure how big leveldb should get, but...
[3:03] <mikedawson> 19G /var/lib/ceph/mon/ceph-a/store.db
[3:03] <sagelap> oof
[3:04] <mikedawson> if you want to look, I ceph dropped them earlier today
[3:05] <sagelap> is it also big on the other mons?
[3:06] <sagelap> yeah
[3:06] <mikedawson> 36G /var/lib/ceph/mon/ceph-b/store.db
[3:07] <mikedawson> 36G /var/lib/ceph/mon/ceph-c/store.db
[3:07] <sagelap> mon.a disk isn't near full or anything right?
[3:08] <mikedawson> 9% used
[3:09] <sagelap> well.. this looks like a leveldb problem. we could try adjusting the tunables and see if that magically works around it..
[3:09] <sagelap> mon_leveldb_write_buffer_size = 33554432
[3:09] <sagelap> mon_leveldb_cache_size = 0
[3:09] <sagelap> mon_leveldb_block_size = 4194304
[3:09] <sagelap> mon_leveldb_bloom_size = 0
[3:09] <sagelap> mon_leveldb_max_open_files = 0
[3:09] <sagelap> mon_leveldb_compression = false
[3:09] <sagelap> are the defaults
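
To experiment with those, the tunables go in the [mon] section of ceph.conf; the overridden values below are purely illustrative:

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [mon]
        mon leveldb write buffer size = 67108864
        mon leveldb cache size = 536870912
    EOF
    # restart the monitor so it picks up the new values
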
[3:10] <sagelap> i'm done for the evening, though. will be able to look more tomorrow.
[3:10] <mikedawson> thanks sagelap!
[3:10] <sagelap> thanks for helping track this down!
[3:11] <sagelap> the stores are way big.. that's not normal. we're hoping it's a side-effect of the sync thrashing. first need to make it finish sync tho :)
[3:12] <mikedawson> sagelap: is there a chance to sync, then truncate to normal size, or is this setup hosed?
[3:13] <mikedawson> also if I needed to get it functional, could I dump mon.a and re-add it or something like that?
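
The remove-and-re-add dance mikedawson alludes to goes roughly like this (a sketch only; ids, paths, and the address are placeholders, and a quorum of the remaining mons must stay up throughout):

    service ceph stop mon.a
    ceph mon remove a
    mv /var/lib/ceph/mon/ceph-a /var/lib/ceph/mon/ceph-a.old   # keep the bloated store for analysis
    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    ceph-mon -i a --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    ceph mon add a 192.168.0.1:6789
    service ceph start mon.a
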
[3:18] * loicd (~loic@magenta.dachary.org) has joined #ceph
[3:20] * sagelap (~sage@2600:1012:b011:fcb4:7896:aa93:7771:c315) Quit (Ping timeout: 480 seconds)
[3:29] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[3:39] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:55] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[4:03] * treaki__ (a953a77983@p4FDF772F.dip0.t-ipconnect.de) has joined #ceph
[4:07] * treaki_ (89125f261d@p4FDF7D2A.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:07] * mech422 (~steve@ip68-98-107-102.ph.ph.cox.net) has left #ceph
[4:12] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[4:19] * tkensiski (~tkensiski@2600:1010:b017:9cbc:e069:1eb4:3582:5ba0) has joined #ceph
[4:19] * tkensiski (~tkensiski@2600:1010:b017:9cbc:e069:1eb4:3582:5ba0) Quit (Write error: connection closed)
[4:43] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[4:43] * kfox1111 (~kfox@96-41-208-2.dhcp.elbg.wa.charter.com) Quit (Ping timeout: 480 seconds)
[4:45] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[5:01] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[5:01] <paravoid> so, upgrading radosgw to 0.60+ and keeping 0.56.4's librados seems to segfault
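
A quick way to spot the mismatch paravoid hits (Debian/Ubuntu package names):

    dpkg -l radosgw librados2 | awk '/^ii/ {print $2, $3}'   # the two versions should match
    ldd /usr/bin/radosgw | grep librados                     # confirm which librados is loaded
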
[5:09] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) has joined #ceph
[5:11] * BillK (~BillK@58-7-127-45.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[5:20] * BillK (~BillK@124-148-115-241.dyn.iinet.net.au) has joined #ceph
[5:28] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[5:33] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) has joined #ceph
[5:36] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[5:47] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[5:48] * chutz (~chutz@rygel.linuxfreak.ca) Quit (Quit: Leaving)
[5:48] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[5:52] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has left #ceph
[6:02] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) has joined #ceph
[6:04] * BillK (~BillK@124-148-115-241.dyn.iinet.net.au) Quit (Read error: Operation timed out)
[6:11] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) has joined #ceph
[6:12] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) has left #ceph
[6:14] * BillK (~BillK@124-148-223-188.dyn.iinet.net.au) has joined #ceph
[6:20] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Remote host closed the connection)
[6:23] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[6:43] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[6:44] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:48] * Cube1 (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[6:53] * Cube1 (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[6:54] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:55] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:58] * nigwil_ is now known as nigwil
[7:02] * jtangwk1 (~Adium@2001:770:10:500:a459:af2a:a87b:3264) Quit (Ping timeout: 480 seconds)
[7:04] * jtangwk (~Adium@2001:770:10:500:a459:af2a:a87b:3264) has joined #ceph
[7:06] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[7:21] * dmick (~dmick@2607:f298:a:607:b872:b2ac:376e:1053) Quit (Quit: Leaving.)
[7:43] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[7:54] * treaki__ (a953a77983@p4FDF772F.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:10] * Kioob1 (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[8:10] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (Write error: connection closed)
[8:11] * Kioob1 (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit ()
[8:11] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[8:19] * Kioob (~kioob@luuna.daevel.fr) Quit (Ping timeout: 480 seconds)
[8:48] * tnt (~tnt@109.130.96.140) has joined #ceph
[8:58] * coyo (~unf@pool-71-164-242-68.dllstx.fios.verizon.net) has joined #ceph
[9:03] * tkensiski (~tkensiski@c-98-234-160-131.hsd1.ca.comcast.net) has joined #ceph
[9:04] * tkensiski (~tkensiski@c-98-234-160-131.hsd1.ca.comcast.net) has left #ceph
[9:06] * mrjack (mrjack@office.smart-weblications.net) Quit (Ping timeout: 480 seconds)
[9:17] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) Quit (Quit: noahmehl)
[9:39] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[9:50] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) has joined #ceph
[10:09] * loicd (~loic@magenta.dachary.org) has joined #ceph
[10:11] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) Quit (Quit: noahmehl)
[10:13] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (Remote host closed the connection)
[10:53] * vo1d (~v0@212-183-100-27.adsl.highway.telekom.at) has joined #ceph
[10:58] * v0id (~v0@193-83-49-6.adsl.highway.telekom.at) Quit (Ping timeout: 480 seconds)
[10:58] * joshd1 (~jdurgin@2602:306:c5db:310:881a:87fc:52ea:35ce) Quit (Ping timeout: 480 seconds)
[11:00] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[11:00] * loicd (~loic@magenta.dachary.org) has joined #ceph
[11:01] * joshd1 (~jdurgin@2602:306:c5db:310:881a:87fc:52ea:35ce) has joined #ceph
[12:05] * jtangwk (~Adium@2001:770:10:500:a459:af2a:a87b:3264) Quit (Ping timeout: 480 seconds)
[12:19] * brambles_ (lechuck@s0.barwen.ch) has joined #ceph
[12:19] * tnt (~tnt@109.130.96.140) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * DarkAceZ (~BillyMays@50.107.54.92) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * jackhill_ (~jackhill@71.20.247.147) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * dosaboy_ (~dosaboy@host86-161-164-218.range86-161.btcentralplus.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * b1tbkt_ (~Peekaboo@68-184-193-142.dhcp.stls.mo.charter.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * Tribaal (uid3081@hillingdon.irccloud.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * dgbaley27 (~matt@mrct45-133-dhcp.resnet.colorado.edu) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * doubleg (~doubleg@69.167.130.11) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * thelan_ (~thelan@paris.servme.fr) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * paravoid (~paravoid@scrooge.tty.gr) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * tserong (~tserong@124-171-116-238.dyn.iinet.net.au) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * portante (~user@66.187.233.206) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * NaioN (stefan@andor.naion.nl) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * athrift (~nz_monkey@222.47.255.123.static.snap.net.nz) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * Meths (rift@2.25.193.124) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * stefunel (~stefunel@static.38.162.46.78.clients.your-server.de) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * morse (~morse@supercomputing.univpm.it) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * prudhvi (~prudhvi@tau.supr.io) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * brambles (lechuck@s0.barwen.ch) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * Gugge-47527 (gugge@kriminel.dk) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * raso (~raso@deb-multimedia.org) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * scheuk (~scheuk@204.246.67.78) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * joshd1 (~jdurgin@2602:306:c5db:310:881a:87fc:52ea:35ce) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * loicd (~loic@magenta.dachary.org) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * vo1d (~v0@212-183-100-27.adsl.highway.telekom.at) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * chutz (~chutz@rygel.linuxfreak.ca) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * Havre (~Havre@2a01:e35:8a2c:b230:2cd5:a92f:87c0:a2d1) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * TiCPU|Home (jerome@p4.i.ticpu.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * noob22 (~cjh@2620:0:1cfe:28:9cf8:21a5:b78d:b5ed) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * sagewk (~sage@2607:f298:a:607:b0f4:5462:45db:ca49) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * masterpe (~masterpe@2001:990:0:1674::1:82) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * stacker666 (~stacker66@33.pool85-58-181.dynamic.orange.es) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * yehudasa (~yehudasa@2607:f298:a:607:e918:deb4:5e7:63ec) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * joelio (~Joel@88.198.107.214) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * niklas (~niklas@2001:7c0:409:8001::32:115) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * mrjack_ (mrjack@office.smart-weblications.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * sage (~sage@76.89.177.113) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * Dieter_be (~Dieterbe@dieter2.plaetinck.be) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * MooingLemur (~troy@phx-pnap.pinchaser.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * Elbandi_ (~ea333@elbandi.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * cjh_ (~cjh@ps123903.dreamhost.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * DLange (~DLange@dlange.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * Volture (~Volture@office.meganet.ru) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * `10` (~10@juke.fm) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * illuminatis (~illuminat@0001adba.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:19] * maswan (maswan@kennedy.acc.umu.se) Quit (reticulum.oftc.net resistance.oftc.net)
[12:20] * joshd1 (~jdurgin@2602:306:c5db:310:881a:87fc:52ea:35ce) has joined #ceph
[12:20] * loicd (~loic@magenta.dachary.org) has joined #ceph
[12:20] * vo1d (~v0@212-183-100-27.adsl.highway.telekom.at) has joined #ceph
[12:20] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[12:20] * tnt (~tnt@109.130.96.140) has joined #ceph
[12:20] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[12:20] * cjh_ (~cjh@ps123903.dreamhost.com) has joined #ceph
[12:20] * Havre (~Havre@2a01:e35:8a2c:b230:2cd5:a92f:87c0:a2d1) has joined #ceph
[12:20] * TiCPU|Home (jerome@p4.i.ticpu.net) has joined #ceph
[12:20] * noob22 (~cjh@2620:0:1cfe:28:9cf8:21a5:b78d:b5ed) has joined #ceph
[12:20] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[12:20] * sagewk (~sage@2607:f298:a:607:b0f4:5462:45db:ca49) has joined #ceph
[12:20] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[12:20] * jackhill_ (~jackhill@71.20.247.147) has joined #ceph
[12:20] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[12:20] * masterpe (~masterpe@2001:990:0:1674::1:82) has joined #ceph
[12:20] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[12:20] * dosaboy_ (~dosaboy@host86-161-164-218.range86-161.btcentralplus.com) has joined #ceph
[12:20] * b1tbkt_ (~Peekaboo@68-184-193-142.dhcp.stls.mo.charter.com) has joined #ceph
[12:20] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[12:20] * Tribaal (uid3081@hillingdon.irccloud.com) has joined #ceph
[12:20] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[12:20] * dgbaley27 (~matt@mrct45-133-dhcp.resnet.colorado.edu) has joined #ceph
[12:20] * doubleg (~doubleg@69.167.130.11) has joined #ceph
[12:20] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[12:20] * stacker666 (~stacker66@33.pool85-58-181.dynamic.orange.es) has joined #ceph
[12:20] * thelan_ (~thelan@paris.servme.fr) has joined #ceph
[12:20] * paravoid (~paravoid@scrooge.tty.gr) has joined #ceph
[12:20] * tserong (~tserong@124-171-116-238.dyn.iinet.net.au) has joined #ceph
[12:20] * Volture (~Volture@office.meganet.ru) has joined #ceph
[12:20] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[12:20] * yehudasa (~yehudasa@2607:f298:a:607:e918:deb4:5e7:63ec) has joined #ceph
[12:20] * joelio (~Joel@88.198.107.214) has joined #ceph
[12:20] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[12:20] * niklas (~niklas@2001:7c0:409:8001::32:115) has joined #ceph
[12:20] * mrjack_ (mrjack@office.smart-weblications.net) has joined #ceph
[12:20] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[12:20] * portante (~user@66.187.233.206) has joined #ceph
[12:20] * NaioN (stefan@andor.naion.nl) has joined #ceph
[12:20] * sage (~sage@76.89.177.113) has joined #ceph
[12:20] * athrift (~nz_monkey@222.47.255.123.static.snap.net.nz) has joined #ceph
[12:20] * Meths (rift@2.25.193.124) has joined #ceph
[12:20] * stefunel (~stefunel@static.38.162.46.78.clients.your-server.de) has joined #ceph
[12:20] * Dieter_be (~Dieterbe@dieter2.plaetinck.be) has joined #ceph
[12:20] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[12:20] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[12:20] * prudhvi (~prudhvi@tau.supr.io) has joined #ceph
[12:20] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[12:20] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[12:20] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[12:20] * raso (~raso@deb-multimedia.org) has joined #ceph
[12:20] * scheuk (~scheuk@204.246.67.78) has joined #ceph
[12:20] * MooingLemur (~troy@phx-pnap.pinchaser.com) has joined #ceph
[12:20] * illuminatis (~illuminat@0001adba.user.oftc.net) has joined #ceph
[12:20] * Elbandi_ (~ea333@elbandi.net) has joined #ceph
[12:20] * `10` (~10@juke.fm) has joined #ceph
[12:20] * maswan (maswan@kennedy.acc.umu.se) has joined #ceph
[12:21] * DarkAceZ (~BillyMays@50.107.54.92) has joined #ceph
[12:36] * rustam (~rustam@94.15.91.30) has joined #ceph
[12:45] * rustam (~rustam@94.15.91.30) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * scheuk (~scheuk@204.246.67.78) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * Gugge-47527 (gugge@kriminel.dk) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * prudhvi (~prudhvi@tau.supr.io) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * NaioN (stefan@andor.naion.nl) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * tserong (~tserong@124-171-116-238.dyn.iinet.net.au) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * thelan_ (~thelan@paris.servme.fr) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * doubleg (~doubleg@69.167.130.11) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * dgbaley27 (~matt@mrct45-133-dhcp.resnet.colorado.edu) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * Tribaal (uid3081@hillingdon.irccloud.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * b1tbkt_ (~Peekaboo@68-184-193-142.dhcp.stls.mo.charter.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * dosaboy_ (~dosaboy@host86-161-164-218.range86-161.btcentralplus.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * jackhill_ (~jackhill@71.20.247.147) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * tnt (~tnt@109.130.96.140) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * morse (~morse@supercomputing.univpm.it) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * stefunel (~stefunel@static.38.162.46.78.clients.your-server.de) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * portante (~user@66.187.233.206) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * Meths (rift@2.25.193.124) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * athrift (~nz_monkey@222.47.255.123.static.snap.net.nz) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * raso (~raso@deb-multimedia.org) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * paravoid (~paravoid@scrooge.tty.gr) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * MooingLemur (~troy@phx-pnap.pinchaser.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * niklas (~niklas@2001:7c0:409:8001::32:115) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * joelio (~Joel@88.198.107.214) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * yehudasa (~yehudasa@2607:f298:a:607:e918:deb4:5e7:63ec) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * stacker666 (~stacker66@33.pool85-58-181.dynamic.orange.es) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * sagewk (~sage@2607:f298:a:607:b0f4:5462:45db:ca49) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * noob22 (~cjh@2620:0:1cfe:28:9cf8:21a5:b78d:b5ed) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * TiCPU|Home (jerome@p4.i.ticpu.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * chutz (~chutz@rygel.linuxfreak.ca) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * vo1d (~v0@212-183-100-27.adsl.highway.telekom.at) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * joshd1 (~jdurgin@2602:306:c5db:310:881a:87fc:52ea:35ce) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * sage (~sage@76.89.177.113) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * masterpe (~masterpe@2001:990:0:1674::1:82) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * Dieter_be (~Dieterbe@dieter2.plaetinck.be) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * Elbandi_ (~ea333@elbandi.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * Havre (~Havre@2a01:e35:8a2c:b230:2cd5:a92f:87c0:a2d1) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * loicd (~loic@magenta.dachary.org) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * mrjack_ (mrjack@office.smart-weblications.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * `10` (~10@juke.fm) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * illuminatis (~illuminat@0001adba.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * Volture (~Volture@office.meganet.ru) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * cjh_ (~cjh@ps123903.dreamhost.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * maswan (maswan@kennedy.acc.umu.se) Quit (reticulum.oftc.net resistance.oftc.net)
[12:45] * DLange (~DLange@dlange.user.oftc.net) Quit (reticulum.oftc.net resistance.oftc.net)
[12:46] * rustam (~rustam@94.15.91.30) has joined #ceph
[12:46] * joshd1 (~jdurgin@2602:306:c5db:310:881a:87fc:52ea:35ce) has joined #ceph
[12:46] * loicd (~loic@magenta.dachary.org) has joined #ceph
[12:46] * vo1d (~v0@212-183-100-27.adsl.highway.telekom.at) has joined #ceph
[12:46] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[12:46] * tnt (~tnt@109.130.96.140) has joined #ceph
[12:46] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[12:46] * cjh_ (~cjh@ps123903.dreamhost.com) has joined #ceph
[12:46] * Havre (~Havre@2a01:e35:8a2c:b230:2cd5:a92f:87c0:a2d1) has joined #ceph
[12:46] * TiCPU|Home (jerome@p4.i.ticpu.net) has joined #ceph
[12:46] * noob22 (~cjh@2620:0:1cfe:28:9cf8:21a5:b78d:b5ed) has joined #ceph
[12:46] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[12:46] * sagewk (~sage@2607:f298:a:607:b0f4:5462:45db:ca49) has joined #ceph
[12:46] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[12:46] * jackhill_ (~jackhill@71.20.247.147) has joined #ceph
[12:46] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[12:46] * masterpe (~masterpe@2001:990:0:1674::1:82) has joined #ceph
[12:46] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[12:46] * dosaboy_ (~dosaboy@host86-161-164-218.range86-161.btcentralplus.com) has joined #ceph
[12:46] * b1tbkt_ (~Peekaboo@68-184-193-142.dhcp.stls.mo.charter.com) has joined #ceph
[12:46] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[12:46] * Tribaal (uid3081@hillingdon.irccloud.com) has joined #ceph
[12:46] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[12:46] * dgbaley27 (~matt@mrct45-133-dhcp.resnet.colorado.edu) has joined #ceph
[12:46] * doubleg (~doubleg@69.167.130.11) has joined #ceph
[12:46] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[12:46] * stacker666 (~stacker66@33.pool85-58-181.dynamic.orange.es) has joined #ceph
[12:46] * thelan_ (~thelan@paris.servme.fr) has joined #ceph
[12:46] * paravoid (~paravoid@scrooge.tty.gr) has joined #ceph
[12:46] * tserong (~tserong@124-171-116-238.dyn.iinet.net.au) has joined #ceph
[12:46] * Volture (~Volture@office.meganet.ru) has joined #ceph
[12:46] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[12:46] * yehudasa (~yehudasa@2607:f298:a:607:e918:deb4:5e7:63ec) has joined #ceph
[12:46] * joelio (~Joel@88.198.107.214) has joined #ceph
[12:46] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[12:46] * niklas (~niklas@2001:7c0:409:8001::32:115) has joined #ceph
[12:46] * mrjack_ (mrjack@office.smart-weblications.net) has joined #ceph
[12:46] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[12:46] * portante (~user@66.187.233.206) has joined #ceph
[12:46] * NaioN (stefan@andor.naion.nl) has joined #ceph
[12:46] * sage (~sage@76.89.177.113) has joined #ceph
[12:46] * athrift (~nz_monkey@222.47.255.123.static.snap.net.nz) has joined #ceph
[12:46] * Meths (rift@2.25.193.124) has joined #ceph
[12:46] * stefunel (~stefunel@static.38.162.46.78.clients.your-server.de) has joined #ceph
[12:46] * Dieter_be (~Dieterbe@dieter2.plaetinck.be) has joined #ceph
[12:46] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[12:46] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[12:46] * prudhvi (~prudhvi@tau.supr.io) has joined #ceph
[12:46] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[12:46] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[12:46] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[12:46] * raso (~raso@deb-multimedia.org) has joined #ceph
[12:46] * scheuk (~scheuk@204.246.67.78) has joined #ceph
[12:46] * MooingLemur (~troy@phx-pnap.pinchaser.com) has joined #ceph
[12:46] * illuminatis (~illuminat@0001adba.user.oftc.net) has joined #ceph
[12:46] * Elbandi_ (~ea333@elbandi.net) has joined #ceph
[12:46] * `10` (~10@juke.fm) has joined #ceph
[12:46] * maswan (maswan@kennedy.acc.umu.se) has joined #ceph
[12:53] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:13] <mikedawson> wido: ping
[13:54] * diegows (~diegows@190.190.2.126) has joined #ceph
[13:54] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:506:58ff:da81:533b) Quit (Quit: Leaving.)
[13:57] * madkiss (~madkiss@2001:6f8:12c3:f00f:7d81:dbdc:1bad:b30f) has joined #ceph
[13:59] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[14:22] * slang (~slang@72.28.162.16) has joined #ceph
[14:54] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[15:19] <jtang> herm, there are ruby bindings for rados?
[15:19] <jtang> librados that is
[15:19] <jtang> or is wikipedia telling me a lie
[15:34] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[15:34] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[15:38] * BillK (~BillK@124-148-223-188.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:46] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[15:47] * BillK (~BillK@58-7-219-76.dyn.iinet.net.au) has joined #ceph
[15:50] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[15:53] * rustam (~rustam@94.15.91.30) has joined #ceph
[16:06] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[16:11] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[16:13] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:17] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[16:17] * rustam (~rustam@94.15.91.30) has joined #ceph
[16:28] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[16:29] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[16:29] * loicd (~loic@magenta.dachary.org) has joined #ceph
[16:32] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:33] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[16:37] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:45] <dgbaley27> I have a server with 12*3TB drives. If I replace a single drive with an SSD and partition it, any thoughts on whether the SSD would work well for the journal for all of the OSDs? Can it be used for the mon also?
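
A sketch of that layout, assuming the SSD is /dev/sda and the data disks are /dev/sdb onward (ceph-disk-prepare of this era carves one journal partition per call when handed a journal block device):

    ceph-disk-prepare /dev/sdb /dev/sda   # data disk, journal device
    ceph-disk-prepare /dev/sdc /dev/sda
    # ...repeat for the remaining data disks; the mon store can live on an
    # SSD filesystem too, at the cost of tying mon and journals to one device
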
[16:52] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[16:53] * slang (~slang@72.28.162.16) Quit (Read error: Connection reset by peer)
[16:56] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[16:59] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) has joined #ceph
[17:00] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[17:00] * loicd (~loic@magenta.dachary.org) has joined #ceph
[17:11] * kyle_ (~kyle@ip03.foxyf.simplybits.net) Quit (Quit: Leaving)
[17:25] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[18:22] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[18:30] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[18:33] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[18:38] * BillK (~BillK@58-7-219-76.dyn.iinet.net.au) Quit (Read error: Operation timed out)
[18:50] * BillK (~BillK@124-148-224-101.dyn.iinet.net.au) has joined #ceph
[18:51] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[18:55] * BillK (~BillK@124-148-224-101.dyn.iinet.net.au) Quit (Read error: Connection reset by peer)
[18:59] * noahmehl_ (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) has joined #ceph
[19:00] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[19:03] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) Quit (Ping timeout: 480 seconds)
[19:03] * noahmehl_ is now known as noahmehl
[19:09] * BillK (~BillK@58-7-104-61.dyn.iinet.net.au) has joined #ceph
[19:14] * nhm (~nhm@65-128-150-185.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[19:23] * nhm (~nhm@65-128-150-185.mpls.qwest.net) has joined #ceph
[19:24] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[19:33] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[19:38] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[19:48] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) Quit (Read error: Connection reset by peer)
[19:49] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[19:49] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) has joined #ceph
[19:49] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:51] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[20:04] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[20:25] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[20:34] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[20:36] * noob2 (~cjh@pool-96-249-205-19.snfcca.dsl-w.verizon.net) has joined #ceph
[20:42] <noob2> anyone around?
[20:42] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[20:52] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[21:02] * diegows (~diegows@190.190.2.126) has joined #ceph
[21:06] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[21:06] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[21:06] * leseb1 (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[21:17] * leseb1 (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[21:50] * pachuco (5b09dfad@ircip2.mibbit.com) has joined #ceph
[21:51] <pachuco> i'm testing the next branch against bobtail. Everything seems to be fine except random read performance using qemu / rbd. It dropped from 3000 iops to 600
[21:53] <noob2> what does your ceph.conf look like on your bobtail setup?
[21:54] <pachuco> the same like on next branch or do you mean in general?
[21:54] <noob2> just in general
[21:56] <pachuco> journal aio true and disabled cephx and disabled logging
[21:56] <pachuco> rest defaults it's really small
[21:56] <noob2> gotcha
[21:56] <noob2> what are you using to benchmark your iops?
[21:58] <pachuco> fio under qemu with rbd drive
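
For reference, a 4k random-read job along those lines, run inside the guest against the rbd-backed disk (every parameter here is an assumption):

    fio --name=randread --filename=/dev/vdb --direct=1 --ioengine=libaio \
        --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based
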
[21:58] <noob2> maybe the rbd drive has caching disabled by default in the newer branch?
[21:58] <pachuco> why "gotcha"?
[21:58] <noob2> oh i meant i understand
[21:59] <pachuco> no, i have caching enabled on both manually
[21:59] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:59] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:59] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[22:00] <noob2> i'm not sure what could have changed to make your performance so bad like that
[22:03] <pachuco> http://pastebin.com/raw.php?i=qT4SbyBj
[22:04] <noob2> wow yeah that's a rather large slowdown
[22:04] <pachuco> the first two are random 4k I/O and three and four are seq 4M
[22:04] <noob2> ok
[22:04] <noob2> not bad you're getting almost 2GB/s
[22:04] <pachuco> but seq 4M writing is faster
[22:04] <noob2> and 8GB/s on the bobtail
[22:06] <noob2> i'm finding that with the defaults on my cluster of 20 machines i can't get over 100MB/s for some reason
[22:06] <noob2> i'm still searching for the bottleneck
[22:09] <pachuco> strange sadly no idea
[22:10] <noob2> the machines i'm on are capable of 500+MB/s and have 10Gb connections so i'm confused
[22:14] <pachuco> mhm maybe a client problem?
[22:14] <noob2> possibly
[22:14] <noob2> it's also possible i'm not testing them correctly
[22:17] <noob2> it looks like when i run rados bench against all the hosts at the same time the aggregate is 850MB/s
[22:17] <noob2> that seems about right
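
The aggregate test noob2 describes, run from each host at once (pool name and concurrency are placeholders; bench writes 4MB objects by default):

    rados -p testpool bench 60 write -t 16   # 60s of writes, 16 ops in flight
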
[22:23] * noob2 (~cjh@pool-96-249-205-19.snfcca.dsl-w.verizon.net) Quit (Quit: Leaving.)
[22:27] * slang (~slang@72.28.162.16) has joined #ceph
[22:30] * pachuco (5b09dfad@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[22:41] * med (~medberry@00012b50.user.oftc.net) has joined #ceph
[23:00] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[23:02] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[23:00] <mikedawson> wido: you mentioned a mon process went away without any logs on issue #4837. I saw something similar a couple days ago. Did you check syslog to see if it was killed off by oom-killer?
[23:23] * ScOut3R (~ScOut3R@dsl51B6B67F.pool.t-online.hu) has joined #ceph
[23:26] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[23:36] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:38] * kfox1111 (~kfox@96-41-208-2.dhcp.elbg.wa.charter.com) has joined #ceph
[23:40] * ScOut3R (~ScOut3R@dsl51B6B67F.pool.t-online.hu) Quit (Ping timeout: 480 seconds)
[23:42] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:50] * LeaChim (~LeaChim@90.197.3.92) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.