#ceph IRC Log

IRC Log for 2012-09-26

Timestamps are in GMT/BST.

[0:01] <SpamapS> any way to tell if a release contains a particular commit?
[0:02] <joshd> git tag --contains <sha1>
[0:02] <joshd> it's in 0.51
[0:02] <SpamapS> Cool, will mention in the ubuntu bug
[0:03] * SpamapS hides his git newbie shame
[0:05] <dmick> lol. I can never remember that one either SpamapS. At least now it's in my head that such a thing exists.
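The `git tag --contains` trick joshd mentions above can be sketched as a runnable example (the repo, commit message, and tag name here are invented for illustration):

```shell
# Build a throwaway repo so the command has something to act on
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git -c user.email=a@example.com -c user.name=a commit -q --allow-empty -m "first"
sha=$(git rev-parse HEAD)
git tag v0.51
# List every tag whose history includes the given commit
git tag --contains "$sha"   # prints: v0.51
```

If a release tag shows up in the output, that release contains the commit.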
[0:07] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[0:25] * KevinPerks (~Adium@12.248.40.138) has joined #ceph
[0:25] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[0:26] * cblack101 (86868b48@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[0:26] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:c8eb:ed74:6675:2a79) Quit (Quit: LarsFronius)
[0:28] <tren> gregaf: you around?
[0:28] <gregaf> yep!
[0:28] <tren> hey, sorry about the delay
[0:28] <gregaf> np; did you see my email?
[0:28] <tren> just about to read it now, coworker let me know you'd replied
[0:30] <tren> okie, just finished reading it
[0:31] <tren> I can add those settings to the mds servers, however restarting the mds will probably fix the clientreplay issue won't it?
[0:32] <gregaf> the client hasn't successfully replayed the ops, so I don't think it should
[0:32] <gregaf> *goes to check out that response path again*
[0:32] <tren> okay, I'll add those lines to my mds servers
[0:33] <tren> brb
[0:33] <tren> though I do have one question before I do this
[0:33] * BManojlovic (~steki@195.13.166.253) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:34] <tren> the servers we're using for OSD's are also used for general processing work. The load is quite bursty and I've been working on adjusting the timeouts to accommodate the load
[0:34] <gregaf> yeah, I don't think it should fix itself…if it does I guess I'll be embarrassed ;)
[0:34] <gregaf> okay
[0:35] <tren> I've managed to keep the osd's from freaking out during busy periods, but I'm not sure what settings to tweak for the mds's so they can handle periods of high load
[0:35] <gregaf> ah
[0:35] <tren> for the most part, the mds/mon boxes are very lightly loaded, except for the tertiary mds/mon which is also a osd box, and is doing general processing
[0:36] <tren> could a high load on a tertiary mds box cause some of these issues? even if it's not an active mds?
[0:38] <gregaf> nope
[0:38] <gregaf> assuming it's not the active one, anyway
[0:38] <tren> didn't think so but I wanted to be sure. and no, it's not the active one :)
[0:39] <tren> I've pushed out the updated config.
[0:39] <tren> so I'll restart the mds tertiary, then secondary, then primary?
[0:39] <tren> leaving the one that is currently in clientreplay to be restarted last?
[0:40] <tren> (sorry for all the questions, just want to do this properly)
[0:40] <gregaf> yep!
[0:41] * sagelap (~sage@4.sub-166-250-35.myvzw.com) Quit (Ping timeout: 480 seconds)
[0:43] <tren> k, done
[0:43] <tren> fern has gone into replay mode
[0:43] <tren> mdsmap e25: 1/1/1 up {0=fern=up:replay}, 2 up:standby
[0:43] <tren> and now it's active
[0:44] <tren> mdsmap e28: 1/1/1 up {0=fern=up:active}, 2 up:standby
[0:45] <gregaf> argh
[0:45] <gregaf> well, you got about as much out of it as we were going to at this point
[0:46] <gregaf> but you should check the state of your data (and all your clients are still up, right?)
[0:46] <pentabular> Ciao, lovely cephalopds!
[0:46] * pentabular (~sean@adsl-70-231-141-17.dsl.snfc21.sbcglobal.net) has left #ceph
[0:46] <tren> gimme a sec. currently, the only data on this is stuff I'm rsyncing. about 58million files
[0:46] <tren> trying to break the file system ;)
[0:47] <tren> are you interested in the mds logs for fern?
[0:47] <tren> it's a few hundred mb large
[0:47] <gregaf> the one you sent snippets from?
[0:48] <tren> yes
[0:48] <gregaf> I don't think so, but if you can hold onto them for a few days I'll check with Sage and see if he has anything we want to check out of them
[0:49] <tren> k, I'll move it aside
[0:49] * KevinPerks (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[0:49] <gregaf> oh, actually, let's try this...
[0:49] <gregaf> sage1: think we're interested in that mds log at all?
[0:54] <tren> just bzipping it now in case ya want it ;)
[0:56] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[1:01] * KevinPerks (~Adium@12.248.40.138) has joined #ceph
[1:02] <tren> gregaf: ?
[1:02] <gregaf> I was hoping he'd notice, but I'll have to talk with him later (he's traveling right now)
[1:03] <tren> no worries. The logs are only about 18M compressed
[1:03] <tren> so much more manageable
[1:03] * sagelap (~sage@4.sub-166-250-35.myvzw.com) has joined #ceph
[1:04] <tren> oh, there he is, maybe :)
[1:10] * Karcaw (~evan@96-41-198-212.dhcp.elbg.wa.charter.com) Quit (Ping timeout: 480 seconds)
[1:11] <tren> gregaf: I'll restart the rsync process. I'll keep the enhanced logging. We'll see if this re-manifests in the wee hours of the monring.
[1:11] <tren> morning*
[1:11] * sagelap (~sage@4.sub-166-250-35.myvzw.com) Quit (Ping timeout: 480 seconds)
[1:14] * sbohrer (~sbohrer@173.227.92.65) Quit (Quit: Leaving)
[1:17] <tren> gregaf: when I restarted the rsync, it looks like the mds's failed over again
[1:18] <tren> I have no idea how a 12 core server with no load can be considered "laggy" :/
[1:19] <gregaf> high logging on this time?
[1:19] <tren> yes
[1:19] <gregaf> could be that the OSDs are slow and it's impacting the MDS, although I didn't think that feedback loop was there
[1:19] <tren> it made a 663MB log file
[1:19] <gregaf> the logs should include enough data
[1:19] <gregaf> zip it up and post it somewhere?
[1:19] <tren> k
[1:19] <tren> gimme a sec
[1:21] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[1:21] * sagelap (~sage@2600:1010:b00a:5eb1:c685:8ff:fe59:d486) has joined #ceph
[1:27] * dec (~dec@ec2-54-251-62-253.ap-southeast-1.compute.amazonaws.com) has joined #ceph
[1:28] <dec> I couldn't find this easily in the docs - does qemu or libvirt's use of rbd require the rbd kernel module or just librbd?
[1:30] <joshd> just librbd
[1:30] <dec> excellent
[1:30] <dmick> dec: the fact that it avoids the kernel is seen as an advantage (stay in userland, no context switching)
[1:30] <dec> dmick: I agree, that's what I was hoping the answer would be :)
[1:30] * jlogan1 (~Thunderbi@72.5.59.176) Quit (Ping timeout: 480 seconds)
[1:31] <dmick> also you get caching.
[1:31] <tren> gregaf: The enhanced logging seems to be causing the mds load to be quite high. over 200%
[1:31] <dec> dmick: caching where? without being a kernel block device you lose use of the kernel buffer cache
[1:32] <tren> gregaf: plus restarting the rsync, it's complaining that files don't exist. even though I can ls them in ceph.
[1:32] <dmick> dec: caching managed by librbd itself
[1:32] <dmick> (i.e. user memory)
[1:33] <dmick> (for the host executing qemu)
[1:33] <joshd> http://ceph.com/docs/master/config-cluster/rbd-config-ref/
[1:34] <tren> gregaf: for now I've stopped the rsync and unmounted the fuse mount points from the 2 servers I was rsyncing from.
[1:35] <dec> Hm, cool. configurable cache size.
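The userspace path dec and dmick are discussing is what libvirt's network-disk type drives; a minimal sketch of the domain XML (monitor address, pool and image names are invented, and cephx auth is omitted for brevity):

```xml
<!-- qemu talks to the cluster via librbd; no rbd kernel module involved -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/myimage'>
    <host name='192.168.0.1' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```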
[1:36] * jluis (~JL@89.181.153.232) has joined #ceph
[1:36] * Tv_ (~tv@38.122.20.226) Quit (Quit: Tv_)
[1:39] <tren> gregaf: https://www.dropbox.com/s/vr7csslcikami5l/mds.fern.log.25sept2012.bz2
[1:39] <tren> gregaf: Link to the mds logs
[1:42] * joao (~JL@89.181.153.232) Quit (Ping timeout: 480 seconds)
[1:55] <gregaf> tren: how large are those directories you're rsyncing?
[1:56] <tren> gregaf: what do you mean?
[1:56] * sagelap (~sage@2600:1010:b00a:5eb1:c685:8ff:fe59:d486) Quit (Ping timeout: 480 seconds)
[1:56] <gregaf> they're just noticeably large when I scroll through the request history on them so I wonder how many entries they have
[1:56] <gregaf> it probably doesn't matter; I'm not scrolling long enough through each one; but I was curious
[1:57] <tren> gregaf: hmm. there's 2 directories on 2 different filers. Each one contains about 3TB of data each. The files are around 68k to 1.5MB (TIFF files)
[1:57] <tren> the directories are hashed
[1:59] <tren> 1 through 500 for the first level. Then about 4000 or 5000 directories under each first level directory
[1:59] <tren> then each one of those directories has another date based directory (year/month/day)
[1:59] <gregaf> n/m, they're small, this log only has ~14k requests total
[1:59] <tren> and then finally, you have the TIFF's for each day
[1:59] <tren> it's about 58,000,000 files and directories total, based on counting inodes from the filer
[2:01] <tren> gregaf: did you already get the logs?
[2:01] * KevinPerks (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[2:01] <gregaf> yeah; I'm looking through it now
[2:01] <tren> gregaf: crap. they're turning the power out! we've been having electrical issues at our office and I'm being shoo'ed out
[2:01] <gregaf> it is actually doing things during the period it got marked as laggy, so it's not totally crazy
[2:01] <tren> gregaf: I'll pop back on irc tomorrow
[2:02] <gregaf> if the takeover mds got stuck on clientreplay I'd love that log too
[2:02] <gregaf> okay, cya then
[2:02] <tren> gregaf: thank you again for your help…sorry I have to run
[2:02] <tren> gregaf: nope, it went active. But I have those logs if ya like
[2:03] <tren> gregaf: I can send them tomorrow morning.
[2:03] <tren> gregaf: g'night! :)
[2:03] * tren (~Adium@184.69.73.122) Quit (Quit: Leaving.)
[2:03] <gregaf> night
[2:06] * mjosu001 (~mosu001@en-439-0331-001.esc.auckland.ac.nz) has joined #ceph
[2:07] * mjosu001 (~mosu001@en-439-0331-001.esc.auckland.ac.nz) Quit ()
[2:13] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[2:14] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[2:14] * jjgalvez (~jjgalvez@12.248.40.138) Quit (Quit: Leaving.)
[2:17] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) Quit (Ping timeout: 480 seconds)
[2:19] * Cube (~Adium@12.248.40.138) Quit (Ping timeout: 480 seconds)
[2:28] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) has joined #ceph
[2:33] * johnl (~johnl@2a02:1348:14c:1720:dc45:d42a:b717:2a3c) Quit (Remote host closed the connection)
[2:33] * johnl (~johnl@2a02:1348:14c:1720:2cc6:1331:fad8:91a8) has joined #ceph
[2:33] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Quit: Leaving.)
[2:35] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:39] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[2:39] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[2:40] * nhm (~nhm@174-20-32-79.mpls.qwest.net) Quit (Read error: No route to host)
[2:48] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[2:51] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[3:00] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:02] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[3:10] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:12] * maelfius (~mdrnstm@66.209.104.107) Quit (Quit: Leaving.)
[3:19] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[3:30] * sagelap (~sage@37.sub-166-250-32.myvzw.com) has joined #ceph
[3:38] * sagelap (~sage@37.sub-166-250-32.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:47] * gregaf1 (~Adium@2607:f298:a:607:610d:c78a:75d2:9ef8) has joined #ceph
[3:48] * Karcaw (~evan@96-41-198-212.dhcp.elbg.wa.charter.com) has joined #ceph
[3:52] * bugfixer (~dchambers@42gis175.gulftel.com) has joined #ceph
[3:53] * gregaf (~Adium@2607:f298:a:607:e920:3b6d:3a02:2ffc) Quit (Ping timeout: 480 seconds)
[3:59] <elder> Damnit I thought I added a pointer from a message to the connection it was associated with. I could really use it.
[4:04] <elder> Oh well, I found it anyway.
[4:14] * chutzpah (~chutz@100.42.98.5) Quit (Quit: Leaving)
[4:15] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[4:18] * Ryan_Lane (~Adium@216.38.130.162) Quit (Quit: Leaving.)
[4:35] * slang (~slang@38.122.20.226) Quit (Quit: slang)
[4:38] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[5:01] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[5:10] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[5:23] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[5:26] * sagelap (~sage@45.sub-166-250-35.myvzw.com) has joined #ceph
[5:28] * gohko (~gohko@natter.interq.or.jp) Quit (Ping timeout: 480 seconds)
[5:29] * sagelap1 (~sage@2600:1010:b00e:585e:c685:8ff:fe59:d486) has joined #ceph
[5:32] * sagelap (~sage@45.sub-166-250-35.myvzw.com) Quit (Read error: Operation timed out)
[5:33] * deepsa_ (~deepsa@115.242.132.44) has joined #ceph
[5:36] * deepsa (~deepsa@122.172.161.4) Quit (Ping timeout: 480 seconds)
[5:36] * deepsa_ is now known as deepsa
[5:37] * sagelap1 (~sage@2600:1010:b00e:585e:c685:8ff:fe59:d486) Quit (Ping timeout: 480 seconds)
[5:51] * deepsa_ (~deepsa@122.167.173.194) has joined #ceph
[5:56] * deepsa (~deepsa@115.242.132.44) Quit (Ping timeout: 480 seconds)
[5:56] * deepsa_ is now known as deepsa
[6:04] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[6:15] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[6:15] * stass (stas@ssh.deglitch.com) has joined #ceph
[6:28] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[6:57] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[6:59] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[7:11] * dmick is now known as dmick_away
[7:33] * glowell (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[7:46] * glowell (~Adium@68.170.71.123) has joined #ceph
[7:54] * The_Bishop (~bishop@2001:470:50b6:0:2515:55f8:ef09:e742) Quit (Ping timeout: 480 seconds)
[8:03] * The_Bishop (~bishop@2001:470:50b6:0:e4bb:71d2:a2b:2a29) has joined #ceph
[8:07] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[8:28] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[8:43] * deepsa (~deepsa@122.167.173.194) Quit (Ping timeout: 480 seconds)
[8:46] * deepsa (~deepsa@101.63.169.15) has joined #ceph
[8:58] * yoshi_ (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[8:58] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Read error: Connection reset by peer)
[9:01] * jeffp (~jplaisanc@net66-219-41-161.static-customer.corenap.com) Quit (Read error: Operation timed out)
[9:16] * loicd (~loic@magenta.dachary.org) has joined #ceph
[9:19] * verwilst (~verwilst@d5152D6B9.static.telenet.be) has joined #ceph
[9:29] * deepsa_ (~deepsa@122.167.171.159) has joined #ceph
[9:30] * deepsa (~deepsa@101.63.169.15) Quit (Ping timeout: 480 seconds)
[9:30] * deepsa_ is now known as deepsa
[9:37] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:57] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has joined #ceph
[9:58] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[9:59] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[10:09] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:10] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[10:10] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:12] * jeffp (~jplaisanc@net66-219-41-161.static-customer.corenap.com) has joined #ceph
[10:26] * profy (~aurelien@gw142.ispfr.net) has joined #ceph
[10:26] <profy> hi
[10:27] <profy> my ceph cluster seems to be corrupted
[10:27] <profy> root@ceph6:~# ceph status
[10:27] <profy> health HEALTH_WARN 3 pgs backfill; 3 pgs recovering; 3 pgs stuck unclean
[10:27] <profy> monmap e1: 3 mons at {a=192.168.58.1:6789/0,b=192.168.58.2:6789/0,c=192.168.58.3:6789/0}, election epoch 20, quorum 0,1,2 a,b,c
[10:27] <profy> osdmap e1034: 6 osds: 6 up, 6 in
[10:27] <profy> pgmap v11745: 768 pgs: 765 active+clean, 3 active+recovering+remapped+backfill; 31861 MB data, 65586 MB used, 529 GB / 605 GB avail
[10:27] <profy> mdsmap e1: 0/0/1 up
[10:28] <profy> it's blocked like this since yesterday
[10:30] <profy> and I have i/o errors inside my rbd block device
[10:31] <profy> what can I do ?
[10:37] * verwilst (~verwilst@d5152D6B9.static.telenet.be) Quit (Ping timeout: 480 seconds)
[10:39] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[10:39] * loicd (~loic@magenta.dachary.org) has joined #ceph
[10:39] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[10:39] <profy> maybe it's because of mismatched version :
[10:40] <profy> root@ceph1:/var/log/ceph# dpkg -l | grep ceph
[10:40] <profy> ii ceph 0.48.1argonaut-1~bpo70+1 amd64 distributed storage and file system
[10:40] <profy> ii ceph-common 0.48.2argonaut-1~bpo70+1 amd64 common uti
[10:44] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[10:53] * verwilst (~verwilst@d5152D6B9.static.telenet.be) has joined #ceph
[10:57] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[11:22] <profy> problem solved after upgrading and reboot each osd
[11:24] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[11:25] <hk135> Hi there, is there any way to tell a librbd (qemu on proxmox in this case) where in the infrastructure you are so it connects to the nearest osd?
[11:26] * pentabular (~sean@adsl-70-231-141-17.dsl.snfc21.sbcglobal.net) has joined #ceph
[11:29] <pentabular> Early Cuyler is an Appalachian mud squid
[11:29] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) Quit (Quit: kill -9 EmilienM)
[11:33] * pentabular (~sean@adsl-70-231-141-17.dsl.snfc21.sbcglobal.net) Quit (Quit: pentabular)
[11:33] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[11:37] * yoshi_ (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:51] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[12:02] * jlogan (~Thunderbi@72.5.59.176) has joined #ceph
[12:13] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) Quit (Quit: Leaving.)
[12:15] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[12:22] * jlogan (~Thunderbi@72.5.59.176) Quit (Quit: jlogan)
[12:28] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[12:28] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) Quit ()
[12:30] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[12:34] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) Quit ()
[12:34] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[12:35] * BManojlovic (~steki@87.110.183.173) has joined #ceph
[13:02] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Quit: Leaving.)
[13:11] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) has joined #ceph
[13:17] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[13:32] * jluis is now known as joao
[13:33] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:40] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[14:02] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:30] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has joined #ceph
[14:35] * jamespage (~jamespage@tobermory.gromper.net) has joined #ceph
[14:46] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[14:54] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) Quit (Quit: kill -9 EmilienM)
[14:55] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has joined #ceph
[14:55] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) Quit ()
[15:06] * krisek_ (~kris@dsdf-4db5018d.pool.mediaWays.net) has joined #ceph
[15:12] <krisek_> hi, I have a question about radosgw and buckets, can anybody help me a bit?
[15:35] * loicd (~loic@AMontsouris-651-1-158-189.w82-123.abo.wanadoo.fr) has joined #ceph
[15:38] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has joined #ceph
[15:45] * nhm (~nhm@67-220-20-222.usiwireless.com) has joined #ceph
[15:49] <krisek_> I try to delete files from buckets, but somehow the data is not freed up, the space is still reported to be used
[15:51] <krisek_> and rados --pool=.rgw.buckets ls still shows the file
[15:52] <krisek_> the issue seems to be discussed here http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/7927 but I don't understand the solution (unlinking/linking didn't work)
[15:58] * cblack101 (c0373727@ircip1.mibbit.com) has joined #ceph
[15:59] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[15:59] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[16:00] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Read error: Connection reset by peer)
[16:01] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[16:05] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Read error: Connection reset by peer)
[16:11] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[16:13] * slang (~slang@38.122.20.226) has joined #ceph
[16:33] * KevinPerks (~Adium@dhcp184-48-55-94.slad.lax.wayport.net) has joined #ceph
[16:36] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[16:39] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[16:40] * nhm (~nhm@67-220-20-222.usiwireless.com) Quit (Remote host closed the connection)
[16:40] * nhm (~nhm@67-220-20-222.usiwireless.com) has joined #ceph
[16:44] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[16:44] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) Quit ()
[16:46] * nhm_ (~nhm@174-20-32-79.mpls.qwest.net) has joined #ceph
[16:49] * nhm (~nhm@67-220-20-222.usiwireless.com) Quit (Ping timeout: 480 seconds)
[16:51] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[16:52] * BManojlovic (~steki@87.110.183.173) Quit (Quit: Ja odoh a vi sta 'ocete...)
[16:57] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[16:59] * nobody (5c398193@ircip2.mibbit.com) has joined #ceph
[17:00] <nobody> hi
[17:00] <nobody> anybody here?
[17:01] * nobody (5c398193@ircip2.mibbit.com) has left #ceph
[17:01] <nhm_> heh
[17:01] * nhm_ is now known as nhm
[17:02] <joao> I kinda found that question a bit ironic
[17:10] * loicd (~loic@AMontsouris-651-1-158-189.w82-123.abo.wanadoo.fr) Quit (Quit: Leaving.)
[17:12] * KevinPerks (~Adium@dhcp184-48-55-94.slad.lax.wayport.net) Quit (Quit: Leaving.)
[17:16] * Fruit (wsl@2001:980:3300:2:216:3eff:fe10:122b) has joined #ceph
[17:17] * KevinPerks (~Adium@dhcp184-48-55-94.slad.lax.wayport.net) has joined #ceph
[17:17] * KevinPerks (~Adium@dhcp184-48-55-94.slad.lax.wayport.net) has left #ceph
[17:24] * verwilst (~verwilst@d5152D6B9.static.telenet.be) Quit (Quit: Ex-Chat)
[17:25] * Tv_ (~tv@38.122.20.226) has joined #ceph
[17:25] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:1c4e:c926:c068:864c) has joined #ceph
[17:35] * glowell (~Adium@68.170.71.123) Quit (Quit: Leaving.)
[17:37] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[17:43] * tren (~Adium@184.69.73.122) has joined #ceph
[17:43] * Solver (~robert@atlas.opentrend.net) Quit (Ping timeout: 480 seconds)
[17:44] * glowell (~Adium@38.122.20.226) has joined #ceph
[17:49] * Solver (~robert@atlas.opentrend.net) has joined #ceph
[17:50] * jlogan1 (~Thunderbi@2600:c00:3010:1:2431:489a:70ae:7aa0) has joined #ceph
[17:55] <krisek_> ok, in the meanwhile I've found the answer on the mailing list, just for the record: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/4014
[18:03] * The_Bishop (~bishop@2001:470:50b6:0:e4bb:71d2:a2b:2a29) Quit (Ping timeout: 480 seconds)
[18:06] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[18:08] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[18:10] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[18:12] * The_Bishop (~bishop@2001:470:50b6:0:2515:55f8:ef09:e742) has joined #ceph
[18:14] * nhm_ (~nhm@67-220-20-222.usiwireless.com) has joined #ceph
[18:16] * nhm (~nhm@174-20-32-79.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[18:19] * cblack101 (c0373727@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[18:33] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[18:34] * cblack101 (c0373727@ircip1.mibbit.com) has joined #ceph
[18:42] * spicewiesel (~spicewies@static.60.149.40.188.clients.your-server.de) has left #ceph
[18:43] * Cube (~Adium@12.248.40.138) has joined #ceph
[18:45] <joshd> hk135: it's not connecting to just one osd, but many of them
[18:47] <joshd> hk135: it is possible to optimize your crushmap for reads if you have fast and slow nodes, by making the fast nodes the primaries
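joshd's suggestion above, making the fast nodes the primaries so reads land on them, can be expressed in crushmap text along these lines (the `fast` and `slow` bucket names are hypothetical, and exact syntax may differ by version):

```
# Place the primary replica on a host under the "fast" bucket,
# and the remaining (pool size - 1) replicas under "slow".
rule fast-primary {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take fast
    step chooseleaf firstn 1 type host
    step emit
    step take slow
    step chooseleaf firstn -1 type host
    step emit
}
```

Since reads are served by the primary OSD, this skews read traffic toward the fast nodes.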
[18:50] * krisek_ (~kris@dsdf-4db5018d.pool.mediaWays.net) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * deepsa (~deepsa@122.167.171.159) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * jamespage (~jamespage@tobermory.gromper.net) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * joao (~JL@89.181.153.232) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * scheuk (~scheuk@67.110.32.249.ptr.us.xo.net) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * mkampe (~markk@2607:f298:a:607:222:19ff:fe31:b5d3) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * mrjack_ (mrjack@office.smart-weblications.net) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * benner (~benner@193.200.124.63) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * darkfaded (~floh@188.40.175.2) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * ogelbukh (~weechat@nat3.4c.ru) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * nwl (~levine@atticus.yoyo.org) Quit (reticulum.oftc.net charon.oftc.net)
[18:50] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Read error: Operation timed out)
[18:51] * krisek_ (~kris@dsdf-4db5018d.pool.mediaWays.net) has joined #ceph
[18:51] * jamespage (~jamespage@tobermory.gromper.net) has joined #ceph
[18:51] * deepsa (~deepsa@122.167.171.159) has joined #ceph
[18:51] * joao (~JL@89.181.153.232) has joined #ceph
[18:51] * scheuk (~scheuk@67.110.32.249.ptr.us.xo.net) has joined #ceph
[18:51] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[18:51] * mkampe (~markk@2607:f298:a:607:222:19ff:fe31:b5d3) has joined #ceph
[18:51] * mrjack_ (mrjack@office.smart-weblications.net) has joined #ceph
[18:51] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[18:51] * benner (~benner@193.200.124.63) has joined #ceph
[18:51] * darkfaded (~floh@188.40.175.2) has joined #ceph
[18:51] * ogelbukh (~weechat@nat3.4c.ru) has joined #ceph
[18:51] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[18:53] * profy (~aurelien@gw142.ispfr.net) Quit (Ping timeout: 480 seconds)
[18:53] * sagelap (~sage@249.sub-70-199-197.myvzw.com) has joined #ceph
[18:54] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[18:56] * The_Bishop (~bishop@2001:470:50b6:0:2515:55f8:ef09:e742) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[19:00] <joao> sjust, around?
[19:00] <gregaf1> he's getting coffee ;)
[19:00] <joao> alright
[19:01] <dmick_away> I can confirm the getting coffee :)
[19:01] * dmick_away is now known as dmick
[19:01] <joao> gregaf1, I will eventually need a couple of minutes of your time also :p
[19:01] <gregaf1> okay, let me know when
[19:01] <joao> whenever you have the time
[19:01] <joao> should be quick
[19:02] <sjust> joao: here
[19:03] <joao> cool
[19:03] <cblack101> General Question: Do we have any baseline rbd measurements out there using iometer or something of the like with small 4k block sizes? The reason I ask is I'm getting about 10x fewer IOPS out of a Openstack VM with an rbd backing an additional volume mounted as /dev/vdc....
[19:04] <nhm_> cblack101: some of our guys have done 4k fio tests with RBD.
[19:05] <nhm_> cblack101: how do the iops you are getting compare with what the disks are capable of?
[19:07] <cblack101> I have 48x7k@48xOSD and I'm getting 43k read out of 4 physical hosts with /dev/rbd0 (not bad), but the same setup with VMs (distributed hosts) is about 4k...
[19:08] <cblack101> Sequential workload of course, random really blows chunks right now
[19:08] <joshd> to be clear, are you passing /dev/rbd0 through to the vm, or using qemu's built-in userspace rbd support?
[19:09] <cblack101> yep, I believe that's how we have it setup, Jim C. helped set it up the other day
[19:09] * nhm_ (~nhm@67-220-20-222.usiwireless.com) Quit (Read error: Connection reset by peer)
[19:10] <cblack101> I create a volume in Opensack, then attach it to the VM
[19:10] <cblack101> and it shosw up in the nova pool
[19:11] <cblack101> joshd and dmick, I may need to spend those last 2 consulting hours on this after I collect some more data
[19:12] <Tv_> lol Opensack
[19:12] <joshd> do you have rbd caching enabled?
[19:13] <cblack101> rofl, didn't catch that SP error...
[19:13] <cblack101> joshd Jim made some setting changes yesterday and had no luck, got a pointer to the how-to?
[19:13] <joshd> http://ceph.com/docs/master/config-cluster/rbd-config-ref/
[19:14] <cblack101> cool, ty, will look at that and get back to you
[19:14] <joshd> it should help a bunch
[19:15] * nhm (~nhm@174-20-32-79.mpls.qwest.net) has joined #ceph
[19:17] <dmick> cblack101: 43k IOPS@4K, x4 hosts?
[19:18] * jjgalvez (~jjgalvez@12.248.40.138) has joined #ceph
[19:19] <yehudasa> gregaf1: you'll be happy to hear that wip-admin-rest is ready for review
[19:19] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[19:20] * chutzpah (~chutz@100.42.98.5) has joined #ceph
[19:21] <cblack101> joshd on the rbd cache, looks like I only need to add the rbd cache true to the client ceph.conf files to enable, is this cache using a standard LRU type algorithm?
[19:23] <joshd> yeah, it's lru and writeback by default
[19:24] <joshd> so it can coalesce contiguous small writes into larger ones
[19:25] <cblack101> cool, at least then I know slightly what to expect during random tests. Do I need to unmap/remap the rbd for the config change to occur?
[19:26] <joshd> it doesn't affect kernel rbd - that can use the page cache
[19:26] <joshd> it's just for librbd (the userspace part) that qemu uses
[19:27] <joshd> I'll make that clear in the docs
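The settings joshd describes live in the `[client]` section of ceph.conf and affect only librbd (the qemu path), not kernel rbd. A hedged sketch with the option names and default-ish values from the linked reference of that era (treat the numbers as assumptions and check the docs for your version):

```ini
[client]
; enable the librbd writeback cache
rbd cache = true
rbd cache size = 33554432          ; total cache, 32 MB
rbd cache max dirty = 25165824     ; writeback above this is forced, 24 MB
rbd cache target dirty = 16777216  ; begin flushing at this point, 16 MB
```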
[19:27] <cblack101> One more question, are the rbd images "thin" by default? I see the data & used growing as I perform writes to the image
[19:27] <sjust> yeah, they are
[19:28] <cblack101> ok, so for performance testing, I think I'm going to want to fill those up completely so the overhead of growing the volume doesn't artificially impede the test
[19:28] <cblack101> sound right?
[19:29] <joshd> yeah
[19:29] <cblack101> afk for a bit
[19:35] * maelfius (~mdrnstm@66.209.104.107) has joined #ceph
[19:39] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[19:39] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Read error: Connection reset by peer)
[19:39] * tryggvil_ is now known as tryggvil
[19:39] * jjgalvez (~jjgalvez@12.248.40.138) Quit (Quit: Leaving.)
[19:41] * sagelap (~sage@249.sub-70-199-197.myvzw.com) Quit (Ping timeout: 480 seconds)
[19:41] * maelfius (~mdrnstm@66.209.104.107) Quit (Quit: Leaving.)
[19:46] * maelfius (~mdrnstm@66.209.104.107) has joined #ceph
[19:48] * jjgalvez (~jjgalvez@12.248.40.138) has joined #ceph
[19:49] * LarsFronius_ (~LarsFroni@95-91-242-159-dynip.superkabel.de) has joined #ceph
[19:56] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:1c4e:c926:c068:864c) Quit (Ping timeout: 480 seconds)
[19:56] * LarsFronius_ is now known as LarsFronius
[20:01] * LarsFronius (~LarsFroni@95-91-242-159-dynip.superkabel.de) Quit (Quit: LarsFronius)
[20:12] * ajm (~ajm@adam.gs) has joined #ceph
[20:14] * sagelap (~sage@176.sub-70-199-197.myvzw.com) has joined #ceph
[20:19] <wido> joshd: No new way to get these values in libvirt is there? We still need to pass them in the disk path?
[20:20] <wido> I want to prevent having a ceph.conf on every hypervisor node
[20:20] <wido> Looking for getting this into CloudStack, that's why
[20:20] <joshd> hmm? which values?
[20:21] <joshd> yeah, any extra parameters still need to go in the disk path
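For reference, "in the disk path" means appending colon-separated key=value options to the rbd source string that qemu/librbd parses. A sketch (pool/image names are placeholders; exact option spelling should be checked against the rbd-config-ref docs for your version):

```
# qemu command line: options ride along in the rbd "filename"
-drive file=rbd:mypool/myimage:rbd_cache=true,format=raw,if=virtio

# libvirt: the same colon-separated options go in the <source> name attribute, e.g.
#   <source protocol='rbd' name='mypool/myimage:rbd_cache=true'/>
```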
[20:21] * KevinPerks (~Adium@38.122.20.226) has joined #ceph
[20:21] <Fruit> hrmm, on http://ceph.com/docs/master/cluster-ops/authentication/ it says "Deprecated since version 0.51.". Does that mean everything following that line is deprecated?
[20:22] <joshd> Fruit: no, just the 'auth supported' config setting
[20:22] <Fruit> oh right.
[20:22] <wido> joshd: Yes, I meant the values indeed. Still need to be appended to the disk path, got it
[20:22] <Fruit> how do I generate a key to attach a normal (non-admin) client?
[20:22] <joao> sjust, whenever you have a couple of minutes, please check the topmost commit on leveldbstore-init-refactor
[20:23] * sagelap (~sage@176.sub-70-199-197.myvzw.com) Quit (Ping timeout: 480 seconds)
[20:23] <joshd> Fruit: http://ceph.com/docs/master/cluster-ops/authentication/#generate-a-key
[20:24] <sjust> joao: lookin
[20:24] <joshd> the ceph-authtool man page has some more example capabilities
[20:24] <joao> ty
[20:25] <Fruit> joshd: that doesn't seem to explain what capabilities are required for a specific client, or even what capabilities exist (or what they do: what does it mean to have rw access to an osd?)
[20:25] <Fruit> (if that simply hasn't been documented yet, I understand)
[20:25] <joshd> yeah, the full possible capabilities still need documenting
[20:26] <joshd> osd rw means it can read/write to osds, and it needs mon r to be able to get the osdmap from the monitor and talk to the osds
[20:26] <Fruit> there are no per-dataset acls?
[20:26] <sjust> joao: that looks good
[20:27] <joshd> Fruit: you can restrict the osd ones per pool
[20:27] <joshd> that part's in the ceph-authtool man page
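Putting the pieces above together, a sketch of generating a restricted, non-admin key (client name, keyring path, and pool name are placeholders; the capability syntax is from the ceph-authtool man page mentioned above — verify it against the version you're running):

```
# create a keyring with a new key for a hypothetical client.foo
ceph-authtool --create-keyring /etc/ceph/keyring.foo --gen-key -n client.foo

# mon r lets it fetch the osdmap; osd rw is restricted to one pool
ceph-authtool /etc/ceph/keyring.foo -n client.foo \
    --cap mon 'allow r' --cap osd 'allow rw pool=mypool'

# register the key with the cluster
ceph auth add client.foo -i /etc/ceph/keyring.foo
```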
[20:28] <Fruit> right, thanks, I'll take a look
[20:30] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) Quit (Quit: Leaving.)
[20:32] * KevinPerks (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[20:33] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:43] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[20:45] <Fruit> joshd: one last question: in #enabling-authentication the client.admin key is created before the secret monitor key is uploaded. since cephx authenticates both ways, how can the admin client identify the mon?
[20:46] <joshd> Fruit: that's assuming your cluster starts without authentication enabled, so the admin client doesn't authenticate the mon
[20:47] * KevinPerks (~Adium@38.122.20.226) has joined #ceph
[20:47] <Fruit> but after step 6, the admin client won't be able to authenticate the mon anymore, since it never put the mon key in its ring
[20:54] * deepsa_ (~deepsa@115.184.27.221) has joined #ceph
[20:56] * deepsa (~deepsa@122.167.171.159) Quit (Ping timeout: 480 seconds)
[20:56] * deepsa_ is now known as deepsa
[20:57] * krisek_ (~kris@dsdf-4db5018d.pool.mediaWays.net) has left #ceph
[21:02] <Fruit> not trying to be a pain - just trying to understand cephx :)
[21:16] * KevinPerks (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[21:23] * pentabular (~sean@adsl-70-231-141-17.dsl.snfc21.sbcglobal.net) has joined #ceph
[21:32] * deepsa (~deepsa@115.184.27.221) Quit (Quit: Computer has gone to sleep.)
[21:39] * nhm_ (~nhm@67-220-20-222.usiwireless.com) has joined #ceph
[21:42] * nhm (~nhm@174-20-32-79.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[21:47] * KevinPerks (~Adium@38.122.20.226) has joined #ceph
[21:48] <joshd> Fruit: the client only needs its key to authenticate
[21:48] <joshd> the monitors keep track of all the keys internally
[21:48] * BManojlovic (~steki@195.13.166.253) has joined #ceph
[21:49] <Fruit> joshd: oh I thought cephx authenticated both ways
[21:53] <joshd> I think this is still pretty accurate: http://www.mail-archive.com/ceph-devel@lists.sourceforge.net/msg00328.html
[21:53] <joshd> there's another mail I remember describing what was implemented, but I can't seem to find it now
[21:58] * KevinPerks (~Adium@38.122.20.226) Quit (Ping timeout: 480 seconds)
[21:59] <joshd> if you're really curious, there's a bunch more detail at http://ceph.com/docs/wip-msgauth/dev/cephx_protocol/
[22:02] <Fruit> ah, I misunderstood/misread "The authentication protocol is such that both parties are able to prove to each other they have a copy of the key"
[22:02] <Fruit> there's only one key they're proving to each other (namely, the client's)
[22:04] <joshd> yeah, the monitors are implicitly trusted
[22:04] * KevinPerks (~Adium@38.122.20.226) has joined #ceph
[22:06] <Fruit> I think I finally get it now. thanks for your patience!
[22:07] <joshd> you're welcome :)
[22:07] * KevinPerks1 (~Adium@2607:f298:a:607:4d81:5d65:cf8c:e3bb) has joined #ceph
[22:07] * KevinPerks (~Adium@38.122.20.226) Quit (Read error: Connection reset by peer)
[22:11] <dspano> Just did firmware maintenance on one of my osds then brought it back into the cluster with no problems. ceph kicks a$$!
[22:23] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) Quit (Quit: Leaving)
[22:36] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[22:36] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[22:38] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[22:38] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit ()
[22:38] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit ()
[22:39] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[22:45] * wido (~wido@2a00:f10:104:206:9afd:45af:ae52:80) Quit (Remote host closed the connection)
[22:45] * wido (~wido@2a00:f10:104:206:9afd:45af:ae52:80) has joined #ceph
[22:46] * masterpe_ (~masterpe@87.233.7.43) has joined #ceph
[22:47] * masterpe (~masterpe@2001:990:0:1674::1:82) Quit (Read error: Connection reset by peer)
[22:48] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Quit: Leaving.)
[22:48] <maelfius> dspano: I agree. (for very similar reasons recently)
[22:52] <dspano> :
[22:52] <dspano> :)
[22:58] * KevinPerks (~Adium@38.122.20.226) has joined #ceph
[22:59] * KevinPerks1 (~Adium@2607:f298:a:607:4d81:5d65:cf8c:e3bb) Quit (Read error: Connection reset by peer)
[23:13] * pentabular (~sean@adsl-70-231-141-17.dsl.snfc21.sbcglobal.net) has left #ceph
[23:14] * slang (~slang@38.122.20.226) has left #ceph
[23:14] * slang (~slang@38.122.20.226) has joined #ceph
[23:17] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:18] * Karcaw (~evan@96-41-198-212.dhcp.elbg.wa.charter.com) Quit (Ping timeout: 480 seconds)
[23:34] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[23:35] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:55] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:56] <tren> Dylan, you there?
[23:57] <tren> damnit, wrong irc :)
[23:58] * ninkotech (~duplo@89.177.137.231) Quit (Quit: Konversation terminated!)
[23:58] * ninkotech (~duplo@89.177.137.231) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.