#ceph IRC Log


IRC Log for 2013-09-15

Timestamps are in GMT/BST.

[0:03] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:12] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[0:17] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[0:42] * AfC (~andrew@2001:44b8:31cb:d400:2ad2:44ff:fe08:a4c) has joined #ceph
[0:43] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[0:44] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[0:54] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[0:55] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[0:59] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[1:02] * AfC (~andrew@2001:44b8:31cb:d400:2ad2:44ff:fe08:a4c) Quit (Ping timeout: 480 seconds)
[1:03] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:05] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) has joined #ceph
[1:09] * cfreak200 (~cfreak200@p4FF3F53A.dip0.t-ipconnect.de) has joined #ceph
[1:12] * cfreak201 (~cfreak200@p4FF3F8B8.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:13] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:31] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Remote host closed the connection)
[1:36] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[1:39] * dspano (~AndChat13@h207.4.141.67.dynamic.ip.windstream.net) has joined #ceph
[1:41] * dspano (~AndChat13@h207.4.141.67.dynamic.ip.windstream.net) Quit ()
[1:41] <mech422> Does anyone know how you make a file.managed ONLY update when a 'parent' file.managed actually changes ?
[1:43] <mech422> (specifically, I'm creating a 'flag' file (.this_step_done), and I only want the 2nd file.managed to fire off the first time the state is executed (when I create .this_step_done) )
[1:45] <mech422> crud - wrong window...
[1:45] <mech422> sorry
[1:50] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:51] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[1:53] * sjm (~sjm@CPE001b24bd8cd3-CM001bd7096236.cpe.net.cable.rogers.com) has joined #ceph
[1:56] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[2:04] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:04] * diegows (~diegows@190.190.11.42) has joined #ceph
[2:11] * cfreak201 (~cfreak200@p4FF3F447.dip0.t-ipconnect.de) has joined #ceph
[2:13] * cfreak200 (~cfreak200@p4FF3F53A.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:13] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:34] * scheuk (~scheuk@204.246.67.78) Quit (Ping timeout: 480 seconds)
[2:37] <lx0> wow, ceph_crc32c_le_generic is ceph-osd's top cpu eater by far during recovery. it's a shame that such an expensive operation is pretty much performed twice on each written block on btrfs. I wonder if there's any way to make it cheaper (short of replacing servers with Intel ones that have built-in crc32 instructions :-)
[2:38] <lx0> (four times, actually, if you count the crc computations at the source of replication too ;-)
[2:38] * lx0 is now known as lxo
[2:41] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:49] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:56] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[3:01] * sjm (~sjm@CPE001b24bd8cd3-CM001bd7096236.cpe.net.cable.rogers.com) Quit (Quit: Leaving)
[3:09] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:18] * Lea (~LeaChim@host86-135-252-168.range86-135.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:30] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:50] * haomaiwang (~haomaiwan@117.79.232.211) has joined #ceph
[3:56] * Steki (~steki@198.199.65.141) Quit (Ping timeout: 480 seconds)
[4:01] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[4:03] * sjm (~sjm@CPE001b24bd8cd3-CM001bd7096236.cpe.net.cable.rogers.com) has joined #ceph
[4:04] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:09] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[4:30] * sjm (~sjm@CPE001b24bd8cd3-CM001bd7096236.cpe.net.cable.rogers.com) Quit (Ping timeout: 480 seconds)
[4:49] * sjm (~sjm@CPE001b24bd8cd3-CM001bd7096236.cpe.net.cable.rogers.com) has joined #ceph
[5:01] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[5:06] * fireD_ (~fireD@93-142-229-143.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD (~fireD@93-139-133-20.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:09] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[5:33] * a_____ (~a@pool-173-55-143-200.lsanca.fios.verizon.net) has joined #ceph
[5:35] * AfC (~andrew@gateway.syd.operationaldynamics.com) has joined #ceph
[6:02] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[6:10] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:20] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[6:38] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[6:39] * sleinen1 (~Adium@2001:620:0:25:205a:73e9:e071:c4f0) has joined #ceph
[6:46] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:10] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) has joined #ceph
[7:10] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[7:11] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) has joined #ceph
[7:19] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[7:25] * sjustlaptop (~sam@ip174-70-103-52.no.no.cox.net) has joined #ceph
[7:26] * a_____ (~a@pool-173-55-143-200.lsanca.fios.verizon.net) Quit (Quit: This computer has gone to sleep)
[8:00] * AfC (~andrew@gateway.syd.operationaldynamics.com) Quit (Quit: Leaving.)
[8:04] <Qu310> Lo, i'm trying to set up a dumpling radosgw, however it seems the radosgw doesn't want to start; i'm getting this in my radosgw.log: http://pastie.org/private/e7jmcag2oiwy3vzlkima
[8:52] * sjustlaptop (~sam@ip174-70-103-52.no.no.cox.net) Quit (Ping timeout: 480 seconds)
[8:58] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[9:01] <mech422> I think everyone fell asleep :-P
[9:02] <mech422> no one left but us noobs :-D
[9:04] * Svedrin (svedrin@ketos.funzt-halt.net) Quit (Ping timeout: 480 seconds)
[9:04] * krechet (~krechet@213.87.142.186) has joined #ceph
[9:05] * krechet (~krechet@213.87.142.186) Quit (Quit: I'm leaving you (xchat 2.4.5 or newer))
[9:16] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[9:22] * haomaiwang (~haomaiwan@117.79.232.211) Quit (Remote host closed the connection)
[9:25] <Qu310> seems so
[9:29] * sjm (~sjm@CPE001b24bd8cd3-CM001bd7096236.cpe.net.cable.rogers.com) Quit (Ping timeout: 480 seconds)
[9:30] <mech422> Qu310: my radosgw time got turned into 'salt vs. ansible' time...
[9:31] <mech422> so all I have atm is a very well work stanza to add to ceph.conf :-P (But hey! CM has applied it like 100 times now!!)
[9:31] <mech422> s/work/worn/
[9:46] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[9:54] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:54] * KindTwo (~KindOne@198.14.206.87) has joined #ceph
[9:54] * KindTwo is now known as KindOne
[10:52] * mrprud (~mrprud@ANantes-554-1-246-148.w2-1.abo.wanadoo.fr) has joined #ceph
[11:04] <jtang> dheh
[11:04] <jtang> mech422: which one did you like more?
[11:04] <jtang> ansible or salt/
[11:04] <jtang> ?
[11:04] <jtang> im using ansible to roll out our radosgw
[11:04] <mech422> well - salt is sexy and prolly has $$ potential with contract work...
[11:05] <jtang> and we're probably going to create a few modules for ansible to roll out osd, mds and mons
[11:05] <jtang> yea i suppose
[11:05] <mech422> but I prefer ansibles 'imperative' style to the dependency hell you get with a graph based declarative setup like salt
[11:05] <jtang> ansible is great for deploying, but not necessarily daily ops
[11:05] <jtang> we went down the route of re-deploying instead of trying to maintain and fix things
[11:05] <mech422> check this hack out: http://pastebin.com/xSsXzTpp
[11:06] <jtang> isnt there a notification or assemble command in salt?
[11:06] <mech422> jtang: I'm not sure what the 'daily ops' I keep reading about is? stuff like deploying a new virtual web host or adding a new user should be straightforward in either?
[11:07] <mech422> I could NOT figure out how to get the dam thing to only modify that file the first time the script is run :-P
[11:07] <jtang> deploying with both is fine, it's when you need to maintain a running system that salt looks better
[11:07] <jtang> i guess it depends on how big your site is
[11:07] <jtang> and what you are doing
[11:07] <mech422> I spent all day reading and hanging out in #salt - didn't get any real solution
[11:07] <jtang> btw, i found using ansible to 'migrate' stuff to be great
[11:07] <mech422> yeah - I suppose - we're small - about a dozen nodes + vms
[11:07] <jtang> rolling updates ftw!
[11:08] <jtang> we've several hundred at our site :P
[11:08] <mech422> oh? cool... how's performance ?
[11:08] <jtang> and we're small
[11:08] <mech422> oh hell - yeah - performance will be fine for me then :-P
[11:08] <jtang> mech422: speed wise, meh, but the serial feature in ansible rocks
[11:08] <jtang> you can pull out N number of machines to execute against
[11:08] <jtang> and it stops on first failure
[11:09] <jtang> so you dont get this stuff of "oh crap, i just blew away 300 nodes"
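(A minimal sketch of the serial behaviour jtang describes above, with an illustrative host group and package names that are not taken from his playbooks: Ansible runs the play against a small batch of hosts at a time and aborts before touching the rest if a whole batch fails.)

    # rolling update, two hosts at a time
    - hosts: ceph-osds            # illustrative group name
      serial: 2
      tasks:
        - name: upgrade ceph packages
          apt: pkg=ceph state=latest update_cache=yes
        - name: restart ceph services on this host
          service: name=ceph state=restarted
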
[11:09] <mech422> hehe
[11:09] <mech422> I should have your problems :-P
[11:09] <jtang> well i dont maintain the clusters anymore
[11:09] <mech422> bet ya got a nice fancy ceph setup too??
[11:09] <jtang> im more working on storage/preservation in our group
[11:09] <mech422> fun!
[11:10] <jtang> still its great being able to do rolling updates of distributed systems
[11:10] <mech422> I've only got 5 nodes and about 4.5T
[11:10] <jtang> of ceph storage?
[11:10] <mech422> everything running thru a single gigE switch too :-P
[11:10] <mech422> yeah
[11:10] <jtang> heh
[11:10] <jtang> we've 16tb right now in the initial ceph system for the preservation/archive project
[11:11] <jtang> there's a 100tb system for *other* stuff in work
[11:11] <mech422> wow - sweet....
[11:11] <mech422> feel free to mail me old harddrives :-P
[11:11] <jtang> 100tb is small, the original system was planned for ~250tb
[11:11] <jtang> we had *problems* with backblaze pods
[11:12] <mech422> I'm running on used dell C6100's - 2x4 blades for $1500 :-P
[11:12] <jtang> ah we have cluster of those
[11:12] <jtang> i think
[11:12] <jtang> they are nice machines
[11:12] <jtang> our setup is running of dell r720's
[11:12] <mech422> I actually love them aside from the IPMI/DRAC thing
[11:13] <mech422> I spent hours the other night trying to get java 6 installed to run the KVM
[11:13] <jtang> i think we'll probably get cheaper boxes for the rest of the storage nodes
[11:14] <mech422> I'm actually really annoyed about this salt thing....
[11:14] <mech422> its a freaking CM system - there has _got_ to be a way to only process files when they change... I have to be doing it wrong
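(What mech422 is after, a file.managed that fires only when the flag file it depends on actually changes, maps to a requisite in Salt. A minimal sketch, assuming a Salt release that has the 'onchanges' requisite; the paths and source are illustrative. On releases without it, the usual workaround was a cmd.wait state with a 'watch' on the flag file.)

    /srv/app/.this_step_done:
      file.managed:
        - contents: 'step done'

    /srv/app/derived.conf:
      file.managed:
        - source: salt://app/derived.conf
        # only applied in runs where the flag file above reports a change
        - onchanges:
          - file: /srv/app/.this_step_done
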
[11:15] <jtang> heh
[11:16] <jtang> are you running the radosgw on el6 or ubuntu?
[11:16] <jtang> im curious, i had lots of deployment fun with it
[11:16] <jtang> i was never too happy with it
[11:16] <mech422> no..not at all yet...
[11:16] <mech422> I was supposed to be doing that this weekend
[11:16] <mech422> but the storage boxes are debian wheezy
[11:16] <jtang> ah okay
[11:17] <mech422> VM hosts are ubuntu - but given a choice, I prefer debian for 'critical' stuff
[11:17] <mech422> really nice that inktank makes debs for wheezy :-D
[11:17] <jtang> ah okay
[11:18] <jtang> im kinda glad they are doing stuff with erasure coding
[11:18] <jtang> its going to fall nicely into our plans
[11:18] <mech422> oh? secure scrubbing ? I guess with that much space you'll need it...
[11:18] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[11:18] <mech422> for us 'scrubbing' will be fire up a new VM :-P
[11:19] <jtang> its more than just that, its just the space saving
[11:19] <jtang> and the on read checking of data that we're interested in
[11:19] <jtang> replication doesnt really work (cheaply) once you get to pb sized systems
[11:19] <mech422> oh? they adding crc checking to data reads ? I hadn't seen that...
[11:19] <jtang> especially in academia where we dont get much money for anything
[11:20] * Steki (~steki@198.199.65.141) has joined #ceph
[11:20] <jtang> mech422: checkout the erasure coding pages in the wikis
[11:20] <mech422> hehe - try working for startups :-)
[11:20] <jtang> its not there yet from what i know, though i did see a commit in master in the last few days
[11:20] <mech422> yeah - I'll have to add that to my RTFM list... so much to do :-P
[11:22] <jtang> anyway, i'd like to hear more about salt + ceph deploys
[11:22] <jtang> :)
[11:22] <jtang> cause it seems the salt stack is pretty cool and has lots of momentum
[11:22] <mech422> oh..dunno about salt...but ummm...
[11:23] * jtang wishes he did some more sysadmining type stuff
[11:23] <jtang> i kinda miss it
[11:24] <mech422> hmm - he's not in channel
[11:24] <mech422> there was a guy here end of last week building custom puppet? chef?? recipes for pulling up a ceph cluster from scratch
[11:24] <jtang> there are chef recipes already to do that
[11:24] <jtang> the puppet stuff looks like its getting mature
[11:25] <jtang> i think there is juju charms as well for ceph
[11:25] <mech422> jtang: oh ... wonder if anyone told him that :-P
[11:25] <mech422> have you tried juju ?
[11:25] <jtang> nope
[11:25] <jtang> it seems ubuntu centric and i have no real interest in it
[11:25] <jtang> :P
[11:25] <mech422> hmm - me either... it seemed sorta similar to ansible as far as recipes went
[11:26] <mech422> at least, from the first 2 pages of advertising hype
[11:26] <jtang> heh, cool salt is being used by fusion-io
[11:27] <jtang> well ansible is nice cause of the low startup costs
[11:27] <jtang> the devs on our team are able to use/modify it without having to learn yet another DSL
[11:28] <jtang> its primarily YAML and its well known/understood
[11:28] * Jedicus (~user@108-64-153-24.lightspeed.cicril.sbcglobal.net) Quit (Read error: Operation timed out)
[11:28] <mech422> yeah...
[11:29] <mech422> Salt added an 'overstates' and 'order' setup to allow you to directly order build steps
[11:29] <jtang> i wonder if the inktank people are going to be at SC13
[11:29] <mech422> that sorta indicates something was......lacking.. in their DSL/Yaml setup (IMHO)
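(For reference, the 'order' flag mech422 mentions is just an extra keyword on a state declaration. A hedged sketch with made-up state IDs; states carrying an explicit order run before anything that does not declare one.)

    install-ceph:
      pkg.installed:
        - name: ceph
        - order: 1

    write-ceph-conf:
      file.managed:
        - name: /etc/ceph/ceph.conf
        - source: salt://ceph/ceph.conf
        - order: 2
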
[11:32] <jtang> hmmm, these openstack and cloudstack features make me wonder if we made a bad choice with opennebula
[11:32] <mech422> oh...I almost went opennebula
[11:33] <jtang> we're using opennebula
[11:33] <mech422> it seemed much less 'intrusive' than openstack on the overall network setup
[11:33] <jtang> and had to *fix* some of the ceph scripts for it to work
[11:33] <jtang> snapshotting and attaching additional drives dont work in our deployment
[11:33] <jtang> but we can live with that
[11:34] <mech422> oh - been a couple of people in here this weekend having problems hotplugging rbd images with libvirt
[11:34] <jtang> its probably (usually) the apparmor rules in ubuntu which causes me problems
[11:35] <jtang> the recommendation is generally to disable cephx (which is a bad idea)
[11:35] <mech422> tbh - I skipped cephx :-P
[11:35] <jtang> we can hotplug stuff manually, its just not "scripted" for opennebula
[11:35] <jtang> we're kinda not pushed to do it, cause we dont need it
[11:36] <jtang> im having a bit of a hard time understanding cephx
[11:36] <jtang> i wish there were concrete examples of deploying it
[11:36] <mech422> there's a lot of secrets to play with :-P
[11:36] <mech422> even the secret-secret you have to give libvirt to lock up somewhere ...
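(The 'secret-secret' here is the cephx key that libvirt stores as a secret object so qemu can open rbd volumes. A hedged sketch of how it could be scripted from a playbook; the group name, uuid variable, xml path and client.libvirt user are assumptions, not from the log.)

    - hosts: hypervisors
      tasks:
        - name: register the secret definition (an xml file declaring a 'ceph' type secret and its uuid)
          command: virsh secret-define /etc/ceph/libvirt-secret.xml
        - name: load the cephx key into that secret
          shell: virsh secret-set-value --secret {{ libvirt_secret_uuid }} --base64 "$(ceph auth get-key client.libvirt)"
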
[11:37] <jtang> https://github.com/jcftang/ansible-opennebula
[11:37] <jtang> for what its worth
[11:38] <jtang> and https://github.com/jcftang/ansible-ceph
[11:38] <jtang> there's our radosgw deployment scripts there
[11:38] <jtang> the ceph cluster setup we have hasnt been "automated" away yet
[11:38] <jtang> as we're in the middle of doing a few things
[11:39] <mech422> oh nice... you've been busy :-)
[11:39] <jtang> https://github.com/jcftang/ansible-ceph/blob/master/site.yml
[11:39] <jtang> thats the play for setting up a haproxy load balancer + radosgw
[11:39] <jtang> it assumes an existing ceph cluster
[11:40] <mech422> oh! I'm gonna need that if I ever get this CM stuff done
[11:40] * Jedicus (~user@108-64-153-24.lightspeed.cicril.sbcglobal.net) has joined #ceph
[11:40] <jtang> https://github.com/jcftang/ansible-ceph/blob/master/playbooks/rolling_update.yml
[11:40] <mech422> I wanna run radosgw on all 5 nodes, and loadbalance
[11:40] <jtang> thats the rolling update stuff
[11:40] <jtang> we take out machines from the LB then update it
[11:40] <jtang> then put it back in
[11:41] <jtang> ansible allows tasks like this to be done *really easily*
[11:41] <mech422> yeah - it looks really clean
[11:42] <jtang> and this you might be interested in
[11:42] <jtang> https://github.com/jcftang/ansible-ceph/blob/master/playbooks/upgrade-ceph.yml
[11:42] <jtang> i recently did a rolling update of a cuttlefish to dumpling on a test system
[11:43] <mech422> hehe - I just bookmarked your main github page :-P
[11:43] <jtang> im really getting to like ansible for recording steps
[11:43] <mech422> lots of fun stuff to go digging thru!
[11:43] <jtang> *sigh* its not the day job
[11:43] <jtang> its just stuff we had to write for development
[11:43] <jtang> prod/dev/test are near identical systems in our work place
[11:44] * ScOut3R (~scout3r@4E5C2305.dsl.pool.telekom.hu) has joined #ceph
[11:44] <jtang> the cool stuff with the repository systems isnt online
[11:44] <jtang> just the *helpers* that got us to where we are
[11:44] <mech422> hehe
[11:44] <jtang> the opennebula plays need to be fixed
[11:45] <jtang> they aren't completely there yet
[11:45] <jtang> at least not for a generic deploy, its right now built for our environment
[11:45] <mech422> only thing that concerns me with ansible (at least from what I've read...) is variable declarations
[11:46] <jtang> yea its a pain in the ass
[11:46] <mech422> seems like you got a ton of different places you can define stuff
[11:46] <jtang> just use group_vars
[11:46] <mech422> hehe
[11:46] <jtang> and roles defaults
[11:46] <jtang> then override it in your hosts file for custom deployments
[11:46] <jtang> and never put vars in the tasks/plays themselves if you can get away with it
[11:46] <jtang> i've been burnt a few times
[11:47] <jtang> :P
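(A hedged sketch of the layering jtang suggests; the file names and variable are made up. Role defaults are the fallback, group_vars carry the site-wide value, and a per-host entry in the inventory wins over both.)

    # roles/ceph/defaults/main.yml   (lowest precedence)
    ceph_release: dumpling

    # group_vars/storage.yml         (overrides the role default for every host in 'storage')
    ceph_release: cuttlefish

    # hosts inventory file           (highest of the three, for a single machine)
    #   osd3.example.com ceph_release=dumpling
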
[11:47] <mech422> I still want to look at ansible and puppet ( and maybe chef, but I'm not really a ruby guy )
[11:48] <mech422> the puppet DSL samples looked fairly clean - but I've heard both puppet and chef are a PITA to install/setup
[11:48] <jtang> depends on how big your systems are ;)
[11:48] <jtang> if its just a few, its probably a pain
[11:48] <jtang> if its a few hundred then its worth it
[11:49] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:49] <mech422> I doubt we'll ever get above a dozen hosts and maybe 2 dozen vms
[11:49] <mech422> but if I have to learn something, I'd prefer to learn something I can re-use for another client
[11:49] <jtang> heh, learn them all!
[11:50] <jtang> right i got to go
[11:50] <mech422> yeah...who needs sleep ? :-P
[11:50] <jtang> ttyl
[11:50] <mech422> take care!
[11:50] <mech422> nice talking with ya!
[11:50] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[12:01] * malcolm_ (~malcolm@101.165.48.42) has joined #ceph
[12:04] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[12:05] * sleinen2 (~Adium@2001:620:0:25:84a3:718e:4c5b:8d11) has joined #ceph
[12:07] * sleinen1 (~Adium@2001:620:0:25:205a:73e9:e071:c4f0) Quit (Ping timeout: 480 seconds)
[12:12] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[12:17] * malcolm_ (~malcolm@101.165.48.42) Quit (Ping timeout: 480 seconds)
[12:42] * ScOut3R (~scout3r@4E5C2305.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[13:16] * Lea (~LeaChim@host86-135-252-168.range86-135.btcentralplus.com) has joined #ceph
[14:15] * Kioob1 (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[14:15] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (Read error: Connection reset by peer)
[14:48] * sleinen2 (~Adium@2001:620:0:25:84a3:718e:4c5b:8d11) Quit (Quit: Leaving.)
[14:49] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[14:50] * sleinen1 (~Adium@2001:620:0:25:3d5e:df29:cb60:88a7) has joined #ceph
[14:57] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[15:16] * sjm (~sjm@CPE001b24bd8cd3-CM001bd7096236.cpe.net.cable.rogers.com) has joined #ceph
[15:36] * sjm (~sjm@CPE001b24bd8cd3-CM001bd7096236.cpe.net.cable.rogers.com) Quit (Remote host closed the connection)
[15:51] * diegows (~diegows@190.190.11.42) has joined #ceph
[16:19] * diegows (~diegows@190.190.11.42) Quit (Read error: Operation timed out)
[16:20] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[17:00] * haomaiwang (~haomaiwan@211.155.113.239) has joined #ceph
[17:09] * dspano (~AndChat13@h207.4.141.67.dynamic.ip.windstream.net) has joined #ceph
[17:10] * AndChat|139376 (~AndChat13@2600:1000:b117:7c56:2982:ffbd:893c:cc9a) has joined #ceph
[17:11] * AndChat|139376 (~AndChat13@2600:1000:b117:7c56:2982:ffbd:893c:cc9a) has left #ceph
[17:17] * dspano (~AndChat13@h207.4.141.67.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[17:31] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:32] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:34] * hflai (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[17:37] * a (~a@pool-173-55-143-200.lsanca.fios.verizon.net) has joined #ceph
[17:38] * a is now known as Guest6736
[18:08] * sleinen1 (~Adium@2001:620:0:25:3d5e:df29:cb60:88a7) Quit (Quit: Leaving.)
[18:08] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[18:09] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[18:09] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Read error: Connection reset by peer)
[18:09] * scuttlemonkey (~scuttlemo@38.127.1.5) has joined #ceph
[18:09] * ChanServ sets mode +o scuttlemonkey
[18:10] * sleinen (~Adium@2001:620:0:25:995a:f82f:dffe:356c) has joined #ceph
[18:10] * ScOut3R (~scout3r@4E5C2305.dsl.pool.telekom.hu) has joined #ceph
[18:11] * nwat (~nwat@eduroam-237-79.ucsc.edu) has joined #ceph
[18:11] * foosinn (~stefan@office.unitedcolo.de) Quit (Quit: Leaving)
[18:17] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:24] * rtek (~sjaak@rxj.nl) Quit (Ping timeout: 480 seconds)
[19:25] * rtek (~sjaak@rxj.nl) has joined #ceph
[19:28] * Frank9999 (~Frank@kantoor.transip.nl) Quit (Ping timeout: 480 seconds)
[19:32] * sjm (~sjm@CPE001b24bd8cd3-CM001bd7096236.cpe.net.cable.rogers.com) has joined #ceph
[19:33] * ScOut3R (~scout3r@4E5C2305.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[20:02] * Guest6736 (~a@pool-173-55-143-200.lsanca.fios.verizon.net) Quit (Quit: This computer has gone to sleep)
[20:07] * scuttlemonkey (~scuttlemo@38.127.1.5) Quit (Ping timeout: 480 seconds)
[20:11] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[20:33] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[20:36] * sleinen (~Adium@2001:620:0:25:995a:f82f:dffe:356c) Quit (Quit: Leaving.)
[20:36] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[20:39] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[20:39] * sarob (~sarob@2601:9:7080:13a:dcb2:afb8:f2c:bd3f) has joined #ceph
[20:41] * nwat (~nwat@eduroam-237-79.ucsc.edu) Quit (Read error: Operation timed out)
[20:44] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:47] * sarob (~sarob@2601:9:7080:13a:dcb2:afb8:f2c:bd3f) Quit (Ping timeout: 480 seconds)
[20:50] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[20:52] * sleinen (~Adium@2001:620:0:25:c9d3:e272:9d5e:f9bf) has joined #ceph
[20:54] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[20:55] * sjm (~sjm@CPE001b24bd8cd3-CM001bd7096236.cpe.net.cable.rogers.com) Quit (Remote host closed the connection)
[21:02] * sjustlaptop (~sam@ip174-70-103-52.no.no.cox.net) has joined #ceph
[21:10] * sjustlaptop (~sam@ip174-70-103-52.no.no.cox.net) Quit (Read error: Operation timed out)
[21:18] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[21:36] * nigwil (~idontknow@174.143.209.84) Quit (Ping timeout: 480 seconds)
[21:45] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[21:49] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[21:50] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[21:52] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[21:58] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:00] * berant (~blemmenes@24-236-241-163.dhcp.trcy.mi.charter.com) has joined #ceph
[22:01] * danieagle (~Daniel@177.133.174.145) has joined #ceph
[22:07] * wschulze (~wschulze@cpe-98-14-21-89.nyc.res.rr.com) has joined #ceph
[22:09] * sjustlaptop (~sam@ip174-70-103-52.no.no.cox.net) has joined #ceph
[22:09] * a (~a@pool-173-55-143-200.lsanca.fios.verizon.net) has joined #ceph
[22:10] * a is now known as Guest6751
[22:25] * Vjarjadian (~IceChat77@05453253.skybroadband.com) Quit (Quit: Now if you will excuse me, I have a giant ball of oil to throw out my window)
[22:27] * odi (~quassel@2a00:12c0:1015:136::9) has joined #ceph
[22:44] * a_ (~a@pool-173-55-143-200.lsanca.fios.verizon.net) has joined #ceph
[22:45] * Guest6751 (~a@pool-173-55-143-200.lsanca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[22:49] * madkiss (~madkiss@2001:6f8:12c3:f00f:c153:d4ce:d913:33cd) Quit (Quit: Leaving.)
[22:54] * scuttlemonkey (~scuttlemo@38.127.1.5) has joined #ceph
[22:54] * ChanServ sets mode +o scuttlemonkey
[22:55] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[23:01] * Vjarjadian (~IceChat77@05453253.skybroadband.com) has joined #ceph
[23:01] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[23:09] * odi (~quassel@2a00:12c0:1015:136::9) Quit (Quit: quit...)
[23:09] * odi (~quassel@2a00:12c0:1015:136::9) has joined #ceph
[23:10] * frank9999 (~Frank@kantoor.transip.nl) has joined #ceph
[23:12] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) Quit (Quit: Konversation terminated!)
[23:12] * ScOut3R (~scout3r@4E5C2305.dsl.pool.telekom.hu) has joined #ceph
[23:16] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) has joined #ceph
[23:20] * sleinen (~Adium@2001:620:0:25:c9d3:e272:9d5e:f9bf) Quit (Quit: Leaving.)
[23:20] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[23:21] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[23:21] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Read error: Connection reset by peer)
[23:22] * sleinen (~Adium@2001:620:0:25:5129:5e26:1e64:ffb) has joined #ceph
[23:28] * wschulze (~wschulze@cpe-98-14-21-89.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:29] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:54] * scuttlemonkey (~scuttlemo@38.127.1.5) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.