#ceph IRC Log


IRC Log for 2013-01-19

Timestamps are in GMT/BST.

[0:00] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[0:05] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:05] * mattbenjamin (~matt@aa2.linuxbox.com) Quit (Quit: Leaving.)
[0:13] <gregaf> xmltok: I would recommend fragmenting, yes
[0:13] <xmltok> yeah, i split it up into 100 buckets and i am seeing a much better distribution of OSD load
[0:13] <gregaf> it ought to work without doing so but the bucket index object is currently stored in one place so that would be quite a lot of data stuck in one place
[0:14] <xmltok> aha. so that explains it
[0:14] <xmltok> is the bucket index object stored in a special pool? i could put that pool on SSD
[0:17] <xmltok> im up to 200 puts a second and receiving around 3.5MB/s on each OSD hosting machine
[0:18] <gregaf> I think it is, but not sure which one…yehudasa?
[0:18] <yehudasa> gregaf, xmltok: no, resides on the same pool as the other data
[0:19] <xmltok> crud
[0:19] <xmltok> has there been anyone with very heavy write loads? im not sure how i can help speed this up
[0:20] <phantomcircuit> hmm this is a conundrum
[0:20] <gregaf> it should parallelize fairly well
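For reference, a minimal sketch of the bucket-fragmentation approach gregaf recommends above, assuming an S3-compatible client such as s3cmd is in use; the bucket-name prefix and the count of 100 are purely illustrative:

# pre-create many buckets so no single bucket index object becomes a hot spot
for i in $(seq 0 99); do
    s3cmd mb "s3://mydata-$i"
done
# route each object to a bucket by hashing its key
key="path/to/object"
bucket="mydata-$(( $(printf '%s' "$key" | cksum | cut -d' ' -f1) % 100 ))"
s3cmd put ./local-file "s3://$bucket/$key"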
[0:20] * aliguori (~anthony@cpe-70-112-157-151.austin.res.rr.com) Quit (Quit: Ex-Chat)
[0:21] <phantomcircuit> i have a cluster of qemu instance using rbd from when i had a single monitor
[0:21] <phantomcircuit> i guess i have to restart them to remove the original monitor dont i
[0:23] * tnt (~tnt@120.194-67-87.adsl-dyn.isp.belgacom.be) Quit (Read error: Operation timed out)
[0:23] <dmick> phantomcircuit: you mean to change the setting in the domain XML? and did the original single monitor go away, or did you just augment it?
[0:25] <phantomcircuit> i've added 2 additional monitors on dedicated boxes for a total of 3
[0:25] * BManojlovic (~steki@bojanka.net) has joined #ceph
[0:25] <phantomcircuit> i want to add another monitor and remove the original one
[0:25] <xmltok> perhaps i need to tune my osd's, i saw some async journal stuff someplace
[0:26] <phantomcircuit> the box it's on really shouldn't have been running the monitor in the first place
[0:26] <phantomcircuit> the problem is all the qemu instances were started with -drive file=rbd:rbd/vm123:auth_supported=none:mon_host=ns238708.ovh.net\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=writeback,bps_rd=200000000,bps_wr=50000000,iops_rd=10000,iops_wr=50
[0:26] <phantomcircuit> so they just know about the one original monitor
[0:26] <phantomcircuit> i presume i will have to restart all the qemu instances with the new monitors?
[0:28] * leseb_ (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[0:28] <joshd> phantomcircuit: they'll learn about the new monitor when they connect via the monmap. you just won't be able to start new ones with the old ip once you remove the old monitor
[0:29] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[0:29] <phantomcircuit> ok that's fine i'll just change the libvirt pool so it knows about all the new monitors
[0:30] <phantomcircuit> joshd, neat killing the ceph-mon on the original monitor caused the qemu instances to connect to the two new monitors
[0:31] <phantomcircuit> very cool
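For reference, a hedged sketch of what the -drive spec quoted above might look like once it names every monitor; the addresses are placeholders, and the escaped-semicolon separator between mon_host entries is my understanding of qemu's rbd option parsing, so treat it as an assumption:

# illustrative only; throttling and other qemu options from the original command omitted
qemu-system-x86_64 \
  -drive 'file=rbd:rbd/vm123:auth_supported=none:mon_host=10.0.0.1\:6789\;10.0.0.2\:6789\;10.0.0.3\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=writeback'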
[0:36] <xmltok> yehudasa: is this still correct http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg07745.html? I had recreated .rgw.buckets with 1300 pg
[0:37] * joao (~JL@89.181.156.120) has joined #ceph
[0:37] * ChanServ sets mode +o joao
[0:39] <yehudasa> xmltok, that's still correct
[0:40] * jlogan2 (~Thunderbi@72.5.59.176) Quit (Ping timeout: 480 seconds)
[0:43] * jluis (~JL@89.181.154.239) Quit (Ping timeout: 480 seconds)
[0:44] * mattbenjamin (~matt@adsl-75-45-226-110.dsl.sfldmi.sbcglobal.net) has joined #ceph
[0:44] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:44] <xmltok> yehudasa, if that is the case, is there an advantage to creating the other pools manually with <8 pg?
[0:45] <yehudasa> <8 or >8?
[0:45] <xmltok> >8
[0:45] <yehudasa> yeah, it is recommended to create manually all the pools with a higher number of pgs
[0:46] <xmltok> so something like, for a in .rgw.gc .rgw.control .rgw .users.uid .users; do ceph osd pool create $a 1300 1300; done, then fire up radosgw and let it rip?
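A quick way to double-check what a loop like that produced, assuming the pool names above and that `ceph osd pool get` is available in this release:

for p in .rgw.buckets .rgw.gc .rgw.control .rgw .users.uid .users; do
    printf '%s ' "$p"
    ceph osd pool get "$p" pg_num
done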
[0:50] * PerlStalker (~PerlStalk@72.166.192.70) Quit (Quit: ...)
[0:54] * JohansGlock_ (~quassel@kantoor.transip.nl) has joined #ceph
[0:55] * JohansGlock__ (~quassel@kantoor.transip.nl) has joined #ceph
[0:57] * JohansGlock___ (~quassel@kantoor.transip.nl) has joined #ceph
[0:59] * JohansGlock____ (~quassel@kantoor.transip.nl) has joined #ceph
[1:01] * jlogan (~Thunderbi@2600:c00:3010:1:ecc0:67c9:f071:2eb0) has joined #ceph
[1:01] <xmltok> the increase pg count on .rgw.buckets may have been the problem, my throughput is over 500 puts/s now which is about all i can generate
[1:01] * JohansGlock (~quassel@kantoor.transip.nl) Quit (Ping timeout: 480 seconds)
[1:01] * JohansGlock (~quassel@kantoor.transip.nl) has joined #ceph
[1:02] * JohansGlock_ (~quassel@kantoor.transip.nl) Quit (Ping timeout: 480 seconds)
[1:03] * JohansGlock_ (~quassel@kantoor.transip.nl) has joined #ceph
[1:04] * JohansGlock__ (~quassel@kantoor.transip.nl) Quit (Ping timeout: 480 seconds)
[1:06] * JohansGlock__ (~quassel@kantoor.transip.nl) has joined #ceph
[1:06] * JohansGlock___ (~quassel@kantoor.transip.nl) Quit (Ping timeout: 480 seconds)
[1:07] * BManojlovic (~steki@bojanka.net) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:07] * JohansGlock____ (~quassel@kantoor.transip.nl) Quit (Ping timeout: 480 seconds)
[1:10] * JohansGlock___ (~quassel@kantoor.transip.nl) has joined #ceph
[1:10] * JohansGlock (~quassel@kantoor.transip.nl) Quit (Ping timeout: 480 seconds)
[1:11] <yehudasa> xmltok: yes
[1:12] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has left #ceph
[1:13] * JohansGlock_ (~quassel@kantoor.transip.nl) Quit (Ping timeout: 480 seconds)
[1:17] * JohansGlock__ (~quassel@kantoor.transip.nl) Quit (Ping timeout: 480 seconds)
[1:18] * nwat (~Adium@soenat3.cse.ucsc.edu) has left #ceph
[1:21] * xmltok (~xmltok@pool101.bizrate.com) Quit (Quit: Leaving...)
[1:28] <Vjarjadian> anyone know if the techs have decided on a methodology for the geo-replication?
[1:29] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[1:30] <gregaf> the…techs?
[1:30] <gregaf> in any case, Yehuda just sent an email to the list describing how RGW geo-replication is likely to work
[1:31] <Vjarjadian> nice, :)
[1:35] <Vjarjadian> so there is still room for changes... looks interesting
[1:38] * xiaoxi (~xiaoxiche@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[1:42] * benpol (~benp@garage.reed.edu) has left #ceph
[1:46] * xiaoxi (~xiaoxiche@jfdmzpr04-ext.jf.intel.com) Quit ()
[1:47] * xiaoxi (~xiaoxiche@134.134.137.73) has joined #ceph
[1:54] <dec> Why is it that ceph clusters are concentrated in a
[1:54] <dec> single geographical location
[1:54] <dec> (sorry, pasted an accidental newline ...)
[1:55] <dec> Why is it that 'ceph clusters are concentrated in a single geographical location' given crush's capability for data placement across different arbitrary zones/areas
[1:55] <gregaf> its communication and replication strategies are designed for data-center scale networking and latencies
[1:56] <dec> That's a pretty strong assumption that people don't have data-centre scale networking spanning multiple locations
[1:56] <gregaf> heh
[1:57] <gregaf> in that case they can do geographic replication without us doing anything special for them ;)
[1:57] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:57] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit ()
[1:57] <dec> gregaf: great - I was concerned there was some underlying issue in doing so, as that's exactly what we plan to do
[1:58] <dec> our secondary DC is only 0.2ms further away and 20Gbps
[1:58] <gregaf> no problem there; just a lot of people want some geographic redundancy without buying the pipes for it so we have to do something different
[1:58] <dec> makes sense
[1:58] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:58] <gregaf> you'll of course need a monitor in a third location in order to survive the loss of either DC
[1:58] <dec> yup
[1:59] <dec> do you have any idea how much of an issue latency/bandwidth is between the mons?
[1:59] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[1:59] <gregaf> not
[1:59] <dec> in that scenario of a 3rd mon external to both DCs
[2:00] <gregaf> I mean, you could manufacture one I guess
[2:00] <dec> manufacture latency/bandwidth restriction to see whether it's an issue?
[2:00] <gregaf> but the monitors don't do anything latency sensitive in a range that matters to, say cross-US ping times
[2:00] <dec> OK, cool
[2:01] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[2:01] <gregaf> I guess it wouldn't do well on a standard residential internet connection's bandwidth
[2:02] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Remote host closed the connection)
[2:02] <gregaf> but if it's in a DC elsewhere I'd expect it to be fine
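A minimal sketch of the monitor layout being discussed, as it might appear in ceph.conf with one monitor per data centre and a tie-breaker at a third site; hostnames and addresses are placeholders:

cat >> /etc/ceph/ceph.conf <<'EOF'
[mon.a]
        host = dc1-mon
        mon addr = 10.1.0.10:6789
[mon.b]
        host = dc2-mon
        mon addr = 10.2.0.10:6789
[mon.c]
        host = site3-mon
        mon addr = 10.3.0.10:6789
EOF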
[2:02] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[2:02] <dpippenger> can a single ruleset contain multiple steps for choosing where to store replicas? I can't find any examples where someone has used step choose more than once
[2:02] <gregaf> dpippenger: yes, it can!
[2:03] <gregaf> you could do eg
[2:03] <dpippenger> I've tried a few variations but I keep crashing my mons :)
[2:03] <gregaf> step chooseleaf 1 ssd_bucket
[2:03] <gregaf> wait, I mean
[2:04] <gregaf> step chooseleaf firstn 1 ssd_bucket
[2:04] <gregaf> step chooseleaf firstn -1 hdd_bucket
[2:04] <gregaf> emit
[2:04] * Cube (~Cube@184.253.187.45) has joined #ceph
[2:04] <gregaf> modulo my syntax errors
[2:05] <dpippenger> so that would put one replica on ssd and any additionals on hdd?
[2:06] <dpippenger> sorry, I'm having trouble grasping the {num} in the ruleset; my best understanding was that <= 0 meant match any replica count, but 1,2... indicated what to do with that # of replicas
[2:07] <gregaf> yeah, assuming you have buckets of SSDs and buckets of HDDs which are named that way
[2:08] <gregaf> given that n is the replica count/pool size/whatever you want to call it
[2:08] <gregaf> "firstn 0" == n
[2:08] <gregaf> "firstn -1" == n - 1
[2:08] <gregaf> "firstn 1" == 1
[2:08] <dpippenger> I think what I want to do is mostly nonsense, but let me see if you have any ideas or if you will just scratch your head at me :)
[2:11] <dpippenger> I have 3 servers to use as OSD nodes, and 2 storage arrays I can only attach to the OSD using iscsi. So I wanted to have a LUN from each storage array on each OSD server. The problem I have is if I set the crushmap to replicate on host I end up with a single PG going to two OSD servers on the same storage array so if I lose the storage array I lose both replicas. If I organize my OSD by rack and use rack replication I end up with a si
[2:12] <dpippenger> what I would *like* is to be able to have it mirror the PG between different servers and different storage arrays, but I can't figure out how to articulate the osd to host to rack relationship in my crushmap
[2:13] <dpippenger> plan B is I just throw away the 3rd OSD node and do a 1 to 1 mapping of storage array to host
[2:14] * Cube (~Cube@184.253.187.45) Quit (Ping timeout: 480 seconds)
[2:23] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[2:32] <phantomcircuit> dpippenger, you can define your own types, rack -> row -> host -> array -> osd
[2:32] <phantomcircuit> actually that still doesn't help you does it
[2:33] <dpippenger> nah, I tried that... I think what I'm attempting is just not possible
[2:33] <dpippenger> it would require an osd to be part of both a rack and a host
[2:33] <dpippenger> and I don't think you can do that
[2:33] * Cube (~Cube@pool-71-108-128-153.lsanca.dsl-w.verizon.net) has joined #ceph
[2:34] <dpippenger> whatever the names... basically the hierarchy seems to only allow one parent per child
[2:35] <dpippenger> and the ruleset seems to only let you define a target in the form of a device (osd) or bucket, not a combination of both
[2:38] <dpippenger> I think my best compromise is the first two OSD nodes will have a 1 to 1 mapping to the storage array. Then the 3rd node will have a LUN on each storage array. Then I'll define a rack as containing 1 host and the OSD from the 3rd node from the opposing storage array.
[2:38] * ninkotech (~duplo@89.177.137.236) Quit (Ping timeout: 480 seconds)
[2:39] * ninkotech_ (~duplo@89.177.137.236) Quit (Ping timeout: 480 seconds)
[2:39] <dpippenger> so I should never end up with a PG replica on the same host or same storage array, but in a failure of one OSD node I will still be able to utilize all of the spindles on both storage arrays
[2:40] <dpippenger> heh, life would be easier if I had 3+ storage arrays but oh well...
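For what it's worth, a hedged sketch of making the storage array a first-class failure domain by declaring a custom bucket type, roughly along the lines phantomcircuit suggested above; the ids, names and weights are made up, and choosing by array alone does not also guarantee the replicas land on different hosts (the shared third node could in principle hold both):

# illustrative excerpt to merge into a decompiled CRUSH map (crushtool -d output)
cat > crush-array-excerpt.txt <<'EOF'
# types
type 0 osd
type 1 host
type 2 array            # custom type: one bucket per physical storage array
type 3 rack
type 10 root

array array1 {
        id -10
        alg straw
        hash 0
        item osd.0 weight 1.000         # host A, LUN on array 1
        item osd.2 weight 1.000         # host C, LUN on array 1
}
array array2 {
        id -11
        alg straw
        hash 0
        item osd.1 weight 1.000         # host B, LUN on array 2
        item osd.3 weight 1.000         # host C, LUN on array 2
}

# (array1 and array2 must also be referenced by the root that 'step take' uses)
rule by_array {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type array
        step emit
}
EOF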
[2:41] * ninkotech (~duplo@89.177.137.236) has joined #ceph
[2:42] * sjust (~sam@38.122.20.226) Quit (Remote host closed the connection)
[2:42] * ninkotech_ (~duplo@89.177.137.236) has joined #ceph
[2:43] * rturk is now known as rturk-away
[2:44] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[2:44] * loicd (~loic@magenta.dachary.org) has joined #ceph
[2:46] * Cube (~Cube@pool-71-108-128-153.lsanca.dsl-w.verizon.net) Quit (Ping timeout: 480 seconds)
[2:47] * gregaf (~Adium@2607:f298:a:607:3da4:c882:d437:2dc6) Quit (Quit: Leaving.)
[2:50] <xmltok> is it expected for puts/s to decrease if i use more than one radosgw? does it block?
[3:03] <elder> Github problem?
[3:07] * jlogan (~Thunderbi@2600:c00:3010:1:ecc0:67c9:f071:2eb0) Quit (Ping timeout: 481 seconds)
[3:15] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[3:21] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[3:23] * xmltok (~xmltok@pool101.bizrate.com) Quit (Ping timeout: 480 seconds)
[3:45] * yehudasa (~yehudasa@2607:f298:a:607:6969:5d4:541c:de81) Quit (Remote host closed the connection)
[3:52] * buck (~buck@bender.soe.ucsc.edu) Quit (Quit: Leaving.)
[3:55] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[3:55] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[4:26] * LeaChim (~LeaChim@b0fadd12.bb.sky.com) Quit (Ping timeout: 480 seconds)
[4:30] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[4:37] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:40] * Ryan_Lane (~Adium@216.38.130.166) Quit (Quit: Leaving.)
[4:47] * mattbenjamin (~matt@adsl-75-45-226-110.dsl.sfldmi.sbcglobal.net) Quit (Quit: Leaving.)
[5:00] * dmick (~dmick@2607:f298:a:607:1a03:73ff:fedd:c856) Quit (Remote host closed the connection)
[5:02] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Quit: Leaving.)
[5:27] * absynth_47215 (~absynth@irc.absynth.de) Quit (Read error: Connection reset by peer)
[5:28] * dmick (~dmick@2607:f298:a:607:d178:217b:75db:b2c) has joined #ceph
[5:28] <elder> Josh said you had a problem with teuthology/github earlier dmick.
[5:29] <dmick> yes
[5:29] <dmick> github tarball downloads were timing out
[5:30] <dmick> so workunit stuff just wouldn't run
[5:30] <elder> I have been too. I'll let you know shortly if it is having the same problem.
[5:30] <elder> (Just started a test)
[5:31] <elder> It's weird because it looks like it's getting somewhere, but then it has a problem after "Making a separate scratch dir for every client..."
[5:32] <elder> Yup, same thing.
[5:32] * absynth_47215 (~absynth@irc.absynth.de) has joined #ceph
[5:32] <dmick> wget -O- http://github.com/ceph/ceph/tarball/HEAD is the thing that fails, I think
[5:33] <dmick> trying to access that URL fails from my desktop as well
[5:33] <elder> Yes.
[5:50] * absynth_47215 (~absynth@irc.absynth.de) Quit (Ping timeout: 480 seconds)
[5:53] * absynth_47215 (~absynth@irc.absynth.de) has joined #ceph
[5:57] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) has joined #ceph
[6:05] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) Quit (Ping timeout: 480 seconds)
[6:17] <via> i can read from a large file on cephfs at 80+ MB/s, but reading from rbd peaks out at around 30... are there any tunables that i should be using?
[6:17] <via> i have a 128 MB writethrough cache right now set up
[6:19] <via> 256MB rather. the tail of my ceph.conf on the node using rbd is https://pastee.org/7pbkv
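For context, a minimal sketch of what client-side settings for a 256 MB writethrough RBD cache usually look like; this is not the pasted config above, just an illustration, and the section name and values are assumptions:

cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
        rbd cache = true
        rbd cache size = 268435456
        rbd cache max dirty = 0        ; no dirty data allowed, i.e. writethrough behaviour
EOF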
[6:19] * cmello (~cmello@201-23-177-71.gprs.claro.net.br) has joined #ceph
[6:21] * cmello (~cmello@201-23-177-71.gprs.claro.net.br) Quit (Read error: Connection reset by peer)
[6:55] <dmick> via: how are you reading?
[6:56] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[6:58] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[7:36] * ScOut3R (~ScOut3R@5400A5AF.dsl.pool.telekom.hu) has joined #ceph
[7:36] * ScOut3R (~ScOut3R@5400A5AF.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[7:38] <xiaoxi> hi, I am hitting -13> 2013-01-19 13:25:58.249609 7f8600fa1700 0 filestore(/data/osd.512) write couldn't open 2.71af_TEMP/34b671af/rb.0.1245.52dfe34f.000000000d50/head//2 flags 65: (24) Too many open file
[7:38] <xiaoxi> in my OSD daemon
[7:39] <xiaoxi> I have set max open files = 131072 in ceph.conf
[7:39] <xiaoxi> and according to ceph's docs, the ceph daemon will set the system limit when it starts up
[7:40] <xiaoxi> but uname -a returns file=1024...
[7:40] <xiaoxi> ulimit -a, sorry
[7:41] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:45] <dmick> are you starting daemons with /etc/init.d/ceph, xiaoxi?
[7:46] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[7:46] <dmick> that's the place where that setting is consumed
[7:51] * dmick (~dmick@2607:f298:a:607:d178:217b:75db:b2c) Quit (Quit: Leaving.)
[7:54] * yehuda_hm (~yehuda@2602:306:330b:a40:7438:5485:39e:5b0f) Quit (Quit: Leaving)
[7:59] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[8:26] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[8:53] <xiaoxi> yes, I start the daemons with /etc/init.d/ceph -a start
[8:54] * jrisch (~Adium@4505ds2-hi.0.fullrate.dk) has joined #ceph
[9:05] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[9:05] * loicd (~loic@magenta.dachary.org) has joined #ceph
[9:15] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) has joined #ceph
[9:17] * sleinen1 (~Adium@2001:620:0:26:ac5c:70e0:9b28:3467) has joined #ceph
[9:23] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:26] * joao (~JL@89.181.156.120) Quit (Ping timeout: 480 seconds)
[9:31] * tnt (~tnt@120.194-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[9:34] <xdeller> xiaoxi: take a look at the limits in pam.d/su; if they are commented out, the limits from /etc/security/limits.conf may not apply even when you are root (su-ed to it, for example)
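A hedged illustration of what xdeller is pointing at: pam_limits has to be enabled for su, and the limit itself raised in limits.conf, before an su-ed root shell inherits a higher nofile limit (values illustrative):

# /etc/pam.d/su should contain an uncommented line such as:
#     session    required   pam_limits.so
grep -n 'pam_limits.so' /etc/pam.d/su

# and /etc/security/limits.conf needs entries such as:
#     root    soft    nofile    131072
#     root    hard    nofile    131072
grep -n 'nofile' /etc/security/limits.conf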
[9:35] * xdeller (~xdeller@broadband-77-37-224-84.nationalcablenetworks.ru) Quit (Quit: Leaving)
[9:37] * sleinen1 (~Adium@2001:620:0:26:ac5c:70e0:9b28:3467) Quit (Quit: Leaving.)
[9:39] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has joined #ceph
[9:43] <xiaoxi> xdeller: I tried to print some info inside /etc/init.d/ceph; when I run ulimit -n inside a shell it shows 1024, but when printing the result in /etc/init.d/ceph it shows the right number
[9:43] * pigx (~pierluigi@host159-45-static.240-95-b.business.telecomitalia.it) has joined #ceph
[9:45] <xiaoxi> === osd.315 ===
[9:45] <xiaoxi> 131072
[9:45] <xiaoxi> + [ 131072 != 0 ]
[9:45] <xiaoxi> + ulimit -n
[9:45] <xiaoxi> + cur=131072
[9:45] <xiaoxi> + [ x131072 != x131072 ]
[9:45] <xiaoxi> + set +x
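Since an interactive shell's `ulimit -n` says nothing about the daemons, a quick way to see the limit a running ceph-osd actually got; the use of pidof here is an assumption about the environment:

pid=$(pidof ceph-osd | awk '{print $1}')      # pick one ceph-osd process
grep 'Max open files' "/proc/$pid/limits"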
[9:45] * pigx (~pierluigi@host159-45-static.240-95-b.business.telecomitalia.it) Quit ()
[9:52] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[10:01] * Tribaal (uid3081@id-3081.tooting.irccloud.com) Quit (Ping timeout: 480 seconds)
[10:05] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) Quit (Ping timeout: 480 seconds)
[10:08] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[10:08] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[10:10] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) has joined #ceph
[10:12] * sleinen1 (~Adium@2001:620:0:25:8dc8:222a:1f8d:4c33) has joined #ceph
[10:14] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Read error: Operation timed out)
[10:39] * xiaoxi (~xiaoxiche@134.134.137.73) Quit (Ping timeout: 480 seconds)
[10:41] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[11:42] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[11:58] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has joined #ceph
[12:01] * Tribaal (uid3081@tooting.irccloud.com) has joined #ceph
[12:27] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.89 [Firefox 18.0/20130104151925])
[12:35] * jrisch (~Adium@4505ds2-hi.0.fullrate.dk) Quit (Read error: Operation timed out)
[12:44] * todin (tuxadero@kudu.in-berlin.de) Quit (Remote host closed the connection)
[12:52] * andret (~andre@pcandre.nine.ch) Quit (Ping timeout: 480 seconds)
[13:03] * sleinen1 (~Adium@2001:620:0:25:8dc8:222a:1f8d:4c33) Quit (Read error: Connection reset by peer)
[13:03] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[13:04] * leseb_ (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[13:05] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[13:05] * sleinen (~Adium@2001:620:0:25:8dc8:222a:1f8d:4c33) has joined #ceph
[13:08] * jrisch (~Adium@94.191.185.62.mobile.3.dk) has joined #ceph
[13:12] * LeaChim (~LeaChim@b0fadd12.bb.sky.com) has joined #ceph
[13:15] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has left #ceph
[13:36] * jrisch (~Adium@94.191.185.62.mobile.3.dk) Quit (Quit: Leaving.)
[13:44] * leseb_ (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[14:06] * xiaoxi (~xiaoxiche@jfdmzpr06-ext.jf.intel.com) has joined #ceph
[14:08] * jrisch (~Adium@94.191.185.62.mobile.3.dk) has joined #ceph
[14:09] <xiaoxi> hi
[14:13] * jrisch (~Adium@94.191.185.62.mobile.3.dk) has left #ceph
[14:50] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:54] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[14:55] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:57] * Kioob (~kioob@luuna.daevel.fr) Quit (Quit: Leaving.)
[15:02] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[15:15] <scalability-junk> darkfaded: slowly I think no one is keeping backups of ceph objects, be it offsite or onsite ones :D
[15:37] * mattbenjamin (~matt@75.45.226.110) has joined #ceph
[15:45] * allsystemsarego (~allsystem@188.27.166.249) has joined #ceph
[15:59] * sleinen1 (~Adium@2001:620:0:26:8d43:3a84:e153:6cb8) has joined #ceph
[16:04] * sleinen (~Adium@2001:620:0:25:8dc8:222a:1f8d:4c33) Quit (Ping timeout: 480 seconds)
[16:18] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[16:42] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[16:42] * ChanServ sets mode +o scuttlemonkey
[16:59] * xiaoxi (~xiaoxiche@jfdmzpr06-ext.jf.intel.com) Quit (Remote host closed the connection)
[16:59] * xiaoxi (~xiaoxiche@134.134.137.75) has joined #ceph
[17:11] * xiaoxi (~xiaoxiche@134.134.137.75) Quit (Remote host closed the connection)
[17:21] * KindOne (KindOne@h180.210.89.75.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[17:31] <jksM> anyone got a tip on how to checkout a specific branch from the git repository in order to build my own RPMs?
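In case it helps, a minimal sketch of checking out a specific branch for a local build; the branch name is only an example, and the packaging step is just a pointer since the exact RPM workflow may differ:

git clone --recursive https://github.com/ceph/ceph.git
cd ceph
git checkout -b bobtail origin/bobtail    # replace 'bobtail' with the branch you need
# the tree ships a spec template (ceph.spec.in); one common route is to build a
# release tarball (./autogen.sh && ./configure && make dist) and hand it to rpmbuild -ta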
[17:32] * KindOne (KindOne@h180.210.89.75.dynamic.ip.windstream.net) has joined #ceph
[17:50] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[17:59] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: This computer has gone to sleep)
[18:06] <jksM> sagewk, thanks for the wip-pg-removal build! - I'm testing it on two of the osds now
[18:09] * LeaChim (~LeaChim@b0fadd12.bb.sky.com) Quit (Ping timeout: 480 seconds)
[18:17] * leseb (~leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:18] * LeaChim (~LeaChim@b0faf18a.bb.sky.com) has joined #ceph
[18:30] * lx0 is now known as lxo
[18:51] * ScOut3R (~ScOut3R@5400A5AF.dsl.pool.telekom.hu) has joined #ceph
[18:59] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[18:59] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:01] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[19:09] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[19:18] * Pagefaulted (~AndChat73@c-67-168-132-228.hsd1.wa.comcast.net) has joined #ceph
[19:19] * loicd (~loic@magenta.dachary.org) Quit (Remote host closed the connection)
[19:29] * BManojlovic (~steki@241-166-222-85.adsl.verat.net) has joined #ceph
[19:32] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[19:33] * The_Bishop (~bishop@e179017075.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[19:37] * Pagefaulted (~AndChat73@c-67-168-132-228.hsd1.wa.comcast.net) Quit (Remote host closed the connection)
[19:38] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[20:05] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[20:09] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[20:21] * madkiss (~madkiss@178.188.60.118) Quit (Ping timeout: 480 seconds)
[20:27] * The_Bishop (~bishop@2001:470:50b6:0:9424:9c99:2f7:1449) has joined #ceph
[20:43] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[20:43] * ChanServ sets mode +o scuttlemonkey
[20:45] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[21:03] <Psi-jack> Hmmm
[21:04] <Psi-jack> So, I'm about ready to prep up upgrading from Ceph 0.55.x-git to 0.56.1
[21:04] <Psi-jack> Is there any upgrade concerns from that recent of a version of ceph to 0.56.1?
[21:08] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:15] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[21:16] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Read error: Connection reset by peer)
[21:16] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:17] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[21:17] * ChanServ sets mode +o scuttlemonkey
[21:20] * madkiss1 (~madkiss@178.188.60.118) has joined #ceph
[21:21] * madkiss (~madkiss@178.188.60.118) Quit (Read error: Connection reset by peer)
[21:37] * xdeller (~xdeller@broadband-77-37-224-84.nationalcablenetworks.ru) has joined #ceph
[21:41] <xdeller> since ceph works on zfs, has nobody tried or wanted to try launching ceph on nexenta? since nexenta's userspace is almost the same as regular ubuntu, if it works it'll be a perfect replacement for btrfs
[21:43] <nhm> xdeller: A couple of people have tried it. I think Sam wants to play with it, but the licensing issues make it tough to justify spending a lot of time on it right now.
[21:44] <xdeller> nhm: licensing may be less painful for end users; I don't even have a bunch of free PCs right now to try, but I hope it'll be possible next week
[21:50] * madkiss1 (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[21:55] * ScOut3R (~ScOut3R@5400A5AF.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[22:04] <Psi-jack> Hmmm, ceph, on ZFS? I tried that once.... Epic failure...
[22:08] <Psi-jack> Oh, and one thing I'm noticing, with ceph 0.55+ so far, there is no more src/obsync directory? Was that stuff removed? It was python code for obsync and boto_tool.
[22:11] * sagelap (~sage@76.89.177.113) has joined #ceph
[22:11] <sagelap> jens: you there?
[22:12] <sagelap> jksm: ping
[22:18] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:21] <Psi-jack> Heh, sage!
[22:22] <sagelap> hey
[22:22] <Psi-jack> Got a few moments for like 2 simple questions? :d
[22:22] <sagelap> maybe :)
[22:22] <Psi-jack> Is there any upgrade concerns from that recent of a version of ceph (0.55.x-git) to 0.56.1?
[22:23] <Psi-jack> And, I'm noticing, with ceph 0.55+ so far, there is no more src/obsync directory? Was that stuff removed? It was python code for obsync and boto_tool.
[22:23] <sagelap> obsync moved out of ceph.git.. there's a separate git repo on github
[22:23] <sagelap> no upgrade problems, 0.56.1 should be strictly better than 0.55
[22:24] <Psi-jack> Ahh, hmm, okay.
[22:24] <Psi-jack> Spiffy. :)
[22:24] <Psi-jack> I'm working on Arch's AUR Ceph package updates from 0.48.2 to 0.56.1 ;)
[22:26] * Steki (~steki@85.222.221.79) has joined #ceph
[22:30] * BManojlovic (~steki@241-166-222-85.adsl.verat.net) Quit (Ping timeout: 480 seconds)
[22:34] <Psi-jack> Cool, thanks sage! Should have this nicely up and running. Basically I plan to upgrade one server at a time, so until all three of my storage servers are updated, I will have mixed versions, 0.55.x and 0.56.1.
[22:53] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:53] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:53] <sagelap> psi-jack: there may have been an issue with the recovery reservations where the mixed-version compatibility was broken in 0.55.. i forget exactly. if you hit a crash, just upgrade the other nodes to 0.56.1 and you'll be ok
[22:54] <sagelap> jksm: ping
[23:08] * sleinen1 (~Adium@2001:620:0:26:8d43:3a84:e153:6cb8) Quit (Quit: Leaving.)
[23:21] * Gugge_47527 (gugge@kriminel.dk) has joined #ceph
[23:24] * Gugge-47527 (gugge@kriminel.dk) Quit (Ping timeout: 480 seconds)
[23:24] * Gugge_47527 is now known as Gugge-47527
[23:35] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) Quit (Read error: Operation timed out)
[23:42] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:42] * loicd (~loic@magenta.dachary.org) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.