#ceph IRC Log

IRC Log for 2013-09-06

Timestamps are in GMT/BST.

[0:00] * jeff-YF (~jeffyf@67.23.117.122) Quit (Ping timeout: 480 seconds)
[0:00] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:00] <Cube> nigwil: You can remove a placement group from that OSD (one that you have a copy of elsewhere) and try starting it.
[0:02] * BillK (~BillK-OFT@58-7-131-166.dyn.iinet.net.au) has joined #ceph
[0:09] * gaveen (~gaveen@175.157.143.100) Quit (Ping timeout: 480 seconds)
[0:13] * jcfischer (~fischer@user-28-10.vpn.switch.ch) has joined #ceph
[0:13] <gregaf> sagewk: finally got to wip-intel-crc-workaround
[0:14] <gregaf> it looks fine to me; you've run it under valgrind?
[0:14] <sagewk> yep
[0:14] * sprachgenerator (~sprachgen@130.202.135.204) Quit (Quit: sprachgenerator)
[0:14] <gregaf> cool
[0:14] <gregaf> how were we testing throughput of the various implementations?
[0:14] * doxavore (~doug@99-7-52-88.lightspeed.rcsntx.sbcglobal.net) Quit (Quit: :qa!)
[0:14] * n1md4 (~nimda@anion.cinosure.com) Quit (Read error: Operation timed out)
[0:14] <sagewk> unittest_crc32c
[0:15] <gregaf> …just realized I need to ask what machines we have that this works on?
[0:16] <lordinvader> hi, can I attach an rbd disk to a running VM using 'virsh attach-disk'? I tried and it doesn't seem to recognize the type. :(
[0:17] * davidzlap (~Adium@cpe-75-84-249-188.socal.res.rr.com) has joined #ceph
[0:18] <gregaf> lordinvader: I don't know anything about virsh, but more details? which thing isn't recognizing what type?
[0:19] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[0:19] * ChanServ sets mode +v andreask
[0:19] * ScOut3R (~scout3r@BC2484D1.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[0:21] <lordinvader> gregaf, I'm trying to attach a rbd image (rbd:libvirt-pool/my-img) to a VM without shutting it down. 'virsh attach-disk' allows me to attach ISOs etc without having to restart the VM, and I wanted to use it to attach a rbd disk
[0:22] <joshd> lordinvader: try virsh attach-device instead, it's more general. iirc attach-disk has some restrictions that might make it not work for rbd
[0:22] <lordinvader> joshd, oh okay, let me try
[0:25] * malcolm (~malcolm@silico24.lnk.telstra.net) has joined #ceph
[0:25] <mtanski> Okay, the stuff should be on the mailing list now
[0:25] <mtanski> the fscache that is
[0:26] <gregaf> yep, I was wondering how ceph-devel got 8 messages so fast :)
[0:28] <lordinvader> joshd, it says devices of type 'ide' can't be hotplugged. I changed the bus type to 'virtio' (found someone online) - it says 'attached successfully' but it doesn't appear in my VM.
[0:28] <lordinvader> Any way around it? Here's my disk config - http://pastebin.com/6Nvhn9Q6
[0:29] <joshd> lordinvader: your guest kernel needs to support hotplugging (and virtio) for it to work
[0:32] <jcfischer> I did some surgery on our ceph cluster (ran out of disk on one of the mons so had to shuffle stuff back and forth), broke the mon, had to reboot the server, took out the mon, re-created it
[0:33] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:33] <jcfischer> now I have 42 pgs down, 43 stuck inactive, 50 stuck unclean, all 3 mons are back up
[0:33] <lordinvader> joshd, thanks, let me try it out
[0:33] * carif (~mcarifio@pool-96-233-32-122.bstnma.fios.verizon.net) Quit (Quit: Ex-Chat)
[0:33] <jcfischer> but no repair activity going on - any idea on what button to press?
[0:35] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[0:35] <jcfischer> ah - I spoke too soon - it has picked up and has started to repair
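A few hedged starting points for the state jcfischer describes; the pgid below is a placeholder that would come from 'ceph health detail', not something taken from this log:

    ceph health detail              # lists which pgs are down/stuck and why
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean
    ceph pg 2.1f query              # 2.1f is a placeholder pgid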
[0:35] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[0:35] * grepory (~Adium@30.sub-70-197-5.myvzw.com) has joined #ceph
[0:36] * houkouonchi-work (~linux@12.248.40.138) Quit (Read error: Connection reset by peer)
[0:37] * thomnico (~thomnico@2a01:e35:8b41:120:e051:2a8e:a6fe:ba3d) Quit (Quit: Ex-Chat)
[0:38] * sjm (~sjm@2607:f298:a:607:c11c:3245:fbca:7d92) Quit (Remote host closed the connection)
[0:39] * sjm (~sjm@38.122.20.226) has joined #ceph
[0:39] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[0:40] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[0:40] * terje- (~root@135.109.220.9) Quit (Ping timeout: 480 seconds)
[0:41] * dmsimard1 (~Adium@108.163.152.66) has joined #ceph
[0:44] * dmsimard1 (~Adium@108.163.152.66) Quit ()
[0:47] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit (Read error: Connection reset by peer)
[0:47] <mtanski> gregaf: with any luck this will be last one to go in, and I'll be spamming you guys less often from now on
[0:47] <lordinvader> joshd, it worked, thanks a ton!
[0:47] * sjm (~sjm@38.122.20.226) Quit (Remote host closed the connection)
[0:47] <joshd> lordinvader: you're welcome!
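A rough sketch of the attach-device route joshd suggests, assuming a hypothetical guest named 'myvm', a monitor at 'mon1.example.com', and a libvirt cephx secret already defined (the UUID is a placeholder). A file such as rbd-disk.xml might roughly contain:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='libvirt-pool/my-img'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='vdb' bus='virtio'/>
    </disk>

    virsh attach-device myvm rbd-disk.xml --live

As the conversation notes, the target bus has to be virtio rather than ide for the guest to accept the hotplug.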
[0:51] * sjm (~sjm@38.122.20.226) has joined #ceph
[0:51] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[0:51] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:55] <mtanski> I found the patch Linus accepted from Yan, "vfs: call d_op->d_prune() before unhashing dentry" - are there any dependencies I need?
[0:57] <yanzheng> it should fix the missing file issue
[1:02] <mtanski> I'll give that a try on the machine I can repro this
[1:06] <mtanski> Is there a bug in the ceph bug tracker
[1:07] <mtanski> I guess I'd like to understand what's happening
[1:07] <mtanski> or why does it happen
[1:09] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[1:10] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[1:10] <yanzheng> the commit message should explain what happens
[1:11] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[1:12] * grepory (~Adium@30.sub-70-197-5.myvzw.com) Quit (Quit: Leaving.)
[1:16] * motakhalif (~motakhali@CPE001601df5719-CM78cd8eccfad5.cpe.net.cable.rogers.com) has joined #ceph
[1:17] * motakhalif (~motakhali@CPE001601df5719-CM78cd8eccfad5.cpe.net.cable.rogers.com) Quit ()
[1:18] <wusui> mark? are you there?
[1:18] <mtanski> It seems to work so far, although I'm going to leave this test running
[1:19] <nhm> wusui: helloP
[1:19] <nhm> !
[1:19] <mtanski> since it was able to fail before, and I'll take a look to make sure when I get back. I'll take a look at the patch as well
[1:19] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[1:20] <nhm> wusui: so right now I think the radosbench task is just logging to the teuthology.log file. I was thinking of creating some kind of abstraction that would grab data from it, put it in the DB, and then just continue to log it. Sounds like you have something kind of like that already?
[1:22] <wusui> i am logging stuff (not permanently yet) from teuthology suite summary.yaml files.
[1:23] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[1:23] <nhm> wusui: ok. So it doesn't look like the result is logged there yet, but that would be a relatively easy change.
[1:23] <nhm> hopefully
[1:23] <wusui> I plan to have it run from the teuthology suites themselves. The information that you save from the radosbench task might be storable in my table. If not, we can use the same database
[1:24] <wusui> right now, it looks like deeby has a perf_test database with no tables yet.
[1:24] <nhm> yep, just setup yesterday.
[1:25] <wusui> what information are you going to log
[1:25] <wusui> ?
[1:25] <nhm> wusui: starting out just a throughput number, but eventually if this ends up being useful potentially cpu usage, latencies, maybe other data. Anything for chartio at least would be time series.
[1:27] <wusui> So the most important data right now is the date/time of the run, information about the run itself, error status, and a throughput number?
[1:28] <nhm> wusui: date/time, ceph branch, and performance number is probably the minimum. We could collect tons of other data, but frankly that might be getting overzealous.
[1:29] <wusui> okay. we can alter the table as we get more stuff.
[1:29] <nhm> I've got other tools I use for more sophisticated analysis right now, I just want to do whatever the minimum is to get a demo going. The data is probably not even going to be that valid until we make a bunch of changes to how the disks are allocated and the kernel that is used.
[1:29] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Read error: Operation timed out)
[1:30] <wusui> So do you have anything that actually collects the data out of teuthology.log files? Or are we gonna scrape that information?
[1:31] <nhm> wusui: nope, nothing yet. I was thinking of just creating a python class to do it that could be instantiated right in the class, but maybe we don't want the extra dependency in teuthology.
[1:32] <nhm> s/class/task
[1:32] * jskinner (~jskinner@199.127.136.233) Quit (Remote host closed the connection)
[1:33] <wusui> we could probably set it up so that teuthology uses/does not use the db depending on some other configuration information or something.
[1:34] <wusui> When is this demo?
[1:35] <nhm> wusui: As soon as possible. :D
[1:36] <nhm> wusui: Honestly I think we'd be happy even if the data is totally bogus if we can start doing nightly tests and displaying in chartio just to prove we can do it.
[1:37] <nhm> After that it's just bug fixes. :P
[1:37] <wusui> okay. I could try replicating what I did for the teuth suite stuff and but together a database with bogus data in it, and see if we could access it.
[1:38] <wusui> but == put
[1:38] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[1:39] <nhm> wusui: I've got a suite now that will run rados bench. I think I can probably get the final result(s) into the results.yaml file
[1:39] <wusui> how does chartio work? If I deliver the database records via a python script, would that be good?
[1:40] <nhm> basically you just give chartio a sql query to execute so that it gets whatever data you want to plot along with the X/Y values and it will display it.
[1:41] <nhm> X values I guess since Y would be the data values...
[1:41] <nhm> Honestly I haven't used it that much, this is my recollection from like a year ago.
[1:41] * KevinPerks1 (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[1:44] * sjm (~sjm@38.122.20.226) Quit (Remote host closed the connection)
[1:44] <wusui> would an http:-ish type interface work?
[1:46] * sjm (~sjm@2607:f298:a:607:e5fc:c0a0:f725:105c) has joined #ceph
[1:46] * alram (~alram@38.122.20.226) Quit (Read error: Connection reset by peer)
[1:48] * rturk is very familiar with chartio if you need some pointers :)
[1:49] <dmick> it had better be pretty agnostic about data sources, or someone has some answering to do
[1:49] <nhm> wusui: So what is the right next step here? Should I use the stuff you are developing? Just directly insert data into a table?
[1:52] <wusui> I'm not sure. I may wanna give ross a quick question to see how chartio relates to this. I could mess around a little bit today to see what is easy/feasible.
[1:53] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[1:53] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[1:55] <nhm> wusui: Ok, I'm happy to hand this over to you if you want to play with it
[1:55] * aliguori (~anthony@cpe-70-112-153-179.austin.res.rr.com) Quit (Remote host closed the connection)
[1:55] <nhm> wusui: should be pretty easy to stick some data in and have chartio graph it
[1:55] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[1:55] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:56] <nhm> wusui: you can run a basic rados bench test like so: ./schedule_suite.sh performance master testing mark.nelson@ceph.com basic mark-perf burnupi
[1:56] * sjm_ (~sjm@38.122.20.226) has joined #ceph
[1:57] <nhm> well, different email
[1:57] <xarses> i was under the impression that with a minimum of one monitor and two osd's the cluster should come up active+clean
[1:57] <nhm> xarses: yes
[1:57] <xarses> however i find that if the osd's are all from the same host, it will not
[1:57] <xarses> (cuttlefish)
[1:57] <nhm> xarses: it still should, I've got a box I test that way pretty regularly.
[1:58] <dmick> crushmap wrong or tunables set wrong
[1:58] <nhm> xarses: are you using custom crush rules?
[1:58] <xarses> no
[1:58] <joshd> the default splits across hosts iirc
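A sketch of what dmick and joshd are pointing at: the default CRUSH rule places replicas across hosts (chooseleaf ... type host), so an all-on-one-host cluster can never reach active+clean until the rule chooses OSDs instead. One way to change it, with placeholder filenames:

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt:   step chooseleaf firstn 0 type host
    #              becomes step chooseleaf firstn 0 type osd
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

(Setting 'osd crush chooseleaf type = 0' in ceph.conf before the cluster is created should have the same effect.)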
[2:00] * sjm (~sjm@2607:f298:a:607:e5fc:c0a0:f725:105c) Quit (Ping timeout: 480 seconds)
[2:02] <wusui> mark -- I just talked to rturk and got the 30 second chartio demo. If I give you a database and table with the fields in it (filled with bogus data right now), can you run chartio on it and see if that's what you want?
[2:03] <nhm> wusui: Sure. Ultimately we'll basically want to create a running chart for 4k, 128k, and 4M object sizes, and probably have a couple of lines for different repositories (master, next, current, etc)
[2:04] <nhm> so I guess there is a bit more metadata than I originally said.
[2:04] <nhm> oh, and reads/writes. :D
[2:06] <nhm> So let's make columns for: datetime, ceph branch, io size, io type, concurrent_ios, throughput
[2:06] * nwat (~nwat@eduroam-237-79.ucsc.edu) Quit (Ping timeout: 480 seconds)
[2:07] <wusui> nhm: okay
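One possible shape for that table, as a sketch. The perf_test database name comes from the conversation, but the table name, column names, and types are invented here, and deeby's actual engine (MySQL vs Postgres) isn't stated:

    mysql perf_test -e "
      CREATE TABLE radosbench_results (
        run_time        DATETIME,     -- when the run finished
        ceph_branch     VARCHAR(64),  -- master, next, ...
        io_size         INT,          -- object size in bytes (4k / 128k / 4M)
        io_type         VARCHAR(8),   -- read or write
        concurrent_ios  INT,
        throughput_mbs  DOUBLE        -- MB/s reported by rados bench
      );"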
[2:08] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[2:22] * LeaChim (~LeaChim@97e00ac2.skybroadband.com) Quit (Ping timeout: 480 seconds)
[2:25] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:35] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:36] * wusui (~Warren@2607:f298:a:607:d1d:b885:7622:d9bd) Quit (Quit: Leaving)
[2:36] * wusui (~Warren@2607:f298:a:607:d1d:b885:7622:d9bd) has joined #ceph
[2:36] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[2:36] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[2:39] <wusui> xxx
[2:39] <wusui> nhm: are you there?
[2:44] * xarses (~andreww@204.11.231.50.static.etheric.net) Quit (Ping timeout: 480 seconds)
[2:46] <nhm> sorta, kids bed time
[2:47] * yy-nm (~Thunderbi@220.184.128.218) has joined #ceph
[3:02] <wusui> nhm: i emailed you some stuff.
[3:03] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[3:03] * ShaunR (~ShaunR@staff.ndchost.com) Quit ()
[3:04] * nerdtron (~kenneth@202.60.8.252) has joined #ceph
[3:05] * angdraug (~angdraug@204.11.231.50.static.etheric.net) Quit (Quit: Leaving)
[3:10] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[3:11] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) has joined #ceph
[3:12] * sage (~sage@76.89.177.113) Quit (Read error: Operation timed out)
[3:13] * freedomhui (~freedomhu@117.79.232.247) has joined #ceph
[3:23] * dpippenger (~riven@tenant.pas.idealab.com) Quit (Quit: Leaving.)
[3:28] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:40] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[3:40] * LPG_ (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[3:40] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Read error: Connection reset by peer)
[3:41] * rturk is now known as rturk-away
[3:42] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[3:47] * nhm_ (~nhm@184-97-187-196.mpls.qwest.net) has joined #ceph
[3:49] * nhm (~nhm@mfe2836d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[3:53] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:54] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[3:54] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[3:54] * sjustlaptop1 (~sam@172.56.6.92) has joined #ceph
[3:55] * grepory (~Adium@2600:1010:b015:3e96:6c8f:43ad:64eb:8ab5) has joined #ceph
[3:58] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:00] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[4:00] * julian (~julianwa@125.70.133.187) has joined #ceph
[4:03] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[4:04] * freedomhui (~freedomhu@117.79.232.247) Quit (Quit: Leaving...)
[4:04] * clayb (~kvirc@proxy-nj1.bloomberg.com) Quit (Read error: Connection reset by peer)
[4:04] * haomaiwang (~haomaiwan@119.4.172.149) has joined #ceph
[4:06] * grepory (~Adium@2600:1010:b015:3e96:6c8f:43ad:64eb:8ab5) Quit (Quit: Leaving.)
[4:07] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[4:07] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[4:08] * freedomhui (~freedomhu@117.79.232.216) has joined #ceph
[4:13] * sjustlaptop1 (~sam@172.56.6.92) Quit (Ping timeout: 480 seconds)
[4:15] * haomaiwang (~haomaiwan@119.4.172.149) Quit (Ping timeout: 480 seconds)
[4:17] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[4:21] * houkouonchi-work (~linux@12.248.40.138) Quit (Quit: Client exiting)
[4:24] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[4:29] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[4:31] * sagelap1 (~sage@2600:1012:b027:1e8e:f836:7a76:e39:2411) has joined #ceph
[4:32] * grepory (~Adium@85.sub-70-197-0.myvzw.com) has joined #ceph
[4:32] * grepory (~Adium@85.sub-70-197-0.myvzw.com) Quit ()
[4:33] * sagelap (~sage@2607:f298:a:607:f836:7a76:e39:2411) Quit (Ping timeout: 480 seconds)
[4:34] * shang (~ShangWu@122-116-16-162.HINET-IP.hinet.net) has joined #ceph
[4:36] * grepory (~Adium@211.sub-70-197-1.myvzw.com) has joined #ceph
[4:37] * shang (~ShangWu@122-116-16-162.HINET-IP.hinet.net) Quit ()
[4:37] * shang (~ShangWu@122-116-16-162.HINET-IP.hinet.net) has joined #ceph
[4:37] * grepory (~Adium@211.sub-70-197-1.myvzw.com) Quit ()
[4:38] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[4:39] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[4:39] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) has joined #ceph
[4:44] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[4:45] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) Quit (Read error: Operation timed out)
[4:47] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:51] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[4:52] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[4:53] * lordinvader (~lordinvad@14.139.82.6) Quit (Quit: Leaving)
[4:56] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) has joined #ceph
[4:58] * sjm_ (~sjm@38.122.20.226) Quit (Remote host closed the connection)
[5:00] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Remote host closed the connection)
[5:01] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[5:03] * sagelap1 (~sage@2600:1012:b027:1e8e:f836:7a76:e39:2411) Quit (Ping timeout: 480 seconds)
[5:06] * fireD_ (~fireD@93-139-163-132.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD (~fireD@93-139-138-90.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:11] * sjustlaptop1 (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[5:13] * xmltok_ (~xmltok@pool101.bizrate.com) has joined #ceph
[5:14] * Cube (~Cube@12.248.40.138) Quit (Read error: Operation timed out)
[5:14] * xmltok (~xmltok@pool101.bizrate.com) Quit (Quit: Bye!)
[5:15] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[5:17] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[5:19] * sjustlaptop1 (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[5:34] * xmltok_ (~xmltok@pool101.bizrate.com) Quit (Quit: Leaving...)
[5:35] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[5:40] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[5:40] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[5:46] * haomaiwang (~haomaiwan@119.4.172.149) has joined #ceph
[5:47] * haomaiwa_ (~haomaiwan@112.193.130.93) has joined #ceph
[5:49] * BillK (~BillK-OFT@58-7-131-166.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[5:50] * BillK (~BillK-OFT@124-171-168-171.dyn.iinet.net.au) has joined #ceph
[5:54] * haomaiwang (~haomaiwan@119.4.172.149) Quit (Ping timeout: 480 seconds)
[5:58] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[6:07] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:13] * jksM (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[6:15] * ofu_ (ofu@dedi3.fuckner.net) Quit (Read error: Connection reset by peer)
[6:16] * hflai_ (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[6:17] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (Read error: Connection reset by peer)
[6:17] * hflai (~hflai@alumni.cs.nctu.edu.tw) Quit (Read error: Connection reset by peer)
[6:18] * KindTwo (~KindOne@h107.49.186.173.dynamic.ip.windstream.net) has joined #ceph
[6:19] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) has joined #ceph
[6:20] * freedomhui (~freedomhu@117.79.232.216) Quit (Ping timeout: 480 seconds)
[6:20] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:20] * KindTwo is now known as KindOne
[6:21] * ofu (ofu@dedi3.fuckner.net) has joined #ceph
[6:24] * julian (~julianwa@125.70.133.187) Quit (Read error: Connection reset by peer)
[6:24] * julian (~julianwa@125.70.133.187) has joined #ceph
[6:31] * freedomhui (~freedomhu@117.79.232.216) has joined #ceph
[6:34] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Read error: Connection reset by peer)
[6:36] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[6:36] * freedomhui (~freedomhu@117.79.232.216) Quit (Quit: Leaving...)
[6:47] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Bye!)
[6:50] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[6:55] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[7:15] * freedomhui (~freedomhu@li565-182.members.linode.com) has joined #ceph
[7:16] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:21] * nerdtron (~kenneth@202.60.8.252) Quit (Ping timeout: 480 seconds)
[7:21] * terje- (~root@135.109.220.9) has joined #ceph
[7:21] * freedomhu (~freedomhu@117.79.232.216) has joined #ceph
[7:25] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:26] * davidzlap (~Adium@cpe-75-84-249-188.socal.res.rr.com) Quit (Quit: Leaving.)
[7:27] * freedomhu (~freedomhu@117.79.232.216) Quit (Quit: Leaving...)
[7:29] * freedomhui (~freedomhu@li565-182.members.linode.com) Quit (Ping timeout: 480 seconds)
[7:30] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:31] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:36] * mjeanson_ (~mjeanson@bell.multivax.ca) has joined #ceph
[7:41] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:00] * nerdtron (~kenneth@202.60.8.252) has joined #ceph
[8:03] <malcolm> Quick question, ceph-deploy doesn't seem to be part of the main source bundle. So when building from source, I assume I need to go and track it down. Failing that, is the old guide for getting things going minus ceph-deploy still floating around?
[8:07] <loicd> is there a link to download the webinars from https://www.brighttalk.com/ ?
[8:13] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) has joined #ceph
[8:15] <malcolm> Ummm not in the gear I got. Well I've not poked it too deep.
[8:16] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[8:16] <sherry> I just ran "install -d m0755 out dev/osd0", and when I try to create one more osd with this command "install -d m0755 out dev/osd1", the "osd tree" doesn't show the second osd, what's the problem?!
[8:17] <sherry> I got this error > HEALTH_WARN 24 pgs degraded; 24 pgs stuck unclean; recovery 53/106 objects degraded (50.000%) and I'm not sure whether the problem comes from the number of OSDs or not!
[8:20] <nerdtron> sherry, ceph osd tree
[8:20] <nerdtron> any down osd?
[8:21] <dmick> malcolm: you can pip install ceph-deploy
[8:21] <dmick> or you can get it from github.com/ceph too
[8:21] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) has joined #ceph
[8:21] <malcolm> whats a pip? I'm not a python guy
[8:22] <malcolm> I assume its python :P
[8:23] <sherry> nerdtron: no, but there is one OSD!
[8:23] <dmick> pip is a way to install python packages
[8:24] <dmick> ceph-deploy is also in debs/rpms, so you can still use it from the pkg even if you're building ceph proper from source
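Roughly what the two routes dmick mentions look like; the bootstrap step is from memory of the ceph-deploy README, so treat it as an assumption:

    # via pip
    pip install ceph-deploy

    # or straight from the git repo
    git clone https://github.com/ceph/ceph-deploy.git
    cd ceph-deploy
    ./bootstrap            # builds a local virtualenv with the dependencies
    ./ceph-deploy --help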
[8:24] <nerdtron> sherry, how many osd in total?
[8:25] <sherry> I created 2, but osd tree shows me only 1 in total
[8:26] <nerdtron> why? isn't it recommended to have 1 osd per physical disk? it is hard to know which of the disks is down
[8:26] <dmick> sherry: you realize that "install -d" is nothing like creating an osd, right?
[8:26] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[8:26] <sherry> what would be the command then?
[8:27] <dmick> it's a whole series of commands
[8:27] <dmick> install -d just makes a directory
[8:28] * Vjarjadian (~IceChat77@176.254.37.210) Quit (Quit: The early bird may get the worm, but the second mouse gets the cheese)
[8:29] <sherry> if I run ./vstart.sh -n -x -l?
[8:29] <malcolm> I'm building on OpenSuSE Tumbleweed. So the 12.2 packages are.... unfriendly. It's cool, I just installed pip. So I'll be deploying in no time!
[8:30] <malcolm> I'm actually doing something a bit nuts.. so a question, to ensure I can actually do this... all the required ceph osd parts build ok on arm?
[8:30] <sherry> then how can I correct this error > HEALTH_WARN 24 pgs degraded; 24 pgs stuck unclean; recovery 53/106 objects degraded (50.000%), is it related to the number of OSDs, which should be more than 1?
[8:33] <yanzheng> sherry, you need at least two osds
[8:33] <sherry> then how can I create the second one? what is the right command?
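For the vstart.sh route sherry asked about above, the OSD/MON/MDS counts are normally driven by environment variables rather than by creating dev/osdN directories by hand; a sketch, assuming the vstart.sh of that era honours the CEPH_NUM_* variables:

    cd src
    CEPH_NUM_MON=1 CEPH_NUM_OSD=2 CEPH_NUM_MDS=1 ./vstart.sh -n -x -l
    ./ceph -c ceph.conf osd tree      # should now list osd.0 and osd.1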
[8:38] <malcolm> And nope.. ceph-deploy just borks with UnsupportedPlatform :(
[8:40] * bandrus (~Adium@66-87-130-41.pools.spcsdns.net) has joined #ceph
[8:48] <dmick> malcolm: we have limited arm support
[8:48] <dmick> ceph-deploy's test is pretty easy to hack IIRC
[8:48] <dmick> if you want to try
[8:54] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Remote host closed the connection)
[8:55] <malcolm> dmick: nah I'm just doing it the 'old' way.
[8:56] <malcolm> Oh and on ARM as long as I can get an OSD running, I'll be happy.. like I said... I have a crazy idea
[8:57] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[8:58] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit ()
[8:59] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) Quit (Quit: Konversation terminated!)
[8:59] * dpippenger1 (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[9:00] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[9:01] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[9:02] * madkiss (~madkiss@tmo-102-146.customers.d1-online.com) has joined #ceph
[9:06] * JustEra (~JustEra@89.234.148.11) has joined #ceph
[9:07] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:10] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[9:12] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) Quit (Remote host closed the connection)
[9:14] * jcfischer (~fischer@user-28-10.vpn.switch.ch) Quit (Quit: jcfischer)
[9:16] * malcolm (~malcolm@silico24.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[9:19] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[9:24] * danieagle (~Daniel@177.97.250.27) has joined #ceph
[9:25] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:25] * mschiff (~mschiff@p4FD7E621.dip0.t-ipconnect.de) has joined #ceph
[9:28] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Ping timeout: 480 seconds)
[9:30] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[9:31] * jcfischer (~fischer@130.59.94.167) has joined #ceph
[9:32] * bandrus (~Adium@66-87-130-41.pools.spcsdns.net) Quit (Quit: Leaving.)
[9:33] <nigwil> why this? 2013-09-06 17:20:26.621961 7f2fd28017c0 -1 filestore(/var/lib/ceph/osd/ceph-10) _detect_fs unable to create /var/lib/ceph/osd/ceph-10/xattr_test: (28) No space left on device
[9:33] <nigwil> 2013-09-06 17:20:26.622443 7f2fd28017c0 -1 ** ERROR: error converting store /var/lib/ceph/osd/ceph-10: (28) No space left on device
[9:33] <nigwil> when this?
[9:33] <nigwil> /dev/sdc1 465G 440G 25G 95% /var/lib/ceph/osd/ceph-10
[9:35] <Kioob> 95% and 5% reserved ?
[9:36] <nigwil> ok, 5% reserve is an XFS or Linux feature?
[9:36] <Kioob> if there is 5% reserved here, it's by XFS
[9:37] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[9:37] <Kioob> mmm
[9:37] <Kioob> reserved space is for "root". OSDs run as root
[9:37] <Kioob> so... don't know.
[9:38] <nigwil> root@ceph5:/var/log/ceph# xfs_db -r "-c freesp -s" /dev/sdc1
[9:38] <nigwil> from to extents blocks pct
[9:38] <nigwil> 1 1 5953 5953 0.09
[9:38] <nigwil> 2 3 7972 19885 0.30
[9:38] <nigwil> 4 7 15678 86320 1.32
[9:38] <nigwil> 8 15 441050 6429139 98.29
[9:38] <nigwil> total free extents 470653
[9:42] <fireD_> what's your mon_osd_full_ratio and mon_osd_nearfull_ratio?
[9:42] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:42] * fireD_ is now known as fireD
[9:43] <nigwil> the defaults I expect (I didn't change anything) so 0.95
[9:43] <nigwil> 0.85
[9:44] <nigwil> what I find surprising is that the OSD won't start because the filesystem is too full, but the check is low-level
[9:44] <fireD> check with something like ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | egrep 'mon_osd_full_ratio|mon_osd_nearfull_ratio'
[9:45] <Kioob> but here there is a system error 28 "No space left on device"; it's not a Ceph check, right?
[9:46] <yy-nm> hey, all. i have a question about exporting rbd snaps: can "rbd export" export an rbd snap???
[9:46] <nigwil> Kioob: asok config show won't work since the OSD has not started so the socket is not there
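When df still shows free blocks but the filestore hits ENOSPC on an xattr write, something other than plain data space is usually exhausted (inodes, or badly fragmented free space); a couple of hedged checks using the device and path from the log:

    df -h /var/lib/ceph/osd/ceph-10      # data blocks
    df -i /var/lib/ceph/osd/ceph-10      # inode usage
    xfs_db -r -c "freesp -s" /dev/sdc1   # free-extent histogram (read-only)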
[9:48] <topro> how much to keep prepared per MDS cache inode, anyone?
[9:48] <topro> how much ram ^^
[9:49] <topro> dumpcache just gave me a file with 3.6M lines while I configured mds cache size to 1.5M
[9:49] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[9:49] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[9:50] <topro> I think there is a real severe problem with cleaning up MDS cache (dumpling 0.67.2+a708c8ab52)
[9:51] <topro> ^^ this is with 8 linux-3.9 kernel clients
[9:56] <nigwil> is service --allhosts stop enough to stop the cluster?
[10:00] * allsystemsarego (~allsystem@5-12-37-158.residential.rdsnet.ro) has joined #ceph
[10:02] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[10:02] <nigwil> it seems not
[10:02] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[10:03] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[10:05] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[10:07] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[10:11] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) Quit (Ping timeout: 480 seconds)
[10:25] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[10:27] * hugo (~hugo@50-197-147-249-static.hfc.comcastbusiness.net) has joined #ceph
[10:27] <hugo> Hello
[10:27] <hugo> Does anyone know how to set the owner of a pool? appreciate~~
[10:28] <Kioob> owner ?
[10:28] <Kioob> (hi)
[10:28] <hugo> hi
[10:28] <hugo> pool 6 '.rgw' rep size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 400 pgp_num 400 last_change 177 owner 0
[10:29] <hugo> I deleted the .rgw pool and re-added it with a higher pg number
[10:29] <hugo> but I found that the owner was changed
[10:29] <hugo> the original owner value is "owner 18446744073709551615"
[10:30] <hugo> Do I need to modify it to make RadosGW work properly?
[10:30] <nigwil> rados chown UID
[10:30] <hugo> how to specify the pool name ?
[10:31] <nigwil> -p pool
[10:31] <nigwil> --pool=pool
[10:32] <nigwil> https://github.com/ceph/ceph/blob/master/src/rados.cc
[10:32] <nigwil> sadly the online doc is out of date, source-code is your friend it seems
[10:33] <hugo> nigwil: sexy ..
[10:33] <hugo> an additional question: I found that the uid doesn't seem to be the owner id ... how do I query the uid of owner 18446744073709551615 ?
[10:35] <hugo> for example : rados --pool .rgw chown 18446744073709551615 .... the result is "changed auid on pool .rgw to 9223372036854775807"
[10:35] * hugo crying
[10:35] <nigwil> good question...
[10:36] * hugo reading the code too...
[10:36] * LeaChim (~LeaChim@97e00ac2.skybroadband.com) has joined #ceph
[10:38] * cofol1986 (~xwrj@110.90.119.113) Quit (Quit: Leaving.)
[10:40] * Lea (~LeaChim@97e00ac2.skybroadband.com) has joined #ceph
[10:41] <hugo> ok... funny.... it appears to be a bug with the auid 18446744073709551615
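For what it's worth: 18446744073709551615 is 2^64 - 1, i.e. (uint64_t)-1, presumably the "no particular owner" sentinel, and 9223372036854775807 is 2^63 - 1, so the value hugo passes appears to be clamped to the maximum of a signed 64-bit integer somewhere along the way; that would be consistent with his conclusion that this is a bug rather than a usable owner id.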
[10:51] * mattt (~mattt@92.52.76.140) has joined #ceph
[10:54] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[10:55] * shang (~ShangWu@122-116-16-162.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[10:57] * jcfischer_ (~fischer@user-28-9.vpn.switch.ch) has joined #ceph
[11:01] * jcfischer (~fischer@130.59.94.167) Quit (Ping timeout: 480 seconds)
[11:01] * jcfischer_ is now known as jcfischer
[11:04] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[11:05] <cofol1986> hello, I've got some pgs that have been stuck in this status for quite a long time: active+recovering. How do I eliminate it?
[11:07] * jluis (~joao@89-181-152-211.net.novis.pt) has joined #ceph
[11:07] * ChanServ sets mode +o jluis
[11:07] * X3NQ (~X3NQ@195.191.107.205) has joined #ceph
[11:09] <nerdtron> cofol1986, is your cluster healthy? or are there any down osds? or have you added more osds?
[11:14] * hugo (~hugo@50-197-147-249-static.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[11:14] * joao (~joao@89-181-152-211.net.novis.pt) Quit (Ping timeout: 480 seconds)
[11:22] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[11:23] * jcfischer (~fischer@user-28-9.vpn.switch.ch) Quit (Quit: jcfischer)
[11:23] * mozg (~andrei@86.188.208.210) has joined #ceph
[11:23] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[11:24] * Cube1 (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[11:24] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[11:27] * Cube1 (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit ()
[11:33] * jcfischer (~fischer@130.59.94.167) has joined #ceph
[11:33] * jcfischer (~fischer@130.59.94.167) Quit ()
[11:34] * yy-nm (~Thunderbi@220.184.128.218) Quit (Quit: yy-nm)
[11:39] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[11:40] * danieagle (~Daniel@177.97.250.27) Quit (Quit: inte+ e Obrigado Por tudo mesmo! :-D)
[11:44] * shdb (~shdb@80-219-0-163.dclient.hispeed.ch) has joined #ceph
[11:54] * ross_ (~ross@60.208.111.209) Quit (Ping timeout: 480 seconds)
[11:57] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[11:59] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[12:04] * jcfischer (~fischer@130.59.94.167) has joined #ceph
[12:08] * jcfischer_ (~fischer@user-23-18.vpn.switch.ch) has joined #ceph
[12:09] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:09] * mozg (~andrei@86.188.208.210) Quit (Read error: Connection reset by peer)
[12:11] <cfreak200> Can someone explain to me why it shows HEALTH_WARN > http://nopaste.info/ea52f830d4.html < ? Does look ok to me...
[12:12] * jcfischer (~fischer@130.59.94.167) Quit (Ping timeout: 480 seconds)
[12:12] * jcfischer_ is now known as jcfischer
[12:14] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[12:23] <nerdtron> you don't have an mds server?
[12:25] <cfreak200> just 1 for testing. We suspect that it's because of that
[12:25] <cfreak200> or had one for testing.. actually not relevant for the current ceph usage
[12:26] <nerdtron> ceph health detail
[12:26] <cfreak200> ahh disk space, thanks
[12:27] <nerdtron> huh? what about disk space?
[12:27] <cfreak200> it's complaining about only 28% disk space left
[12:28] <cfreak200> on one of the mons
[12:28] <nerdtron> oh...your mons are not balanced, 3 mons and 8 osd??
[12:28] <nerdtron> mine is 3 mons 3 osd and identical hard drive capacity..
[12:29] <cfreak200> i've 1 dedicated mon/mds server
[12:29] * madkiss (~madkiss@tmo-102-146.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[12:29] <cfreak200> and on both osd-servers I have 4 osds each, along with one mon
[12:29] <nerdtron> oh
[12:29] <cfreak200> that worked well so far... did some "plug the cable" tests without any impact on availability
[12:30] <nerdtron> 58078 GB / 59599 GB avail; why is it complaining about 28% disk space left?
[12:30] * via (~via@smtp2.matthewvia.info) Quit (Ping timeout: 480 seconds)
[12:31] <cfreak200> mon data is stored outside the actual ceph space
[12:31] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:31] <nerdtron> oh
[12:31] * KindTwo (~KindOne@h210.209.89.75.dynamic.ip.windstream.net) has joined #ceph
[12:31] * KindTwo is now known as KindOne
[12:31] <cfreak200> the lvm where the mon data is stored is just a couple of GB..
[12:32] * via (~via@smtp2.matthewvia.info) has joined #ceph
[12:34] * julian (~julianwa@125.70.133.187) Quit (Quit: afk)
[12:40] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[12:41] * nerdtron (~kenneth@202.60.8.252) Quit (Remote host closed the connection)
[12:49] <pressureman> are there any more detailed docs on configuring ceph-rest-api? so far i've only found http://ceph.com/docs/next/man/8/ceph-rest-api/
[12:50] <mattt> pressureman: you mean the s3/swift gateway stuff ?
[12:51] <pressureman> nope, the new admin REST API that landed in dumpling
[12:51] <mattt> pressureman: hadn't even seen that :( sorry, ignore me
[12:52] * jcfischer_ (~fischer@macjcf.switch.ch) has joined #ceph
[12:52] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[12:52] * ChanServ sets mode +v andreask
[12:52] <pressureman> hmm. maybe i've solved it. i don't have a client named "client.restapi" set up...
[12:53] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[12:53] <mattt> good stuff
[12:54] * jcfischer (~fischer@user-23-18.vpn.switch.ch) Quit (Ping timeout: 480 seconds)
[12:54] * jcfischer_ is now known as jcfischer
[12:55] * jtang1 (~jtang@130.226.230.9) has joined #ceph
[13:00] <pressureman> wow... this admin REST api looks pretty powerful, and perfect for cluster health monitoring. i can see a nagios plugin being developed for this ;-)
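A rough sketch of the client.restapi setup pressureman alludes to, based on the ceph-rest-api man page of that release; the caps, keyring path, port and URL prefix below are assumptions to verify rather than settled facts:

    # create a key for the default rest-api client name
    ceph auth get-or-create client.restapi \
        mon 'allow *' osd 'allow *' mds 'allow *' \
        -o /etc/ceph/ceph.client.restapi.keyring

    # ceph.conf, so the daemon can find its key:
    #   [client.restapi]
    #       keyring = /etc/ceph/ceph.client.restapi.keyring

    ceph-rest-api -n client.restapi
    curl http://localhost:5000/api/v0.1/health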
[13:00] * KindTwo (~KindOne@h121.56.186.173.dynamic.ip.windstream.net) has joined #ceph
[13:00] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:01] * KindTwo is now known as KindOne
[13:02] * madkiss (~madkiss@tmo-102-146.customers.d1-online.com) has joined #ceph
[13:03] * psiekl (psiekl@wombat.eu.org) Quit (Ping timeout: 480 seconds)
[13:08] * jtang1 (~jtang@130.226.230.9) Quit (Quit: Leaving.)
[13:14] * jtang1 (~jtang@130.226.230.9) has joined #ceph
[13:14] * jtang1 (~jtang@130.226.230.9) Quit ()
[13:18] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[13:18] * mozg (~andrei@86.188.208.210) has joined #ceph
[13:20] * psiekl (psiekl@wombat.eu.org) has joined #ceph
[13:34] * Xiol_ (~Xiol@94-193-254-111.zone7.bethere.co.uk) has joined #ceph
[13:36] * Xiol (~Xiol@94-193-254-111.zone7.bethere.co.uk) Quit (Read error: Operation timed out)
[13:40] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:46] * hughsaunders (~oftc-webi@94.236.7.190) has joined #ceph
[13:55] * vipr (~vipr@frederik.pw) Quit (Quit: leaving)
[13:57] * vipr (~vipr@frederik.pw) has joined #ceph
[13:58] <joelio> pressureman: that's the next thing on my list here.. A nagios plugin. I've looked at the current ones and they just parse command line output
[14:03] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[14:03] * ChanServ sets mode +v andreask
[14:03] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:03] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:06] * mozg (~andrei@86.188.208.210) Quit (Quit: Ex-Chat)
[14:08] * yanzheng (~zhyan@134.134.139.76) has joined #ceph
[14:14] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:15] * DarkAce-Z (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[14:16] * SJVNH3VN5TQ3YH (~egyptmosl@41.46.216.251) has joined #ceph
[14:16] <SJVNH3VN5TQ3YH> Do skype,yahoo,other chat and social communication prog (facebook&twitter) spy for Israel & usa ???
[14:16] <SJVNH3VN5TQ3YH> Do they record and analyse everything we type on the internet???
[14:16] <SJVNH3VN5TQ3YH> Do chat and social networking programs spy for Israel and America????
[14:16] <maciek> blah blah blah #gtfo
[14:17] <Kioob> SJVNH3VN5TQ3YH: of course they do. With Ceph you can store a lot of data. Perfect product to spy !
[14:17] <maciek> :-D
[14:17] * SJVNH3VN5TQ3YH (~egyptmosl@41.46.216.251) Quit (Excess Flood)
[14:18] * JustEra (~JustEra@89.234.148.11) Quit (Ping timeout: 480 seconds)
[14:22] <janos> Kioob: haha
[14:25] * joelio waits for the NSA Whitepaper
[14:27] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 23.0.1/20130814063812])
[14:27] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[14:28] <nhm_> good grief
[14:30] * smiley__ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley__)
[14:30] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[14:31] * Macheske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[14:33] * Machske (~Bram@d5152D87C.static.telenet.be) Quit (Ping timeout: 480 seconds)
[14:33] * DarkAce-Z (~BillyMays@50.107.55.36) has joined #ceph
[14:35] * alfredodeza wishes he had op right now
[14:36] <liiwi> I'd prefer beer
[14:36] <alfredodeza> beer too, but I am about to complete my coffee
[14:36] <alfredodeza> :)
[14:39] * hugo (~hugo@118.233.227.15) has joined #ceph
[14:41] * joao (~joao@89-181-152-211.net.novis.pt) has joined #ceph
[14:41] * ChanServ sets mode +o joao
[14:41] * thorus (~jonas@82.199.158.66) has left #ceph
[14:44] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[14:45] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:46] <nhm_> alfredodeza: huh, I forget who is in charge of ops stuff
[14:47] <nhm_> alfredodeza: dmick might be teh right person to talk to
[14:47] <alfredodeza> we have joao now here \o/
[14:47] * nhm_ is now known as nhm
[14:47] * jluis (~joao@89-181-152-211.net.novis.pt) Quit (Ping timeout: 480 seconds)
[14:48] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[14:55] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[15:00] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:01] <mattt> trying to use a ceph cluster for glance imaging using radosgw
[15:01] <mattt> i can upload fine, but when i try to delete the images it errors out
[15:05] <mattt> appears that i can delete individual chunks w/ swift client, but deleting the "top level" UUID file causes radosgw to timeout
[15:07] <zackc> mattt: i'm not sure if i can help you, but any error text you have could be useful
[15:08] <loicd> zackc: jenkins is no go, my hunch was not a good one ;) ( posted a mail to the list )
[15:08] <mattt> zackc: yeah, i'm trying to piece a few things together right now …
[15:11] <zackc> loicd: aww. which list? i'm not seeing your message.
[15:13] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[15:13] <ScOut3R> identify 123ToEr
[15:14] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has left #ceph
[15:14] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[15:15] <loicd> devel
[15:15] <zackc> that's unfortunate.
[15:15] <loicd> zackc: ceph-devel
[15:15] <zackc> loicd: that's... weird. i don't have the mail. let me check the archives.
[15:16] <loicd> scuttlemonkey: I think the scout asks to be kicked
[15:16] <mattt> zackc: so glance is chunking the image, and it's when you try to retrieve the manifest file in ceph/swift that it sits there spinning for ages
[15:16] <loicd> scuttlemonkey: that's how good my arabic understanding is
[15:17] <zackc> loicd: the last messages i see from you on ceph-devel are from last week - even in the archives
[15:17] <mattt> zackc: http://pastebin.com/CpZPpsbJ
[15:17] <loicd> zackc: I forwarded it to you right now.
[15:18] <mattt> zackc: if you tail radosgw.log while you try to delete/download the manifest, you'll see a ton of entries like that
[15:18] * markbby (~Adium@168.94.245.2) has joined #ceph
[15:19] <zackc> hmmm
[15:19] * jcsp (~john@fluency-gw1.summerhall.co.uk) has joined #ceph
[15:21] * sjm (~sjm@ip66-104-240-108.z240-104-66.customer.algx.net) has joined #ceph
[15:24] <topro> seems like one of my cephfs problems is that I get corrupted metadata (MDS) when MDS needs to get restarted. corrupted files tend to show exactly 4MiB file size, even if it was a file of only some KiB size, originally. anyone encountered a similar situation?
[15:27] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:27] <zackc> mattt: yeah, this is over my head. if you haven't sent a message to ceph-devel, could you do so?
[15:34] * malcolm (~malcolm@101.165.48.42) has joined #ceph
[15:39] <zackc> topro: you may also want to send a mail to one of the lists :-/
[15:39] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[15:40] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:42] * doxavore (~doug@99-89-22-187.lightspeed.rcsntx.sbcglobal.net) has joined #ceph
[15:44] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[15:44] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[15:47] * jcsp (~john@fluency-gw1.summerhall.co.uk) Quit (Quit: Ex-Chat)
[15:48] * mschiff_ (~mschiff@p4FD7E621.dip0.t-ipconnect.de) has joined #ceph
[15:48] <mattt> zackc: not sure there's a problem actually
[15:49] * mschiff (~mschiff@p4FD7E621.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[15:49] <mattt> zackc: think i'm just getting caught out everywhere w/ the synchronous writes (which i don't believe swift does)
[15:50] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[15:57] * malcolm (~malcolm@101.165.48.42) Quit (Ping timeout: 480 seconds)
[15:59] * hugo (~hugo@118.233.227.15) Quit (Remote host closed the connection)
[16:03] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[16:05] * hugo (~hugo@118.233.227.15) has joined #ceph
[16:08] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[16:09] <loicd> zackc: <jeblair> dachary: zuul is the job scheduler we use in openstack-infra, it triggers jenkins jobs via the gearman protocol (we wrote a gearman-plugin for jenkins)
[16:09] <loicd> <jeblair> dachary: but it's not tied directly to jenkins; so you could have something in addition or instead of jenkins run the actual jobs
[16:10] <loicd> http://ci.openstack.org/zuul/ this may be a lead. I'll dig further. zuul was ( in my mind ) a gating system ( that's what it says ) and not a job scheduler.
[16:10] <loicd> to be continued ...
[16:11] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[16:12] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[16:19] * sjm (~sjm@ip66-104-240-108.z240-104-66.customer.algx.net) Quit (Remote host closed the connection)
[16:23] * jcsp (~john@fluency-gw1.summerhall.co.uk) has joined #ceph
[16:24] * clayb (~kvirc@199.172.169.79) has joined #ceph
[16:25] * wrale (~wrale@wrk-28-217.cs.wright.edu) Quit (Quit: Leaving)
[16:33] * sprachgenerator (~sprachgen@130.202.135.180) has joined #ceph
[16:33] * mschiff_ (~mschiff@p4FD7E621.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[16:34] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[16:35] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[16:38] * kyann (~oftc-webi@AMontsouris-652-1-212-179.w90-46.abo.wanadoo.fr) has joined #ceph
[16:38] <kyann> hi !
[16:42] <joao> hey kyann
[16:42] <joao> how's it going?
[16:43] <kyann> joao: I'm having an issue with the osd, something that looks like this : http://tracker.ceph.com/issues/4602 :(
[16:44] <joao> are you guys still on cuttlefish?
[16:44] <kyann> yes, i have been told to wait a week or two
[16:45] <kyann> before upgrade
[16:48] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[16:50] * madkiss (~madkiss@tmo-102-146.customers.d1-online.com) Quit (Quit: Leaving.)
[16:53] * BillK (~BillK-OFT@124-171-168-171.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:54] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:55] * pressureman (~pressurem@62.217.45.26) Quit (Quit: Ex-Chat)
[16:59] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[17:00] * jcsp (~john@fluency-gw1.summerhall.co.uk) Quit (Ping timeout: 480 seconds)
[17:04] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:08] * yanzheng (~zhyan@134.134.139.76) Quit (Ping timeout: 480 seconds)
[17:11] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[17:14] * sage (~sage@76.89.177.113) has joined #ceph
[17:16] * vata (~vata@2607:fad8:4:6:c0c6:4504:1641:57af) has joined #ceph
[17:17] * Vjarjadian (~IceChat77@176.254.37.210) has joined #ceph
[17:19] * sagelap (~sage@2600:1012:b004:b945:f836:7a76:e39:2411) has joined #ceph
[17:23] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[17:24] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[17:28] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[17:28] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit ()
[17:33] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[17:33] * odyssey4me (~odyssey4m@41-133-58-101.dsl.mweb.co.za) has joined #ceph
[17:35] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[17:38] * glowell1 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:38] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[17:39] * odyssey4me2 (~odyssey4m@165.233.71.2) has joined #ceph
[17:40] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:41] * odyssey4me (~odyssey4m@41-133-58-101.dsl.mweb.co.za) Quit (Ping timeout: 480 seconds)
[17:45] * hugo (~hugo@118.233.227.15) Quit (Remote host closed the connection)
[17:47] * sleinen1 (~Adium@2001:620:0:46:b41c:e261:23e9:e17e) Quit (Ping timeout: 480 seconds)
[17:49] * odyssey4me (~odyssey4m@41-133-58-101.dsl.mweb.co.za) has joined #ceph
[17:50] * jcfischer (~fischer@macjcf.switch.ch) Quit (Ping timeout: 480 seconds)
[17:51] * haomaiwa_ (~haomaiwan@112.193.130.93) Quit (Remote host closed the connection)
[17:51] * haomaiwang (~haomaiwan@li498-162.members.linode.com) has joined #ceph
[17:55] * JustEra (~JustEra@89.234.148.11) has joined #ceph
[17:55] * odyssey4me2 (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[17:57] * JustEra (~JustEra@89.234.148.11) Quit ()
[17:57] * odyssey4me (~odyssey4m@41-133-58-101.dsl.mweb.co.za) Quit (Ping timeout: 480 seconds)
[17:58] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[18:00] * mattt (~mattt@92.52.76.140) Quit (Read error: Connection reset by peer)
[18:00] * sagelap1 (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[18:01] * sjm (~sjm@38.122.20.226) has joined #ceph
[18:02] * sagelap (~sage@2600:1012:b004:b945:f836:7a76:e39:2411) Quit (Ping timeout: 480 seconds)
[18:04] * hugo (~hugo@118.233.227.15) has joined #ceph
[18:06] <sagewk> mtanski: ping
[18:06] <sjust> kyann: where did you get the ceph package?
[18:07] <mtanski> holla
[18:07] <sagewk> just saw david's email
[18:08] * angdraug (~angdraug@204.11.231.50.static.etheric.net) has joined #ceph
[18:08] <sagewk> i can rebuild the branch against his patches. can you push your warning fix? didn't see it in the bitbucket adfin tree
[18:08] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[18:09] * alram (~alram@38.122.20.226) has joined #ceph
[18:09] <mtanski> I just talked to David; it turns out he had some email snafu
[18:09] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[18:10] <mtanski> Would you like me to push the fix on top of the current one (i.e. without David's updates), or also incorporate David's updates?
[18:10] <mtanski> Also, sorry for being a pain in the ass.
[18:10] <sagewk> mtanski: no worries, coordinating the multiple trees is always somehow harder than it should be
[18:11] <sagewk> i would just build your branch that pulls things in the right order and i'll rebase the ceph tree accordingly
[18:11] * doxavore (~doug@99-89-22-187.lightspeed.rcsntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[18:12] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[18:14] <mtanski> The problem is that David's tree is ahead of the Ceph tree… and he'd like us to merge his fscache-fixes-for-ceph trag
[18:14] <mtanski> tag*
[18:15] <mtanski> He cherry-picked some of my patches, and his fixes to cifs depend on one of them
[18:15] <mtanski> This makes my head hurt
[18:17] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:19] <sagewk> zackc: https://github.com/ceph/teuthology/pull/75
[18:20] <sagewk> mtanski: doesn't matter; just base your stuff on his tree and i can merge it into my master
[18:20] <sagewk> right?
[18:20] <kraken> http://i.imgur.com/RvquHs0.gif
[18:21] * grepory (~Adium@12.236.17.3) has joined #ceph
[18:22] * doxavore (~doug@99-89-22-187.lightspeed.rcsntx.sbcglobal.net) has joined #ceph
[18:24] <mtanski> Sure
[18:27] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[18:27] * haomaiwang (~haomaiwan@li498-162.members.linode.com) Quit (Remote host closed the connection)
[18:27] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[18:29] * sleinen1 (~Adium@2001:620:0:26:29dd:86a4:8a51:6673) has joined #ceph
[18:29] <zackc> sagewk: ooh nice
[18:32] * nwat (~nwat@eduroam-237-79.ucsc.edu) has joined #ceph
[18:35] <sagewk> zackc: hmm, teuthology-master directory disappeared out of ~teuthworker
[18:35] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:39] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Bye!)
[18:39] <sagewk> zackc: alfredodeza: https://github.com/ceph/teuthology/pull/77
[18:40] <sagewk> i think this fix didn't make it into the tree; i had modified it in place trying to debug the situation last week. <facepalm>
[18:40] * doxavore (~doug@99-89-22-187.lightspeed.rcsntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[18:40] <alfredodeza> merged
[18:41] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[18:43] <sagewk> thanks!
[18:47] <zackc> ha!
[18:47] <zackc> awesome.
[18:50] * yehuda_hm (~yehuda@2602:306:330b:1410:baac:6fff:fec5:2aad) Quit (Ping timeout: 480 seconds)
[18:52] <mtanski> I'm rebuilding the fixed tree and then I'll give a quick test
[18:53] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[18:53] * doxavore (~doug@99-7-52-88.lightspeed.rcsntx.sbcglobal.net) has joined #ceph
[18:53] <mtanski> I'll also stick this on one of my beta nodes; the people hitting the test environment will quickly find out if that bug I sent to David was fixed properly
[18:53] <mtanski> because it seems to come up when you have multi-node concurrency (and thus a bunch of revalidates queued up)
[18:55] * sjm (~sjm@38.122.20.226) Quit (Remote host closed the connection)
[18:59] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[18:59] * yehuda_hm (~yehuda@2602:306:330b:1410:ec0d:7518:62da:7c01) has joined #ceph
[19:01] * dpippenger1 (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Quit: Leaving.)
[19:06] * aliguori (~anthony@cpe-70-112-153-179.austin.res.rr.com) has joined #ceph
[19:08] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[19:09] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[19:12] <decede> does a mon or osd definition in the config file need to use a number like [mon.1] rather than [mon.a]?
[19:12] * sjustlaptop (~sam@2607:f298:a:607:64d5:abc:d5a7:d01f) has joined #ceph
[19:21] * doxavore (~doug@99-7-52-88.lightspeed.rcsntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[19:22] <sagewk> nwat: ping
[19:22] <sagewk> nwat: just noticed your build-refactor branch.. have you see wip-automake? roald is working on the same thing i think
[19:23] <xarses> decede, i understand that it is convention to use letters for mon and numbers for osd
[19:23] <sagewk> joao: issue 6147 is a smaller one that might be worth doing before the ping thing
[19:23] <kraken> sagewk might be talking about: http://tracker.ceph.com/issues/6147 [mon: calculate, expose per-pool pg stat deltas]
[19:24] <decede> xarses: but i could use hostname?
[19:24] <sagewk> xarses: hostnames on mons, ideally
[19:24] <nwat> sagewk: oh, no i didn't see it.
[19:24] <xarses> sagewk for the ini block?
[19:25] <sagewk> if you're using ceph-deploy you don't need to modify ceph.conf at all, and it uses hostnames on its own
[19:29] <nwat> sagewk: i'll ping him and see if there is anything we can combine
[19:30] <sagewk> nwat: perfect
[19:30] <xarses> sagewk, ya I still have lots of questions and confusion about what needs to be in ceph.conf in relation to defining the monitor nodes
[19:32] <sagewk> xarses: ceph-deploy will do it all for you if you're using that. if you're not using ceph-deploy, you probably should..
[19:32] <xarses> https://gist.github.com/xarses/6384459 specifically
[19:32] * diegows (~diegows@190.190.11.42) has joined #ceph
[19:32] <xarses> sagewk, probably should define the settings in ceph.conf or use ceph-deploy?
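For reference, a hand-written ceph.conf of the kind being discussed uses lettered mon sections and numbered osd sections, each with a host line; a minimal sketch, with placeholder host names and address:
    [mon.a]
    host = mon-host-a
    mon addr = 192.168.0.10:6789
    [osd.0]
    host = osd-host-0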
[19:36] * sjustlaptop1 (~sam@38.122.20.226) has joined #ceph
[19:36] * sjustlaptop (~sam@2607:f298:a:607:64d5:abc:d5a7:d01f) Quit (Ping timeout: 480 seconds)
[19:40] <bstillwell> What's the best way to replace a failed drive on a cluster built with ceph-deploy?
[19:41] <xarses> start over? =P
[19:41] <bstillwell> heh
[19:41] <nhm> xarses: that's what I do for performance analysis. ;D
[19:42] <bstillwell> Is it just 'ceph-deploy osd prepare' and 'ceph-deploy osd activate'? Or do I need to remove the drive first?
[19:45] <bstillwell> whoa, just tried a dry-run of that and it created a new OSD for me
[19:45] <alfredodeza> bstillwell: what do you mean by a dry-run
[19:45] * alfredodeza is not aware of dry-run mode in ceph-deploy
[19:45] <bstillwell> 'ceph-deploy --help' lists:
[19:45] <bstillwell> -n, --dry-run do not perform any action, but report what would be done
[19:46] <alfredodeza> aha!
[19:46] <alfredodeza> hrmn
[19:46] <alfredodeza> let me take a look
[19:47] <alfredodeza> bstillwell: well, this is unfortunate but that dry-run command is a (partial) lie
[19:47] <alfredodeza> :(
[19:47] <bstillwell> k
[19:47] <xarses> alfredodeza: re https://github.com/ceph/ceph-deploy/pull/65, why not just allocate the tty?
[19:47] <bstillwell> So I wanted to replace the drive for osd.15, but it created osd.40 instead.
[19:47] <bstillwell> Is there a way to remove osd.40 and then make the drive osd.15?
[19:47] <alfredodeza> the only thing it is doing is writing a temp file with a config
[19:48] <mtanski> sagewk: I have a tree for you https://bitbucket.org/adfin/linux-fs/commits/branch/wip-fscache-v2
[19:48] <alfredodeza> and it is only for `ceph-deploy new`
[19:48] * alfredodeza fixes this
[19:49] <mtanski> I took wip-fscache (without Ceph fscache support), merged David's tag, rebased my patches on top of it (including the warning fixes)
[19:49] <mtanski> i also rebooted and gave it a quick test
[19:49] <wusui> nhm: Mark. are you there?
[19:50] <mtanski> although when i tested I had Yan's sentry patch on top (which i cherry picked for linus' master)
[19:50] <nhm> wusui: hello!
[19:50] <mtanski> otherwise the machine i test on consistently runs into this issue
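The workflow mtanski describes amounts to roughly the git sketch below: start from the wip-fscache base without the Ceph fscache patches, merge David's fscache-fixes-for-ceph tag, then re-apply his own patches (warning fix included). The remote name and commit range here are hypothetical:
    git checkout -b wip-fscache-v2 upstream/wip-fscache   # base branch as described, without the Ceph fscache support
    git merge fscache-fixes-for-ceph                      # pull in David's tagged fixes
    git cherry-pick A..B                                  # re-apply the Ceph fscache patches on top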
[19:50] <wusui> nhm: Hi. wanna talk about what needs to be done to get real data onto the database?
[19:52] <bstillwell> So if I manually remove both osd.15 and osd.40, would running 'ceph-deploy osd prepare' use osd.15?
[19:52] <nhm> wusui: sure!
[19:52] <bstillwell> Because I'm not seeing anything about it supporting removing an osd
[19:52] <alfredodeza> bstillwell: you can tell it explicitly
[19:53] <alfredodeza> by using HOST:DISK[:JOURNAL]
[19:53] <nhm> wusui: So if you look in say /a/nhm-2013-09-05_11:26:22-performance-master-testing-basic-burnupi/22589 on teuthology you can see the output in the teuthology.log file from one of the runs.
[19:53] <bstillwell> alfredodeza: How does that specify the OSD number?
[19:54] <bstillwell> Heh, just found this bug: http://tracker.ceph.com/issues/3480
[19:54] <alfredodeza> bstillwell: you can do HOST:/path/to/osd
[19:54] <nhm> wusui: it's buried, but if you search for Bandwidth you'll end up near the summary of the results.
[19:55] <nhm> wusui: that's just for one client though, so we may need to either aggregate the results here or possibly in chartio itself.
[19:55] <nhm> (which would mean another column for the client id)
[19:56] <wusui> nhm: I see two numbers in each teuthology.log Bandwidth, and Stddev Bandwidth
[19:56] <bstillwell> alfredodeza: ahh
[19:57] <nhm> wusui: yeah, that's just the stddev across the run
[19:58] * dpippenger1 (~riven@tenant.pas.idealab.com) has joined #ceph
[19:58] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[19:58] * dpippenger (~riven@tenant.pas.idealab.com) Quit ()
[19:58] <nhm> there are also per-second values. It might be worth storing the per-second data and having chartio just execute an avg() or something. It looks like there may be some way to have chartio drill down into the data which would be cool.
[20:00] * Muhlemmer (~kvirc@78.96.254.85) has joined #ceph
[20:00] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[20:01] <wusui> nhm: so we want to stash records with these 2 bandwidth values as two of the columns? and what is the client id we should stash?
[20:03] <nhm> wusui: well, we can just start out with Bandwidth for now maybe.
[20:04] <nhm> regarding client ID, we'd have to figure out which copy of which rados bench instance is doing the work. I think we might be getting that now through the: INFO:teuthology.task.radosbench.radosbench.0.out:[10.214.134.22] portion of the line.
[20:05] <nhm> it may even be that we only need radosbench.<num> if <num> increments for every concurrent copy of rados bench being run.
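Pulling those summary lines out of the logs can be as simple as the grep below; rados bench prints "Bandwidth (MB/sec):" and "Stddev Bandwidth:" in its summary, and the path pattern here is only a guess at the archive layout:
    grep -H 'Bandwidth' /a/nhm-*/*/teuthology.log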
[20:06] * hughsaunders (~oftc-webi@94.236.7.190) Quit (Remote host closed the connection)
[20:07] * dosaboy_ (~dosaboy@host109-145-44-0.range109-145.btcentralplus.com) has joined #ceph
[20:07] <wusui> nhm: okay. I guess I can throw together a command that, given a directory in /a, writes this stuff to the database. Then we could have a script populate the database with old data. After that, we could either periodically run this update command, or we could add a call to this command in whatever code runs this test. (This is basically what I am doing for results saving)
[20:08] * imjustmatthew (~imjustmat@c-24-127-107-51.hsd1.va.comcast.net) Quit (Remote host closed the connection)
[20:09] <nhm> wusui: cool. Whatever you think is best. I figure if you've got a standard way you want to do this we can have it be based on the same code.
[20:09] * kyann (~oftc-webi@AMontsouris-652-1-212-179.w90-46.abo.wanadoo.fr) Quit (Quit: Page closed)
[20:09] * hugo (~hugo@118.233.227.15) Quit (Remote host closed the connection)
[20:11] <bstillwell> cool, manually removing osd.15 and osd.40 followed by 'ceph-deploy osd prepare den2ceph004:/dev/sdb' worked for replacing osd.15
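The manual removal bstillwell did is roughly the standard sequence below (osd.40 removed the same way); since a newly created OSD takes the lowest unused id, clearing both entries first is what let the re-prepared disk come back as osd.15:
    ceph osd out 15                                # make sure it is marked out
    ceph osd crush remove osd.15                   # drop it from the CRUSH map
    ceph auth del osd.15                           # remove its authentication key
    ceph osd rm 15                                 # remove the OSD entry itself
    ceph-deploy osd prepare den2ceph004:/dev/sdb   # re-prepare the replacement disk, as in the log above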
[20:11] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[20:12] <wusui> nhm: I need to make it the same code... I'll see what I can do. If I get the database set up with what is currently in /a/nhm*/*, and get a command that you can use to write more records into the db, would that be sufficient?
[20:13] * dosaboy (~dosaboy@host109-156-222-77.range109-156.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[20:14] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Read error: Connection reset by peer)
[20:15] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[20:15] * sjustlaptop1 (~sam@38.122.20.226) Quit (Read error: Connection reset by peer)
[20:15] <wusui> nhm: How permanent is the machine that you are using to store this data? Would it be okay if I store the results data on this as well?
[20:16] * sjustlaptop (~sam@2607:f298:a:697:f155:a422:3f48:4166) has joined #ceph
[20:19] <cjh_> has anyone reported cephfs losing files recently? i mounted cephfs via fuse and i noticed today that all my files are gone
[20:21] <cjh_> i'm running 0.67.1-1quantal
[20:23] <nhm> wusui: the machine with the database? Go for it. It's a VM Ross set up for me. I think it's permanent indefinitely.
[20:24] <wusui> nhm: okay. I'll send you an email later today on the status of all of this...
[20:24] <nhm> wusui: cool, thanks! You'll make Sage very happy. :D
[20:24] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[20:31] * sjustlaptop (~sam@2607:f298:a:697:f155:a422:3f48:4166) Quit (Ping timeout: 480 seconds)
[20:33] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[20:33] <dmsimard> Huh.
[20:34] <dmsimard> $ ceph osd pool delete <pool> --yes-i-really-really-mean-it
[20:34] <dmsimard> Invalid command: saw 0 of --yes-i-really-really-mean-it, expected 1
[20:34] <joao> you should try '<pool> <pool>'
[20:34] * sjustlaptop (~sam@38.122.20.226) has joined #ceph
[20:34] <joao> I thought that had been fixed
[20:34] <dmsimard> Yeah, that worked
[20:34] <dmsimard> Interesting
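So the accepted form simply repeats the pool name as confirmation; with a hypothetical pool name:
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it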
[20:35] * grepory (~Adium@12.236.17.3) Quit (Quit: Leaving.)
[20:35] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[20:44] * kyann (~kyann@did75-15-88-160-187-237.fbx.proxad.net) has joined #ceph
[20:55] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:56] <odyssey4me> hmm, friday night - and I get a double-whammy of issues... lovely
[20:56] <odyssey4me> any thoughts on how to identify and resolve an issue with a file on cephfs that appears to be unreadable?
[20:58] <dmsimard> Trying some crash scenarios to make bad things happen and troubleshoot them, some issues I'm running into aren't easy :D
[20:58] <odyssey4me> lol, dmsimard - friday fun, eh?
[20:59] <dmsimard> Yeah, this time around it looks like I can't get every PG to come back clean.
[21:00] <sagewk> zackc: issue 6251: shouldn't the package install create this? curious how this comes up
[21:00] <dmsimard> http://pastebin.com/aqNE9219 :(
[21:01] <zackc> sagewk: no idea; saw it and filed it so i wouldn't forget
[21:01] <alfredodeza> issue 6251
[21:01] <kraken> alfredodeza might be talking about: http://tracker.ceph.com/issues/6251 [task/ceph.py:ceph_log() should create /var/log/ceph]
[21:01] <sagewk> ah, kraken doesn't like my :
[21:01] <alfredodeza> it totally does not
[21:02] * jeff-YF (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) has joined #ceph
[21:02] * sjustlaptop (~sam@38.122.20.226) Quit (Ping timeout: 480 seconds)
[21:18] * Muhlemmer (~kvirc@78.96.254.85) Quit (Quit: KVIrc 4.3.1 Aria http://www.kvirc.net/)
[21:19] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * yehudasa (~yehudasa@2607:f298:a:607:d6be:d9ff:fe8e:174c) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Kdecherf (~kdecherf@shaolan.kdecherf.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Azrael (~azrael@terra.negativeblue.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * masterpe (~masterpe@2a01:670:400::43) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jnq (~jon@0001b7cc.user.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * maswan (maswan@kennedy.acc.umu.se) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Jakdaw (~chris@puma-mxisp.mxtelecom.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * soren (~soren@hydrogen.linux2go.dk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * sbadia (~sbadia@yasaw.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * xmltok (~xmltok@pool101.bizrate.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * sprachgenerator (~sprachgen@130.202.135.180) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * via (~via@smtp2.matthewvia.info) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * mjeanson_ (~mjeanson@bell.multivax.ca) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * LPG_ (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * cjh_ (~cjh@ps123903.dreamhost.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * barryo (~borourke@cumberdale.ph.ed.ac.uk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * DLange (~DLange@dlange.user.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * torment2 (~torment@pool-72-91-185-71.tampfl.fios.verizon.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * miniyo (~miniyo@0001b53b.user.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * cfreak200 (~cfreak200@p4FF3F540.dip0.t-ipconnect.de) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * chiluk (~chiluk@cpe-70-124-70-187.austin.res.rr.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * [caveman] (~quassel@boxacle.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * a2_ (~avati@ip-86-181-132-209.redhat.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * med (~medberry@ec2-50-17-21-207.compute-1.amazonaws.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * iggy (~iggy@theiggy.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Enigmagic (~nathan@c-98-234-189-23.hsd1.ca.comcast.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * brambles (lechuck@s0.barwen.ch) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * nigwil (~idontknow@174.143.209.84) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * L2SHO (~adam@office-nat.choopa.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jpieper (~josh@209-6-205-161.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * tdb (~tdb@willow.kent.ac.uk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * gregaf (~Adium@2607:f298:a:607:c501:9f75:49ae:ffe5) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * niklas (niklas@vm15.hadiko.de) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * al (quassel@niel.cx) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * odyssey4me (~odyssey4m@165.233.71.2) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * dmsimard (~Adium@108.163.152.2) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * joao (~joao@89-181-152-211.net.novis.pt) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Macheske (~Bram@d5152D87C.static.telenet.be) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * ofu (ofu@dedi3.fuckner.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * fireD (~fireD@93-139-163-132.adsl.net.t-com.hr) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * nhm (~nhm@184-97-187-196.mpls.qwest.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * PITon (~pavel@195.182.195.107) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * NaioN (stefan@andor.naion.nl) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * rsanti (~rsanti@74.125.122.33) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * tobru (~quassel@2a02:41a:3999::94) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * godog (~filo@0001309c.user.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * adam4 (~adam@46-65-111-12.zone16.bethere.co.uk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jerker (jerker@82ee1319.test.dnsbl.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * bstillwell (~bryan@bokeoa.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * itatar (~itatar@209.6.175.46) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Guest3115 (~coyo@thinks.outside.theb0x.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * brother (foobaz@2a01:7e00::f03c:91ff:fe96:ab16) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * elmo (~james@faun.canonical.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Elbandi (~ea333@elbandi.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * SubOracle (~quassel@00019f1e.user.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * nyerup (irc@jespernyerup.dk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * cephalobot (~ceph@ds2390.dreamservers.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * tomaw (tom@tomaw.netop.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * sleinen1 (~Adium@2001:620:0:26:29dd:86a4:8a51:6673) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * mtanski (~mtanski@69.193.178.202) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jksM (~jks@3e6b5724.rev.stofanet.dk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * kislotniq (~kislotniq@193.93.77.54) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * lupine (~lupine@lupine.me.uk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * [fred] (fred@konfuzi.us) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * LCF (ball8@193.231.broadband16.iol.cz) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * loicd (~loicd@bouncer.dachary.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * MooingLemur (~troy@phx-pnap.pinchaser.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * ntranger (~ntranger@proxy2.wolfram.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * iii8 (~Miranda@91.207.132.71) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jefferai (~quassel@corkblock.jefferai.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * \ask (~ask@oz.develooper.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * sakari (sakari@turn.ip.fi) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * absynth (~absynth@irc.absynth.de) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * iggy_ (~iggy@theiggy.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * cce (~cce@50.56.54.167) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Gugge-47527 (gugge@kriminel.dk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * rtek (~sjaak@rxj.nl) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Ludo__ (~Ludo@falbala.zoxx.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * seif (uid11725@ealing.irccloud.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * phantomcircuit (~phantomci@covertinferno.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jeff-YF (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Vjarjadian (~IceChat77@176.254.37.210) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * capri_on (~capri@212.218.127.222) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * dlan (~dennis@116.228.88.131) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * mynameisbruce (~mynameisb@tjure.netzquadrat.de) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * zynzel (zynzel@spof.pl) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * tserong (~tserong@58-6-101-181.dyn.iinet.net.au) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jamespage (~jamespage@culvain.gromper.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * piti (~piti@82.246.190.142) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * paravoid (~paravoid@scrooge.tty.gr) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * hijacker (~hijacker@213.91.163.5) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * lmb (lmb@212.8.204.10) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * mdjunaid (uid13426@ealing.irccloud.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * mjevans- (~mje@209.141.34.79) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * r0r_taga (~nick@greenback.pod4.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * partner (joonas@ajaton.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * MrNPP (~MrNPP@216.152.240.194) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * vhasi (vhasi@vha.si) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Meyer^_ (meyer@c64.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * kyann (~kyann@did75-15-88-160-187-237.fbx.proxad.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * dosaboy_ (~dosaboy@host109-145-44-0.range109-145.btcentralplus.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * alram (~alram@38.122.20.226) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * mxmln (~maximilia@212.79.49.65) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * gregmark (~Adium@68.87.42.115) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * zackc (~zack@0001ba60.user.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * janisg (~troll@85.254.50.23) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * alexxy[home] (~alexxy@79.173.81.171) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * ggreg (~ggreg@int.0x80.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * mjblw (~mbaysek@wsip-174-79-34-244.ph.ph.cox.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Frank9999 (~Frank@kantoor.transip.nl) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * baffle (baffle@jump.stenstad.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * wogri (~wolf@nix.wogri.at) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * darkfader (~floh@88.79.251.60) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Rocky (~r.nap@188.205.52.204) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * skm (~smiley@205.153.36.170) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * xmir (~xmeer@cm-84.208.159.149.getinternet.no) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Esmil (esmil@horus.0x90.dk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * wonko_be_ (bernard@november.openminds.be) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * aliguori (~anthony@cpe-70-112-153-179.austin.res.rr.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * nwat (~nwat@eduroam-237-79.ucsc.edu) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * xarses (~andreww@204.11.231.50.static.etheric.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * clayb (~kvirc@199.172.169.79) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * lxo (~aoliva@lxo.user.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * psiekl (psiekl@wombat.eu.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * allsystemsarego (~allsystem@5-12-37-158.residential.rdsnet.ro) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Meths (~meths@2.25.193.204) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * sglwlb (~sglwlb@221.12.27.202) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * athrift (~nz_monkey@203.86.205.13) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jantje (~jan@paranoid.nl) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * twx (~twx@rosamoln.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * john_barbee (~jbarbee@23-25-46-97-static.hfc.comcastbusiness.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * chamings (~jchaming@134.134.139.70) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * yeled (~yeled@spodder.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * zjohnson (~zjohnson@guava.jsy.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * joelio (~Joel@88.198.107.214) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Zethrok (~martin@95.154.26.34) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * houkouonchi-home (~linux@pool-108-38-63-48.lsanca.fios.verizon.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * yehuda_hm (~yehuda@2602:306:330b:1410:ec0d:7518:62da:7c01) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * angdraug (~angdraug@204.11.231.50.static.etheric.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * sagelap1 (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * DarkAce-Z (~BillyMays@50.107.55.36) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * vipr (~vipr@frederik.pw) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Xiol_ (~Xiol@94-193-254-111.zone7.bethere.co.uk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * shdb (~shdb@80-219-0-163.dclient.hispeed.ch) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * X3NQ (~X3NQ@195.191.107.205) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * cofol1986 (~xwrj@110.90.119.113) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * LeaChim (~LeaChim@97e00ac2.skybroadband.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * terje- (~root@135.109.220.9) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * hflai_ (~hflai@alumni.cs.nctu.edu.tw) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * thelan (~thelan@paris.servme.fr) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * markl (~mark@tpsit.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jmlowe (~Adium@2601:d:a800:97:34ed:df80:912:bb08) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * yo61 (~yo61@lin001.yo61.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * aardvark (~Warren@2607:f298:a:607:d1d:b885:7622:d9bd) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * rootard (~rootard@pirlshell.lpl.arizona.edu) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * decede (~deaced@178.78.113.112) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * ismell (~ismell@host-24-56-171-198.beyondbb.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * terje_ (~joey@174-16-125-70.hlrn.qwest.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * sig_wall (~adjkru@185.14.185.91) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * `10` (~10@juke.fm) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * dalegaard (~dalegaard@vps.devrandom.dk) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * m0zes (~mozes@beocat.cis.ksu.edu) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * todin (tuxadero@kudu.in-berlin.de) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * nwl (~levine@atticus.yoyo.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * _Tass4da1 (~tassadar@tassadar.xs4all.nl) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * maciek (maciek@2001:41d0:2:2218::dead) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * lurbs (user@uber.geek.nz) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * jtang (~jtang@sgenomics.org) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Anticimex (anticimex@95.80.32.80) Quit (weber.oftc.net resistance.oftc.net)
[21:19] * Svedrin (svedrin@ketos.funzt-halt.net) Quit (weber.oftc.net resistance.oftc.net)
[21:21] * jeff-YF (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) has joined #ceph
[21:21] * kyann (~kyann@did75-15-88-160-187-237.fbx.proxad.net) has joined #ceph
[21:21] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[21:21] * dosaboy_ (~dosaboy@host109-145-44-0.range109-145.btcentralplus.com) has joined #ceph
[21:21] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[21:21] * aliguori (~anthony@cpe-70-112-153-179.austin.res.rr.com) has joined #ceph
[21:21] * yehuda_hm (~yehuda@2602:306:330b:1410:ec0d:7518:62da:7c01) has joined #ceph
[21:21] * nwat (~nwat@eduroam-237-79.ucsc.edu) has joined #ceph
[21:21] * sleinen1 (~Adium@2001:620:0:26:29dd:86a4:8a51:6673) has joined #ceph
[21:21] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[21:21] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[21:21] * alram (~alram@38.122.20.226) has joined #ceph
[21:21] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[21:21] * angdraug (~angdraug@204.11.231.50.static.etheric.net) has joined #ceph
[21:21] * sagelap1 (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[21:21] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[21:21] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[21:21] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[21:21] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[21:21] * Vjarjadian (~IceChat77@176.254.37.210) has joined #ceph
[21:21] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[21:21] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[21:21] * clayb (~kvirc@199.172.169.79) has joined #ceph
[21:21] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[21:21] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:21] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[21:21] * joao (~joao@89-181-152-211.net.novis.pt) has joined #ceph
[21:21] * DarkAce-Z (~BillyMays@50.107.55.36) has joined #ceph
[21:21] * Macheske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[21:21] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[21:21] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[21:21] * vipr (~vipr@frederik.pw) has joined #ceph
[21:21] * Xiol_ (~Xiol@94-193-254-111.zone7.bethere.co.uk) has joined #ceph
[21:21] * psiekl (psiekl@wombat.eu.org) has joined #ceph
[21:21] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[21:21] * shdb (~shdb@80-219-0-163.dclient.hispeed.ch) has joined #ceph
[21:21] * X3NQ (~X3NQ@195.191.107.205) has joined #ceph
[21:21] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[21:21] * LeaChim (~LeaChim@97e00ac2.skybroadband.com) has joined #ceph
[21:21] * allsystemsarego (~allsystem@5-12-37-158.residential.rdsnet.ro) has joined #ceph
[21:21] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[21:21] * terje- (~root@135.109.220.9) has joined #ceph
[21:21] * ofu (ofu@dedi3.fuckner.net) has joined #ceph
[21:21] * hflai_ (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[21:21] * jksM (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[21:21] * fireD (~fireD@93-139-163-132.adsl.net.t-com.hr) has joined #ceph
[21:21] * nhm (~nhm@184-97-187-196.mpls.qwest.net) has joined #ceph
[21:21] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) has joined #ceph
[21:21] * PITon (~pavel@195.182.195.107) has joined #ceph
[21:21] * mxmln (~maximilia@212.79.49.65) has joined #ceph
[21:21] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[21:21] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[21:21] * capri_on (~capri@212.218.127.222) has joined #ceph
[21:21] * thelan (~thelan@paris.servme.fr) has joined #ceph
[21:21] * NaioN (stefan@andor.naion.nl) has joined #ceph
[21:21] * gregmark (~Adium@68.87.42.115) has joined #ceph
[21:21] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[21:21] * markl (~mark@tpsit.com) has joined #ceph
[21:21] * skm (~smiley@205.153.36.170) has joined #ceph
[21:21] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[21:21] * gregaf (~Adium@2607:f298:a:607:c501:9f75:49ae:ffe5) has joined #ceph
[21:21] * jmlowe (~Adium@2601:d:a800:97:34ed:df80:912:bb08) has joined #ceph
[21:21] * zackc (~zack@0001ba60.user.oftc.net) has joined #ceph
[21:21] * decede (~deaced@178.78.113.112) has joined #ceph
[21:21] * rsanti (~rsanti@74.125.122.33) has joined #ceph
[21:21] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[21:21] * janisg (~troll@85.254.50.23) has joined #ceph
[21:21] * Meths (~meths@2.25.193.204) has joined #ceph
[21:21] * kislotniq (~kislotniq@193.93.77.54) has joined #ceph
[21:21] * yo61 (~yo61@lin001.yo61.net) has joined #ceph
[21:21] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[21:21] * sglwlb (~sglwlb@221.12.27.202) has joined #ceph
[21:21] * dlan (~dennis@116.228.88.131) has joined #ceph
[21:21] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[21:21] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) has joined #ceph
[21:21] * mynameisbruce (~mynameisb@tjure.netzquadrat.de) has joined #ceph
[21:21] * alexxy[home] (~alexxy@79.173.81.171) has joined #ceph
[21:21] * godog (~filo@0001309c.user.oftc.net) has joined #ceph
[21:21] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[21:21] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[21:21] * lupine (~lupine@lupine.me.uk) has joined #ceph
[21:21] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[21:21] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[21:21] * [fred] (fred@konfuzi.us) has joined #ceph
[21:21] * aardvark (~Warren@2607:f298:a:607:d1d:b885:7622:d9bd) has joined #ceph
[21:21] * athrift (~nz_monkey@203.86.205.13) has joined #ceph
[21:21] * rootard (~rootard@pirlshell.lpl.arizona.edu) has joined #ceph
[21:21] * LCF (ball8@193.231.broadband16.iol.cz) has joined #ceph
[21:21] * adam4 (~adam@46-65-111-12.zone16.bethere.co.uk) has joined #ceph
[21:21] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[21:21] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has joined #ceph
[21:21] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[21:21] * zynzel (zynzel@spof.pl) has joined #ceph
[21:21] * loicd (~loicd@bouncer.dachary.org) has joined #ceph
[21:21] * ggreg (~ggreg@int.0x80.net) has joined #ceph
[21:21] * tserong (~tserong@58-6-101-181.dyn.iinet.net.au) has joined #ceph
[21:21] * niklas (niklas@vm15.hadiko.de) has joined #ceph
[21:21] * jerker (jerker@82ee1319.test.dnsbl.oftc.net) has joined #ceph
[21:21] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[21:21] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[21:21] * jantje (~jan@paranoid.nl) has joined #ceph
[21:21] * mjblw (~mbaysek@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[21:21] * ismell (~ismell@host-24-56-171-198.beyondbb.com) has joined #ceph
[21:21] * MooingLemur (~troy@phx-pnap.pinchaser.com) has joined #ceph
[21:21] * piti (~piti@82.246.190.142) has joined #ceph
[21:21] * bstillwell (~bryan@bokeoa.com) has joined #ceph
[21:21] * terje_ (~joey@174-16-125-70.hlrn.qwest.net) has joined #ceph
[21:21] * paravoid (~paravoid@scrooge.tty.gr) has joined #ceph
[21:21] * sig_wall (~adjkru@185.14.185.91) has joined #ceph
[21:21] * `10` (~10@juke.fm) has joined #ceph
[21:21] * twx (~twx@rosamoln.org) has joined #ceph
[21:21] * ntranger (~ntranger@proxy2.wolfram.com) has joined #ceph
[21:21] * Frank9999 (~Frank@kantoor.transip.nl) has joined #ceph
[21:21] * iii8 (~Miranda@91.207.132.71) has joined #ceph
[21:21] * itatar (~itatar@209.6.175.46) has joined #ceph
[21:21] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[21:21] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[21:21] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[21:21] * lmb (lmb@212.8.204.10) has joined #ceph
[21:21] * jefferai (~quassel@corkblock.jefferai.org) has joined #ceph
[21:21] * baffle (baffle@jump.stenstad.net) has joined #ceph
[21:21] * dalegaard (~dalegaard@vps.devrandom.dk) has joined #ceph
[21:21] * m0zes (~mozes@beocat.cis.ksu.edu) has joined #ceph
[21:21] * john_barbee (~jbarbee@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[21:21] * mdjunaid (uid13426@ealing.irccloud.com) has joined #ceph
[21:21] * chamings (~jchaming@134.134.139.70) has joined #ceph
[21:21] * \ask (~ask@oz.develooper.com) has joined #ceph
[21:21] * wogri (~wolf@nix.wogri.at) has joined #ceph
[21:21] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[21:21] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[21:21] * xmir (~xmeer@cm-84.208.159.149.getinternet.no) has joined #ceph
[21:21] * yeled (~yeled@spodder.com) has joined #ceph
[21:21] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[21:21] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[21:21] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[21:21] * mjevans- (~mje@209.141.34.79) has joined #ceph
[21:21] * darkfader (~floh@88.79.251.60) has joined #ceph
[21:21] * Guest3115 (~coyo@thinks.outside.theb0x.org) has joined #ceph
[21:21] * brother (foobaz@2a01:7e00::f03c:91ff:fe96:ab16) has joined #ceph
[21:21] * sakari (sakari@turn.ip.fi) has joined #ceph
[21:21] * absynth (~absynth@irc.absynth.de) has joined #ceph
[21:21] * iggy_ (~iggy@theiggy.com) has joined #ceph
[21:21] * zjohnson (~zjohnson@guava.jsy.net) has joined #ceph
[21:21] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[21:21] * yehudasa (~yehudasa@2607:f298:a:607:d6be:d9ff:fe8e:174c) has joined #ceph
[21:21] * _Tass4da1 (~tassadar@tassadar.xs4all.nl) has joined #ceph
[21:21] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[21:21] * al (quassel@niel.cx) has joined #ceph
[21:21] * maciek (maciek@2001:41d0:2:2218::dead) has joined #ceph
[21:21] * joelio (~Joel@88.198.107.214) has joined #ceph
[21:21] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[21:21] * Kdecherf (~kdecherf@shaolan.kdecherf.com) has joined #ceph
[21:21] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[21:21] * Jakdaw (~chris@puma-mxisp.mxtelecom.com) has joined #ceph
[21:21] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[21:21] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[21:21] * sbadia (~sbadia@yasaw.net) has joined #ceph
[21:21] * maswan (maswan@kennedy.acc.umu.se) has joined #ceph
[21:21] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[21:21] * jnq (~jon@0001b7cc.user.oftc.net) has joined #ceph
[21:21] * soren (~soren@hydrogen.linux2go.dk) has joined #ceph
[21:21] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[21:21] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[21:21] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[21:21] * SubOracle (~quassel@00019f1e.user.oftc.net) has joined #ceph
[21:21] * Rocky (~r.nap@188.205.52.204) has joined #ceph
[21:21] * tomaw (tom@tomaw.netop.oftc.net) has joined #ceph
[21:21] * cephalobot (~ceph@ds2390.dreamservers.com) has joined #ceph
[21:21] * Elbandi (~ea333@elbandi.net) has joined #ceph
[21:21] * nyerup (irc@jespernyerup.dk) has joined #ceph
[21:21] * elmo (~james@faun.canonical.com) has joined #ceph
[21:21] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[21:21] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) has joined #ceph
[21:21] * seif (uid11725@ealing.irccloud.com) has joined #ceph
[21:21] * phantomcircuit (~phantomci@covertinferno.org) has joined #ceph
[21:21] * rtek (~sjaak@rxj.nl) has joined #ceph
[21:21] * cce (~cce@50.56.54.167) has joined #ceph
[21:21] * Ludo__ (~Ludo@falbala.zoxx.net) has joined #ceph
[21:21] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[21:21] * Meyer^_ (meyer@c64.org) has joined #ceph
[21:21] * _are_ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[21:21] * vhasi (vhasi@vha.si) has joined #ceph
[21:21] * MrNPP (~MrNPP@216.152.240.194) has joined #ceph
[21:21] * partner (joonas@ajaton.net) has joined #ceph
[21:21] * r0r_taga (~nick@greenback.pod4.org) has joined #ceph
[21:21] * wonko_be_ (bernard@november.openminds.be) has joined #ceph
[21:21] * Esmil (esmil@horus.0x90.dk) has joined #ceph
[21:21] * houkouonchi-home (~linux@pool-108-38-63-48.lsanca.fios.verizon.net) has joined #ceph
[21:21] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[21:21] * jtang (~jtang@sgenomics.org) has joined #ceph
[21:21] * Zethrok (~martin@95.154.26.34) has joined #ceph
[21:21] * Anticimex (anticimex@95.80.32.80) has joined #ceph
[21:21] * lurbs (user@uber.geek.nz) has joined #ceph
[21:21] * Zethrok (~martin@95.154.26.34) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * zjohnson (~zjohnson@guava.jsy.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * chamings (~jchaming@134.134.139.70) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * john_barbee (~jbarbee@23-25-46-97-static.hfc.comcastbusiness.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * twx (~twx@rosamoln.org) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * sglwlb (~sglwlb@221.12.27.202) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * psiekl (psiekl@wombat.eu.org) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * clayb (~kvirc@199.172.169.79) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * xarses (~andreww@204.11.231.50.static.etheric.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * nwat (~nwat@eduroam-237-79.ucsc.edu) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * Meths (~meths@2.25.193.204) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * jantje (~jan@paranoid.nl) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * lxo (~aoliva@lxo.user.oftc.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * athrift (~nz_monkey@203.86.205.13) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * aliguori (~anthony@cpe-70-112-153-179.austin.res.rr.com) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * joelio (~Joel@88.198.107.214) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * houkouonchi-home (~linux@pool-108-38-63-48.lsanca.fios.verizon.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * yeled (~yeled@spodder.com) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * allsystemsarego (~allsystem@5-12-37-158.residential.rdsnet.ro) Quit (resistance.oftc.net oxygen.oftc.net)
[21:21] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (resistance.oftc.net oxygen.oftc.net)
[21:22] * ivan` (~ivan`@li125-242.members.linode.com) has joined #ceph
[21:22] * brambles (lechuck@s0.barwen.ch) has joined #ceph
[21:22] * ChanServ sets mode +o dmick
[21:23] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[21:24] * itatar (~itatar@209.6.175.46) has left #ceph
[21:24] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[21:24] * sprachgenerator (~sprachgen@130.202.135.180) has joined #ceph
[21:24] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[21:24] * via (~via@smtp2.matthewvia.info) has joined #ceph
[21:24] * mjeanson_ (~mjeanson@bell.multivax.ca) has joined #ceph
[21:24] * LPG_ (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[21:24] * chiluk (~chiluk@cpe-70-124-70-187.austin.res.rr.com) has joined #ceph
[21:24] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[21:24] * cjh_ (~cjh@ps123903.dreamhost.com) has joined #ceph
[21:24] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[21:24] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[21:24] * barryo (~borourke@cumberdale.ph.ed.ac.uk) has joined #ceph
[21:24] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[21:24] * torment2 (~torment@pool-72-91-185-71.tampfl.fios.verizon.net) has joined #ceph
[21:24] * miniyo (~miniyo@0001b53b.user.oftc.net) has joined #ceph
[21:24] * cfreak200 (~cfreak200@p4FF3F540.dip0.t-ipconnect.de) has joined #ceph
[21:24] * [caveman] (~quassel@boxacle.net) has joined #ceph
[21:24] * med (~medberry@ec2-50-17-21-207.compute-1.amazonaws.com) has joined #ceph
[21:24] * a2_ (~avati@ip-86-181-132-209.redhat.com) has joined #ceph
[21:24] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[21:24] * iggy (~iggy@theiggy.com) has joined #ceph
[21:24] * Enigmagic (~nathan@c-98-234-189-23.hsd1.ca.comcast.net) has joined #ceph
[21:24] * nigwil (~idontknow@174.143.209.84) has joined #ceph
[21:24] * L2SHO (~adam@office-nat.choopa.net) has joined #ceph
[21:24] * jpieper (~josh@209-6-205-161.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) has joined #ceph
[21:24] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[21:24] * tdb (~tdb@willow.kent.ac.uk) has joined #ceph
[21:25] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[21:25] * aliguori (~anthony@cpe-70-112-153-179.austin.res.rr.com) has joined #ceph
[21:25] * nwat (~nwat@eduroam-237-79.ucsc.edu) has joined #ceph
[21:25] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[21:25] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[21:25] * clayb (~kvirc@199.172.169.79) has joined #ceph
[21:25] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:25] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[21:25] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[21:25] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[21:25] * psiekl (psiekl@wombat.eu.org) has joined #ceph
[21:25] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) has joined #ceph
[21:25] * Meths (~meths@2.25.193.204) has joined #ceph
[21:25] * sglwlb (~sglwlb@221.12.27.202) has joined #ceph
[21:25] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[21:25] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[21:25] * athrift (~nz_monkey@203.86.205.13) has joined #ceph
[21:25] * jantje (~jan@paranoid.nl) has joined #ceph
[21:25] * twx (~twx@rosamoln.org) has joined #ceph
[21:25] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[21:25] * john_barbee (~jbarbee@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[21:25] * chamings (~jchaming@134.134.139.70) has joined #ceph
[21:25] * yeled (~yeled@spodder.com) has joined #ceph
[21:25] * zjohnson (~zjohnson@guava.jsy.net) has joined #ceph
[21:25] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[21:25] * joelio (~Joel@88.198.107.214) has joined #ceph
[21:25] * houkouonchi-home (~linux@pool-108-38-63-48.lsanca.fios.verizon.net) has joined #ceph
[21:25] * Zethrok (~martin@95.154.26.34) has joined #ceph
[21:25] * ChanServ sets mode +v scuttlemonkey
[21:25] * DarkAce-Z (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[21:25] * ChanServ sets mode +o ircolle
[21:26] * b1tbkt (~b1tbkt@24-217-192-155.dhcp.stls.mo.charter.com) Quit (Remote host closed the connection)
[21:28] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[21:39] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[21:39] <dmsimard> alfredodeza: ping
[21:39] <alfredodeza> pong
[21:40] <dmsimard> Using ceph-deploy, is there a ceph.conf held somewhere with the list of hosts .. ?
[21:40] <dmsimard> I'm looking through docs and it looks like there would be a "master ceph.conf" file held somewhere
[21:40] <alfredodeza> like in the current directory?
[21:41] * xarses giggles
[21:41] <dmsimard> There's a ceph.conf there, but I don't see any hosts in it.
[21:41] <alfredodeza> so you usually call `ceph-deploy new {hosts}`
[21:41] <alfredodeza> no?
[21:42] <dmsimard> Well I have a running cluster, deployed with ceph-deploy from an "admin" node and I have this in the ceph.conf: http://pastebin.com/aXjnLZf7
[21:42] <dmsimard> It looks like there should be entries for OSDs ? Like so
[21:42] <dmsimard> [osd.1]
[21:42] <dmsimard> host = {hostname}
[21:43] <xarses> dmsimard, apparently ceph-deploy has a lot of undocumented magic
[21:43] <xarses> and it doesn't need any of that cruft =)
[21:43] <alfredodeza> it is indeed very poorly documented; I've been working hard to make it stable so I can then bring the docs up to speed :)
[21:44] <dmsimard> Oh okay, so I guess I'm looking for something that doesn't exist if you're using ceph-deploy ?
[21:44] <xarses> ceph-deploy new mon1 ... monX
[21:45] <xarses> will end up creating lines mon_host = mon1 ... monX
[21:45] <dmsimard> Yeah, I have that line
[21:45] <xarses> and mon_initial_members = mon1 .. monX
[21:45] <dmsimard> What about OSDs, though ?
[21:45] <xarses> otherwise nothing is needed
[21:46] <xarses> and i still haven't gotten good information about adding monitors later
[21:46] <xarses> the osd's will discover each other via the monitors
[21:46] <alfredodeza> dmsimard: from playing around with ceph-deploy and osd it doesn't seem to alter ceph.conf to add that
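In other words, the ceph.conf that `ceph-deploy new mon1 mon2 mon3` writes only needs the global monitor bits, roughly the sketch below (the fsid and addresses are placeholders); no per-OSD sections are required because the OSDs find each other through the monitors:
    [global]
    fsid = <generated uuid>
    mon_initial_members = mon1, mon2, mon3
    mon_host = 10.0.0.1,10.0.0.2,10.0.0.3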
[21:48] <xarses> alfredodeza: when doing ceph-deploy mon create ...
[21:48] <xarses> should it update mon_host or mon_initial_members ?
[21:48] <xarses> it complains that --overwrite-conf is needed
[21:48] <xarses> but it doesn't appear to change the conf file
[21:51] <alfredodeza> really
[21:51] <alfredodeza> hrmn
[21:52] <xarses> when i finish rebuilding my env I'll check some more
[21:52] <alfredodeza> great
[21:52] <xarses> oh, fair note: that was using 1.0.0
[21:52] <alfredodeza> do ping me so I can check too
[21:52] <alfredodeza> oh yeah don't do that :)
[21:52] <alfredodeza> use the latest one
[21:52] <xarses> we just bumped 1.2.3
[21:52] <alfredodeza> excellent
[21:53] <xarses> and even though I'll get the dirty eyes for it, we are wrapping ceph-deploy with puppet
[21:55] * LeaChim (~LeaChim@97e00ac2.skybroadband.com) Quit (Ping timeout: 480 seconds)
[21:55] * Lea (~LeaChim@97e00ac2.skybroadband.com) Quit (Ping timeout: 480 seconds)
[21:55] <alfredodeza> noooooooo
[21:56] <alfredodeza> xarses: second paragraph: https://github.com/ceph/ceph-deploy#what-this-tool-is-not
[21:57] <alfredodeza> the idea is to make it as useful as possible for someone that wants to get started
[21:57] <alfredodeza> and nothing more than that
[21:57] <xarses> yes, well your tool also maintains best practices for starting the monitors and setting up the osd's
[21:57] <sagewk> and to serve as a reference for implementors of orchestration scripts like puppet/chef/juju
[21:57] <alfredodeza> what sagewk just said
[21:57] <alfredodeza> sure, xarses, I understand that point too
[21:58] <xarses> so we use ceph-deploy to roll the monitors and osd's, and puppet handles the order of operations, config file tweaks, and the like
[21:58] <alfredodeza> nice
[21:59] <alfredodeza> that is an interesting combination
[21:59] <xarses> and then we stuff it into openstack
[21:59] <xarses> hook into glance, cinder, and nova-compute
[22:04] * Lea (~LeaChim@05407724.skybroadband.com) has joined #ceph
[22:05] * LeaChim (~LeaChim@05407724.skybroadband.com) has joined #ceph
[22:06] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[22:06] * ChanServ sets mode +v andreask
[22:08] <zackc> sagewk: ceph-coverage ends up in $PATH, right?
[22:10] <dmsimard> How can I tell if rbd writeback caching is enabled or not? I have "rbd cache = true" in ceph.conf but I'm unsure if it's taken into account.
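One way to check, assuming the client has an admin socket configured (the [client] settings and socket path below are examples; use whatever path your client actually creates):

    # ceph.conf on the client host, something like:
    #   [client]
    #   rbd cache = true
    #   admin socket = /var/run/ceph/$cluster-$name.$pid.asok
    # then query the running client's socket:
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok config show | grep rbd_cache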
[22:11] * b1tbkt (~b1tbkt@24-217-192-155.dhcp.stls.mo.charter.com) has joined #ceph
[22:12] <zackc> sagewk: because https://github.com/ceph/teuthology/pull/78
[22:13] * b1tbkt (~b1tbkt@24-217-192-155.dhcp.stls.mo.charter.com) Quit (Remote host closed the connection)
[22:13] <sagewk> zackc: not sure; if it's in ceph-tests package, yeah. otherwise, the stuff that teuthology pushes (like daemon-helper) could be installed in /usr/local/bin instead of testdir...
[22:14] * b1tbkt (~b1tbkt@24-217-192-155.dhcp.stls.mo.charter.com) has joined #ceph
[22:15] <sagewk> zackc: oh i see, this does that already. looks good!
[22:16] <zackc> sagewk: ah looks like it's ./ceph/src/ceph-coverage.in
[22:16] <sagewk> it gets installed into /usr/bin by the ceph-tests package
[22:16] <sagewk> which the install task installs. so i think we're good.
[22:16] <zackc> awesome.
[22:16] <zackc> that PR ^^ should make things a bit nicer.
[22:16] * jeff-YF_ (~jeffyf@67.23.123.228) has joined #ceph
[22:20] * jeff-YF (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[22:20] * jeff-YF_ is now known as jeff-YF
[22:24] <sagewk> zackc: yep. i'll merge it!
[22:25] * jeff-YF_ (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) has joined #ceph
[22:26] <odyssey4me> Has anyone seen a statvfs call against a pool hang the client entirely?
[22:27] <odyssey4me> I see this - http://tracker.ceph.com/issues/3793 - but this is just a sizing response problem - I'm experiencing a complete hang from the client issuing the request.
[22:29] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[22:30] * jeff-YF (~jeffyf@67.23.123.228) Quit (Ping timeout: 480 seconds)
[22:31] <sagewk> dmick: https://github.com/ceph/ceph/pull/569
[22:33] * jeff-YF_ (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[22:35] * DarkAce-Z (~BillyMays@50.107.55.36) has joined #ceph
[22:38] * aardvark (~Warren@2607:f298:a:607:d1d:b885:7622:d9bd) Quit (Ping timeout: 480 seconds)
[22:38] * aardvark (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) has joined #ceph
[22:38] * wusui (~Warren@2607:f298:a:607:d1d:b885:7622:d9bd) Quit (Ping timeout: 480 seconds)
[22:39] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:41] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[22:42] <alphe> hello all sagewk are you around ?
[22:42] <alphe> I have a comment to make about a weird issue I experienced this week ...
[22:51] <alphe> My ceph cluster nodes have 2 gigabit LANs, one public with DHCP and one private for replication, also with DHCP. The private LAN's DHCP was misconfigured and was broadcasting a competing default gateway
[22:51] <alphe> so in that case ceph wasn't able to see a problem
[22:51] * DarkAce-Z (~BillyMays@50.107.55.36) Quit (Max SendQ exceeded)
[22:52] * WarrenUsui (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) has joined #ceph
[22:52] <alphe> because it could connect to the nodes over the second route even though they couldn't connect to the other nodes
[22:52] <alphe> because their default gateway was misconfigured
[22:53] * nwat (~nwat@eduroam-237-79.ucsc.edu) Quit (Ping timeout: 480 seconds)
[22:53] <alphe> the problem arises at the client level ... when you transfer data to one of those misrouted nodes the ceph-fuse client suddenly freezes and only a reboot can solve the problem
[22:54] <alphe> on the client side, but only until it freezes again
[22:55] <alphe> so to definitively solve the problem you have to remove the "option routers" line from the config file of the second LAN's DHCP server
[22:56] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[22:57] <alphe> I don't know if it would be useful to add a mechanism so ceph nodes can report "I can talk to and read from the network"
[22:57] <alphe> actually a ceph node is considered dead if it can't be reached
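For illustration only (all addresses made up), the fix alphe describes amounts to leaving the "option routers" statement out of the dhcpd.conf subnet that serves the replication LAN, with ceph.conf keeping the two networks separate:

    # dhcpd.conf for the private replication LAN: no "option routers" here,
    # so the nodes never learn a second default gateway from it
    subnet 10.0.1.0 netmask 255.255.255.0 {
        range 10.0.1.100 10.0.1.200;
    }

    # ceph.conf [global] on the nodes
    public network = 192.168.0.0/24
    cluster network = 10.0.1.0/24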
[23:04] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[23:06] <sagewk> zackc: yehudasa: https://github.com/ceph/teuthology/pull/79
[23:06] * wrale (~wrale@wrk-28-217.cs.wright.edu) has joined #ceph
[23:08] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[23:10] <zackc> sagewk: done
[23:11] <sagewk> thanks!
[23:12] <zackc> np!
[23:13] <xarses> alfredodeza: https://github.com/ceph/ceph-deploy/pull/66, Kudos! maybe i can remove my gatherkeys loop =)
[23:13] <Karcaw> I just upgraded to cuttlefish, and most things worked fine, but one osd is complaining about talking to the wrong node:
[23:13] * sjustlaptop (~sam@38.122.20.226) has joined #ceph
[23:13] <Karcaw> 2013-09-06 14:13:36.172847 7f9770b79700 0 -- 0.0.0.0:6801/6631 >> 130.20.232.47:6801/2707 pipe(0x38c8a00 sd=27 :39600 s=1 pgs=0 cs=0 l=0 c=0x25865c0).connect claims to be 130.20.232.47:6801/8096 not 130.20.232.47:6801/2707 - wrong node!
[23:14] <Karcaw> how do i get it to relearn where the new osd services are running?
[23:23] <alphe> ceph-deploy osd activate hostname probably ?
[23:24] <dmick> Karcaw: that can be a sign that the osd died and restarted
[23:24] <alphe> or you ceph-deploy osd delete (or destroy) the previous one, then create a new one and activate it with the same tool ...
[23:24] <dmick> if it keeps happening look for the osd flapping
[23:24] <dmick> otherwise ignore it
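If it does come to removing and recreating the OSD, the usual manual sequence is roughly the following (osd.12, host1 and the disk path are stand-ins, and the data on that OSD is abandoned):

    ceph osd out 12                        # let the cluster rebalance away from it
    service ceph stop osd.12               # stop the daemon on its host (init system may differ)
    ceph osd crush remove osd.12           # drop it from the CRUSH map
    ceph auth del osd.12                   # remove its cephx key
    ceph osd rm 12                         # remove it from the osdmap
    ceph-deploy osd create host1:/dev/sdb  # recreate it, e.g. with ceph-deploy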
[23:27] * vata (~vata@2607:fad8:4:6:c0c6:4504:1641:57af) Quit (Quit: Leaving.)
[23:31] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[23:32] <alfredodeza> xarses thank you sir :)
[23:32] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[23:33] <xarses> currently I have a loop that retries ceph-deploy gatherkeys localhost up to 60 times to try and catch the monitor race
[23:33] <alfredodeza> booo
[23:33] <alfredodeza> that is unfortunate
[23:33] <xarses> it usually catches it at around 20 seconds
[23:33] <alfredodeza> you should totally tell me what your pain points are so I can improve this
[23:34] <xarses> that was one
[23:34] <xarses> the next one is the sequential monitor additions that I need to test 1.2.3 against
[23:34] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[23:34] <xarses> and lvm support, but we hacked around that
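A minimal sketch of the kind of retry loop xarses describes, assuming it is fine to simply re-run gatherkeys until the keyrings appear (the keyring filename may vary by version):

    for i in $(seq 1 60); do
        ceph-deploy gatherkeys localhost
        [ -f ceph.client.admin.keyring ] && break   # stop once the admin keyring shows up
        sleep 1
    done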
[23:38] <Karcaw> dmick: all the osd's restarted recently, but this one comes up confused, and then never comes back into the whole system.
[23:39] <alphe> karcaw you can still remove it and recreate it
[23:39] <alphe> this is probably the fastest way
[23:39] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[23:39] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[23:39] <Karcaw> so just forget about the data on it and re-create it?
[23:39] <dmick> Karcaw: but is it restarting?
[23:40] <dmick> does it dump core? does it log about problems?
[23:40] <Karcaw> not on its own.. it just sits and logs messages about wrong node.. I can restart it manually, but it does the same thing
[23:40] <dmick> hm. that doesn't make sense
[23:42] <dmick> actually, so, 130.20.232.47:6801 is the confused one. look in osd dump and see if that's "itself" or some other node
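To see which OSD owns 130.20.232.47:6801 and whether its nonce keeps changing, something like:

    ceph osd dump | grep 130.20.232.47
    # each osd.N line shows its addresses as ip:port/nonce; if the nonce after
    # :6801 changes between runs, that daemon is restarting (i.e. flapping)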
[23:42] <alfredodeza> xarses: this is how it will look now: http://fpaste.org/37723/37850374/
[23:44] * BillK (~BillK-OFT@124-171-168-171.dyn.iinet.net.au) has joined #ceph
[23:44] <Karcaw> .47 is another node.
[23:44] <Karcaw> the complaining node is .88
[23:44] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has left #ceph
[23:45] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[23:45] <xarses> alfredodeza: can we block with a TTL ceiling for quorum?
[23:46] <xarses> but that's very nice
[23:50] <dmick> Karcaw: is the osd on .47 flapping
[23:50] * cofol1986 (~xwrj@110.90.119.113) Quit (Read error: Connection reset by peer)
[23:50] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[23:53] <Karcaw> no, it's up and in.. there are 5 ost's there
[23:54] <dmick> ost's eh :)
[23:54] <Karcaw> doh..
[23:54] <dmick> it can be up and in, but still restarting for some dumb reason
[23:55] <dmick> but so: we think that .88 is complaining about .47 changing nonces, but .47 isn't flapping.
[23:55] <Karcaw> they have all been up for 3 hours
[23:55] <dmick> and .88 never makes enough progress to rejoin.
[23:56] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[23:59] * jskinner (~jskinner@69.170.148.179) Quit (Remote host closed the connection)
[23:59] <Karcaw> sounds about correct. in the log it complains every 15 seconds that there are 5 wrong nodes, all on the .47 node

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.