#ceph IRC Log


IRC Log for 2012-09-17

Timestamps are in GMT/BST.

[0:48] * BManojlovic (~steki@212.200.243.39) has joined #ceph
[0:48] * nhm (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[0:48] * nhmhome (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[1:08] * BManojlovic (~steki@212.200.243.39) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:32] * SvenDowideit (~SvenDowid@203-206-171-38.perm.iinet.net.au) has joined #ceph
[1:45] * jlogan (~Thunderbi@50-46-195-28.evrt.wa.frontiernet.net) Quit (Ping timeout: 480 seconds)
[1:48] * SvenDowideit (~SvenDowid@203-206-171-38.perm.iinet.net.au) Quit (Ping timeout: 480 seconds)
[1:56] * SvenDowideit (~SvenDowid@203-206-171-38.perm.iinet.net.au) has joined #ceph
[3:00] * danieagle (~Daniel@177.43.213.15) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[3:25] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[3:32] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[3:35] * loicd (~loic@magenta.dachary.org) has joined #ceph
[3:52] * jlogan (~Thunderbi@50-46-195-28.evrt.wa.frontiernet.net) has joined #ceph
[4:00] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[4:02] * loicd (~loic@magenta.dachary.org) has joined #ceph
[4:19] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) has left #ceph
[4:54] * scuttlemonkey (~scuttlemo@69.244.181.5) has joined #ceph
[5:09] * jlogan (~Thunderbi@50-46-195-28.evrt.wa.frontiernet.net) has left #ceph
[5:20] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[5:23] * loicd (~loic@magenta.dachary.org) has joined #ceph
[5:40] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[6:53] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[6:55] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:21] * EmilienM (~EmilienM@ADijon-654-1-133-33.w90-56.abo.wanadoo.fr) has joined #ceph
[7:29] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:31] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:34] * sjustlaptop (~sam@66-214-139-112.dhcp.gldl.ca.charter.com) has joined #ceph
[7:43] * sjustlaptop (~sam@66-214-139-112.dhcp.gldl.ca.charter.com) Quit (Read error: Operation timed out)
[7:46] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:49] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:14] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:17] * sjustlaptop (~sam@66-214-139-112.dhcp.gldl.ca.charter.com) has joined #ceph
[8:25] * sjustlaptop (~sam@66-214-139-112.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[8:40] * ao (~ao@85.183.4.97) has joined #ceph
[8:50] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:14] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:18] * loicd (~loic@178.20.50.225) has joined #ceph
[9:28] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[9:29] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[9:32] * loicd (~loic@178.20.50.225) Quit (Quit: Leaving.)
[9:32] * loicd (~loic@178.20.50.225) has joined #ceph
[9:40] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[10:45] * deepsa_ (~deepsa@122.172.212.45) has joined #ceph
[10:46] * deepsa (~deepsa@122.172.171.218) Quit (Ping timeout: 480 seconds)
[10:46] * deepsa_ is now known as deepsa
[10:57] * loicd (~loic@178.20.50.225) Quit (Ping timeout: 480 seconds)
[11:11] * loicd (~loic@90.84.144.196) has joined #ceph
[11:15] * huangjun (~hjwsm1989@183.62.232.94) has joined #ceph
[11:20] <huangjun> recently I wanted to write a simple app based on rados, but the link step failed. I only include the librados.h file; should I add librados.so by specifying -L/usr/lib/librados.so on the gcc command line?
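
A minimal sketch of the usual fix for this, assuming the librados development package is installed; the file names are illustrative. The linker wants -lrados rather than the .so path passed to -L:

    # link against librados with -lrados
    gcc -o myapp myapp.c -lrados
    # if the library sits in a non-standard prefix, -L takes the directory, not the file:
    gcc -o myapp myapp.c -L/opt/ceph/lib -lrados -Wl,-rpath,/opt/ceph/lib
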
[11:33] * Tobarja (~athompson@cpe-071-075-064-255.carolina.res.rr.com) Quit (Quit: Leaving.)
[12:18] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[12:21] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) Quit (Quit: Leaving.)
[12:28] * loicd (~loic@90.84.144.196) Quit (Ping timeout: 480 seconds)
[12:38] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[12:39] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Read error: No route to host)
[12:39] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[12:41] * loicd1 (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[12:43] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Read error: Operation timed out)
[12:46] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[12:47] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit ()
[12:49] * loicd1 (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[12:55] * deepsa_ (~deepsa@101.63.233.66) has joined #ceph
[12:56] * deepsa (~deepsa@122.172.212.45) Quit (Ping timeout: 480 seconds)
[12:56] * deepsa_ is now known as deepsa
[13:43] * deepsa_ (~deepsa@122.167.169.66) has joined #ceph
[13:49] * deepsa (~deepsa@101.63.233.66) Quit (Ping timeout: 480 seconds)
[13:49] * deepsa_ is now known as deepsa
[14:02] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) has joined #ceph
[14:07] * loicd (~loic@178.20.50.225) has joined #ceph
[14:07] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[14:14] * deepsa (~deepsa@122.167.169.66) Quit (Ping timeout: 480 seconds)
[14:14] * deepsa (~deepsa@115.241.121.85) has joined #ceph
[14:28] * scuttlemonkey (~scuttlemo@69.244.181.5) Quit (Quit: zzzzzzzzzzzzzzzzzzzz)
[14:29] * deepsa_ (~deepsa@122.172.161.188) has joined #ceph
[14:30] * deepsa (~deepsa@115.241.121.85) Quit (Ping timeout: 480 seconds)
[14:30] * deepsa_ is now known as deepsa
[15:00] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[15:01] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[15:05] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[15:07] <elder> nhm, ping
[15:11] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[15:11] * huangjun (~hjwsm1989@183.62.232.94) Quit (Read error: Connection reset by peer)
[15:11] * huangjun (~hjwsm1989@183.62.232.92) has joined #ceph
[15:11] * huangjun (~hjwsm1989@183.62.232.92) Quit (autokilled: This host may be infected. Mail support@oftc.net with questions. BOPM (2012-09-17 13:11:39))
[15:12] * huangjun (~hjwsm1989@183.62.232.94) has joined #ceph
[15:14] * scuttlemonkey (~scuttlemo@173-14-58-198-Michigan.hfc.comcastbusiness.net) has joined #ceph
[15:20] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[15:21] <tryggvil> Hi
[15:21] <tryggvil> seems that when starting up ceph, I'm running with 35 osd processes
[15:21] <tryggvil> and they exhaust the port range 6800-6900
[15:22] <tryggvil> is it possible to override this by setting something in the config?
[15:22] <tryggvil> I'm getting:
[15:22] <tryggvil> Starting Ceph osd.6 on s302...
[15:22] <tryggvil> starting osd.6 at :/0 osd_data /var/lib/ceph/osd/ceph-6 /var/lib/ceph/osd/ceph-6/journal
[15:22] <tryggvil> accepter.bind unable to bind to : on any port in range 6800-6900: Address already in use
[15:22] <tryggvil> failed: ' /usr/bin/ceph-osd -i 6 --pid-file /var/run/ceph/osd.6.pid -c /etc/ceph/ceph.conf '
[15:22] <elder> nhmhome, nhm ping
[15:23] * ao (~ao@85.183.4.97) Quit (Quit: Leaving)
[15:30] * EikiGQ (~EikiGQ@rtr1.tolvusky.sip.is) has joined #ceph
[15:34] <EikiGQ> A question about placement groups. The default is 8 per OSD but the docs suggest a much higher number per OSD. I have 32 disks in a server and want to have 30-32 OSDs. What are appropriate settings for OSD pg and pool pg when using the default of 2 copies
[15:39] <EikiGQ> ?
[15:44] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[15:45] * sagelap (~sage@74-92-11-25-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[15:52] * stass (stas@ssh.deglitch.com) Quit (Read error: Operation timed out)
[15:57] * stass (stas@ssh.deglitch.com) has joined #ceph
[16:06] * sagelap1 (~sage@74-92-11-25-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[16:06] * sagelap (~sage@74-92-11-25-NewEngland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[16:08] * loicd (~loic@178.20.50.225) Quit (Ping timeout: 480 seconds)
[16:09] * cblack101 (c0373727@ircip2.mibbit.com) has joined #ceph
[16:11] <cblack101> ping
[16:13] * tomaw (tom@tomaw.netop.oftc.net) Quit (Ping timeout: 600 seconds)
[16:15] <cblack101> I have my CEPH cluster set up and 'ceph -s' shows healthy, loaded some client bits, created an rbd device, 'rbd list' works, then I mapped that block device on the client with 'rbd map ', but when I try to mount the mapped device in a folder I get nada.... Am I approaching this incorrectly?
[16:16] <cblack101> 'mount /dev/rdb0 /mnt/ceph/' returns mount: special device /dev/rdb0 does not exist... Thoughts?
[16:19] * tomaw (tom@tomaw.netop.oftc.net) has joined #ceph
[16:23] * sagelap1 (~sage@74-92-11-25-NewEngland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[16:24] * sagelap (~sage@74-92-11-25-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[16:26] * nhmlap (~nhm@174-20-43-18.mpls.qwest.net) has joined #ceph
[16:26] <darkfader> hey, /dev/rdb0 is != /dev/rbd0
[16:27] <elder> Normally, yes.
[16:27] <elder> cblack101, I think that was directed at you. Check your spelling.
[16:28] <elder> You want rbd
[16:28] * huangjun (~hjwsm1989@183.62.232.94) Quit ()
[16:33] <cblack101> DOH, guess it would help if I didn't confuse rbd with rdb eh?
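
For reference, a minimal sketch of the sequence being discussed, with the device spelled rbd; the image name, size and mount point are illustrative, and a fresh image needs a filesystem before it can be mounted:

    rbd create myimage --size 10240     # size in MB; goes into the default 'rbd' pool
    rbd map myimage                     # should appear as /dev/rbd0
    mkfs.ext4 /dev/rbd0                 # a new image has no filesystem yet
    mount /dev/rbd0 /mnt/ceph
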
[16:48] * loicd (~loic@magenta.dachary.org) has joined #ceph
[16:52] * amatter (~amatter@209.63.136.130) has joined #ceph
[16:53] <amatter> Morning. I have an osd that has been running fine but now all of a sudden dies every time I start it. Log is here: http://pastebin.com/7nZp2Fui
[16:54] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:55] * sagelap1 (~sage@74-92-11-25-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[16:55] * sagelap (~sage@74-92-11-25-NewEngland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[16:55] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[16:58] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[16:59] * loicd (~loic@82.235.173.177) has joined #ceph
[17:03] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:04] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:11] * amatter (~amatter@209.63.136.130) Quit ()
[17:14] * sagelap1 (~sage@74-92-11-25-NewEngland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[17:18] * jlogan1 (~Thunderbi@2600:c00:3010:1:8934:ad93:2153:3a19) has joined #ceph
[17:22] * sagelap (~sage@74-92-11-25-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[17:38] * Tv_ (~tv@2607:f298:a:607:391b:b457:8e5c:c6ea) has joined #ceph
[17:39] * sagelap (~sage@74-92-11-25-NewEngland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[17:40] * gregaf (~Adium@2607:f298:a:607:cc76:3f7c:278c:b1f0) Quit (Quit: Leaving.)
[17:42] * gregaf (~Adium@2607:f298:a:607:fcc2:81f2:4b68:23fb) has joined #ceph
[17:48] <EikiGQ> Just a repeat of my question from before:
[17:48] <EikiGQ> A question about placement groups. The default is 8 per OSD but the docs suggest a much higher number per OSD. I have 32 disks in a server and want to have 30-32 OSDs. What are appropriate settings for OSD pg and pool pg when using the default of 2 copies?
[17:50] * amatter (~amatter@209.63.136.130) has joined #ceph
[17:53] <gregaf> EikiGQ: depends on how many pools you have, but generally you want an aggregate of 50-500 (that's a very loose bound on the high end — more PGs means more memory and more peers) PGs per OSD
[17:53] <gregaf> if you've got a simple setup where you're only using one pool, just do 100 PGs per OSD and be done with it
[17:54] <gregaf> there's more than one default, but if you're creating new pools the default is just 8, period — that's not a good number for anybody, so make sure you specify the values using "ceph osd pool create <pool> <pgnum>"
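
A sketch of that command for the setup being discussed (a single pool, roughly 30 OSDs, about 100 PGs per OSD); the pool name is illustrative:

    # ~30 OSDs x 100 PGs each -> about 3000 PGs for the one pool
    ceph osd pool create mypool 3000
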
[17:54] <amatter> sorry, was offline for a bit. Just checking if anyone may have responded to my osd failure issue in the meantime...
[17:56] <gregaf> amatter: you'll want to ping sjust when he gets on; looks like a LevelDB corruption issue...
[17:56] <amatter> gregaf: thanks
[17:57] <EikiGQ> gregaf: thanks. there were 2 pg params and the docs said they should be the same, which was a little bit confusing. So I guess we will create a new pool, but with pgnum = 100 x the number of OSDs?
[17:57] <gregaf> yeah
[17:58] <EikiGQ> gregaf: if we want to set it up within a ceph.conf though - what do the config lines look like? I don't remember seeing pool definition examples.
[17:59] <gregaf> you can't define new pools within the config file
[17:59] <EikiGQ> gregaf: ok
[17:59] <EikiGQ> gregaf: but PG for OSD's though?
[17:59] <gregaf> ?
[18:00] * BManojlovic (~steki@212.200.243.39) has joined #ceph
[18:00] <EikiGQ> gregaf: And a side question (related). There is no difference with the default pools (data and rbd), right? Just different names?
[18:00] <gregaf> right
[18:01] <EikiGQ> gregaf: About the PG for OSDs. I thought I read there was a per-OSD setting for PGs. But you are saying that PG settings are just for pools
[18:01] <gregaf> I still don't know what you mean, sorry
[18:01] <EikiGQ> hehe
[18:01] <gregaf> :)
[18:03] <EikiGQ> I guess I was confusing it with this
[18:03] <EikiGQ> osd pg bits
[18:03] <EikiGQ> Description: Placement group bits per OSD.
[18:03] <EikiGQ> Type: 32-bit Integer
[18:03] <EikiGQ> Default: 6
[18:03] <gregaf> ah
[18:04] <gregaf> that flag is used to set how many PGs are in the default-created pools
[18:04] <EikiGQ> ok
[18:04] <EikiGQ> gregaf: Thanks for the info. BTW we had a weird problem earlier today
[18:04] <EikiGQ> gregaf: it seems you can't have 32 OSD's :p
[18:04] <EikiGQ> gregaf: on the same server because you run out of open ports?
[18:04] <gregaf> it attempts to scale the numbers semi-appropriately, so it takes the number of OSDs and bit-shifts it "osd pg bits" bits to the left in order to choose the number of PGs
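
A worked example of that shift, assuming the default "osd pg bits" value of 6:

    # PGs per default-created pool = number of OSDs << osd pg bits
    echo $((32 << 6))    # 32 OSDs with 6 bits -> 2048 PGs
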
[18:05] <gregaf> yeah, that's possible
[18:06] <EikiGQ> gregaf: ok, how? Can we extend the port range? There was nothing else on the 6800-6900 range but we had 2-4 osd that couldn't register
[18:06] <gregaf> yep, looks like it's currently hard-coded to 6800-6900
[18:06] <gregaf> :/
[18:06] <EikiGQ> exactly
[18:06] <gregaf> so no config option
[18:06] <gregaf> I'll go file a bug
[18:06] <EikiGQ> gregaf: so we had to leave 2 disks out.
[18:06] <EikiGQ> gregaf: thanks :)
[18:06] <EikiGQ> gregaf: Since I have your expert attention :p
[18:07] <EikiGQ> gregaf: do journals and mds also "spread" their data around?
[18:07] <gregaf> which journals?
[18:07] <gregaf> mds, yes, of course
[18:08] <EikiGQ> gregaf: I'm just wondering because one specifies multiple osds and mons but global journal and single mds...
[18:09] <EikiGQ> gregaf: Is it better to define per osd journals?
[18:09] <EikiGQ> gregaf: And multiple mds?
[18:09] <gregaf> do you mean the location for the OSD journal?
[18:09] <EikiGQ> yeah
[18:09] <gregaf> each OSD gets its own journal….
[18:10] <EikiGQ> gregaf: yes. Is it better to have the journal on a separate disk?
[18:10] <gregaf> well, all data that the OSD gets is written on the journal
[18:10] <EikiGQ> gregaf: or just makes things more of a hassle...
[18:10] <gregaf> so it will go faster if the journal is on faster media
[18:10] <gregaf> but whether that's a separate disk or what depends a lot on your specific configuration
[18:11] <EikiGQ> gregaf: I see. And the journal space requirements vs. the data?
[18:12] <gregaf> the journal can be pretty small; it should be able to absorb the writes while your data disk is syncing
[18:12] <gregaf> on 1GigE, 10GB is more than enough
[18:12] <gregaf> on 10GigE, you might want it to be a bit larger, but presumably your storage is also faster so syncs shouldn't take so long
[18:12] <gregaf> *shrug*
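
A sketch of how the journal location and size can be set per OSD in ceph.conf, following the sizing guidance above; the hostname and device path are illustrative:

    [osd]
        # roughly the ~10GB suggested above for 1GigE (value is in MB)
        osd journal size = 10240

    [osd.6]
        host = s302
        # e.g. a partition on a faster device or SSD
        osd journal = /dev/sdb1
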
[18:12] <EikiGQ> gregaf: wondering if a couple of SSDs or more for the journals of those 32 OSDs would help?
[18:13] <gregaf> depends on the speed of the SSDs and those 32 disks ;)
[18:14] <EikiGQ> gregaf: hehe true. Well 32 SATA (enterprise...but SATA)
[18:15] <gregaf> I have no idea, are the SSDs as fast in aggregate as the disks are? If not, probably it'll slow things down, but again, it depends
[18:15] <gregaf> certainly with 12 disks it's worth sticking in a couple of SSDs, but 32 is a lot bigger number than 12 ;)
[18:15] <EikiGQ> gregaf: ok I guess that would be an interesting thing to test.
[18:15] <gregaf> I've gotta go though, trying to debug something here
[18:16] <Tv_> even with 12 disks, that's often "a couple"
[18:16] <EikiGQ> gregaf: alright no problem. You answered my key things, thanks :)
[18:16] <EikiGQ> Tv_: ok
[18:16] <Tv_> streaming writes to an ssd are not magnificently faster than streaming writes to spinning rust
[18:16] <Tv_> so don't expect 1 ssd to handle journal for the rest of the disks
[18:17] <joao> gregaf, sagewk, would we want to upgrade the mon store without any user intervention?
[18:17] <Tv_> hmm a 320GB fusionio was $7500 a year ago
[18:17] <joao> or should we expect the user to explicitly run the monitor with an '--please-do-upgrade' kind of option?
[18:17] <EikiGQ> what was the command to see the current PG setting of a pool?
[18:19] <EikiGQ> never mind I'll figure it out. Thanks!
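
For the record, one way to see a pool's current PG count: the pool lines of 'ceph osd dump' include it (newer releases also offer 'ceph osd pool get <pool> pg_num'):

    ceph osd dump | grep pg_num    # each pool line shows its pg_num and pgp_num
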
[18:19] * EikiGQ (~EikiGQ@rtr1.tolvusky.sip.is) Quit (Remote host closed the connection)
[18:20] * yehudasa_ (~yehudasa@mda2736d0.tmodns.net) has joined #ceph
[18:29] * loicd (~loic@82.235.173.177) Quit (Quit: Leaving.)
[18:29] * loicd (~loic@magenta.dachary.org) has joined #ceph
[18:32] <Tv_> oh fusionio prices dropped to $11/GB in October 2011
[18:32] <Tv_> and throughput is promised at around 3Gbps
[18:32] <Tv_> that's your "single journal for all the spinning rust", if any -- SSDs can't match that
[18:33] <Tv_> wait that's lowercase b still? soo 300MBps? i have the bits/bytes thing
[18:33] <Tv_> *hate
[18:34] <Tv_> ah there we go, 1.5GB/s: http://www.engadget.com/2012/04/12/fusion-io-iofx/
[18:34] <Tv_> with a capital B
[18:35] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Ping timeout: 480 seconds)
[18:35] <Tv_> $6/GB it seems for the 420GB model
[18:40] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[18:41] <joao> Tv_, those ISP door-to-door salesmen that consistently interrupt me during dinner have also learnt to hate the bits/bytes thing
[18:41] <joao> especially whenever they promise me 100MBps for just over 50 Eur
[18:43] * Cube (~Adium@12.248.40.138) has joined #ceph
[18:43] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[18:44] <Tv_> joao: whoa isp door-to-door? now there's something i haven't seen
[18:46] * nhmlap_ (~nhm@67-220-20-222.usiwireless.com) has joined #ceph
[18:47] <joao> Tv_, it's pretty common here; ISPs employ door-to-door salesmen (usually young, hip teens) to sell their bundles (tv, phone and internet access)
[18:47] * cblack101 (c0373727@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[18:47] <Tv_> oh god how i hate bundles
[18:48] * nhmlap (~nhm@174-20-43-18.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[18:48] <joao> well, I don't really care about the tv or the phone service tbh; but they won't provide you with a higher speed internet connection unless you subscribe to the whole bundle
[18:49] <Tv_> or they do, but at a *higher* price, which is just their way of saying "fuck you, customer"
[18:50] <joao> only if you're an enterprise-grade customer
[18:50] * cblack101 (c0373725@ircip2.mibbit.com) has joined #ceph
[18:50] <joao> although you may opt to just use the internet connection and pay the tv and phone to another provider
[18:50] <joao> but you'll get their service installed anyway
[18:50] <joao> and pay accordingly, of course ;)
[18:50] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[18:54] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[18:58] <joao> sjust, around?
[18:59] * yehudasa_ (~yehudasa@mda2736d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[19:03] * dmick (~dmick@2607:f298:a:607:f906:b0fb:953f:dd54) has joined #ceph
[19:03] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[19:03] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:21] * jjgalvez (~jjgalvez@12.248.40.138) has joined #ceph
[19:29] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[19:43] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[19:49] <amatter> gregaf: not sure why I'm getting leveldb errors when I'm on btrfs. I thought leveldb is used only on ext4?
[19:49] <gregaf> nope, everybody uses it
[19:49] <gregaf> just ext4 uses it a lot more
[19:50] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) Quit (Quit: Leaving.)
[19:50] * The_Bishop (~bishop@e179007133.adsl.alicedsl.de) has joined #ceph
[19:51] <amatter> oic, thanks
[20:05] * The_Bishop_ (~bishop@e179022038.adsl.alicedsl.de) has joined #ceph
[20:05] <amatter> on this particular osd I switched on btrfs compression. Any known issues with btrfs compression on an osd?
[20:08] * sagelap (~sage@c-66-31-47-40.hsd1.ma.comcast.net) has joined #ceph
[20:09] <gregaf> amatter: I don't think so, but I'm not sure how much experience we have running with it
[20:11] * maelfius (~mdrnstm@66.209.104.107) has joined #ceph
[20:11] * BManojlovic (~steki@212.200.243.39) Quit (Remote host closed the connection)
[20:12] * The_Bishop (~bishop@e179007133.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[20:36] * sagelap (~sage@c-66-31-47-40.hsd1.ma.comcast.net) has left #ceph
[21:41] * lofejndif (~lsqavnbok@28IAAHP79.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:41] * Deuns (~kvirc@office.resolvtelecom.fr) has joined #ceph
[21:41] <Deuns> hello all
[21:44] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[21:44] <Deuns> Using Debian 3.2.0-3-amd64 with Ceph version 0.51 (commit:c03ca95d235c9a072dcd8a77ad5274a52e93ae30) and KVM/Qemu 1.1.1/1.1.0, I get that error when starting a VM with a RDB disk : kvm: -drive file=rbd:data/vm_disk1:auth_supported=none,if=none,id=drive-virtio-disk1,format=raw: could not open disk image rbd:data/vm_disk1:auth_supported=none: No such file or directory
[21:44] <Deuns> my ceph.conf shows "auth supported = none"
[21:45] <joshd> sounds like your qemu might not have rbd support compiled in
[21:45] <Deuns> qemu-img info rbd:data/vm_disk1 resturns good result
[21:45] <Deuns> s/resturns/returns
[21:46] <Tv_> Deuns: qemu-img is in a different deb, so you might actually have different versions of those..
[21:46] <joshd> does 'kvm -drive format=?' include rbd?
[21:48] <Deuns> well, I'm ashamed...
[21:48] <dmick> psh, don't be, it happens
[21:49] <Deuns> I really thought it was an all-in-one package but kvm doesn't support rbd
[21:49] <Deuns> thank you very much guys
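
A sketch of the check joshd suggested and of the qemu-img counterpart; the grep is just a convenience:

    # does this kvm binary have rbd compiled in?
    kvm -drive format=? 2>&1 | grep -i rbd
    # qemu-img comes from a separate package and may list rbd even when kvm does not
    qemu-img --help 2>&1 | grep -i rbd
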
[21:50] <Deuns> Just read http://www.sebastien-han.fr/blog/2012/09/13/intanks-guys-are-awesome/ this morning and indeed that's true :)
[22:04] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[22:04] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) Quit ()
[22:11] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[22:19] * scuttlemonkey (~scuttlemo@173-14-58-198-Michigan.hfc.comcastbusiness.net) Quit (Quit: zzzzzzzzzzzzzzzzzzzz)
[22:19] <nhmlap_> woot, new drives came in while I was at lunch with elder!
[22:20] <dmick> aw. kind of you to say Deuns
[22:21] <dmick> and Sébastien as well.
[22:23] <nhmlap_> Deuns: that's great, both that you got the help you needed, and that we got an awesome blog post out of it! :)
[22:24] <nhmlap_> sjust++
[22:25] <stan_theman> is there a numeric limit to how high i can number my OSDs in ceph.conf? something like a short int limit?
[22:26] <Tv_> stan_theman: don't leave large gaps
[22:26] <Tv_> stan_theman: now, if you really have more than some tens of thousands of osds, we'd like to talk to you ;)
[22:26] <stan_theman> heh
[22:26] <stan_theman> how large is a large gap?
[22:27] <Tv_> stan_theman: something like greater than what would fit in a single rack; say a hundred to make it round
[22:27] <Tv_> stan_theman: think of it as an array not a map, if that makes sense
[22:28] <stan_theman> it does, but kills a couple ideas I was having :P although thinking of it as an array makes those ideas goofy!
[22:28] <dmick> yeah. Try not to use osd id as a user-meaningful name, is the idea
[22:28] <Tv_> stan_theman: my ideas are heavily in the realm of "integer allocation is a job for machines, not humans"
[22:29] <Tv_> ceph osd create is your friend
[22:30] <stan_theman> i agree, but it's good to know that it *has* to be a job for the machines
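
A sketch of letting the cluster hand out ids, as suggested above:

    ceph osd create     # prints the next free osd id, e.g. 12
    # use that id for the new [osd.12] section and its data directory
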
[22:31] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) has joined #ceph
[22:31] * Burnie (~Bram@d5152D87C.static.telenet.be) Quit (Read error: Connection reset by peer)
[22:33] <amatter> seems this bug is reporting my leveldb crash: http://tracker.newdream.net/issues/2563 However, it's set to can't reproduce. What helpful information can I provide on the bug before I format the osd and start over?
[22:33] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) Quit (Quit: Leaving)
[22:36] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[22:42] <gregaf> amatter: what your workload looked like, the btrfs compression you had on, any other options that you set, and the Ceph version and OS this happened on
[22:42] <Tv_> amatter: it looks like a corruption on the underlying fs.. if the leveldb is small enough, and you can share it, please upload it to the upstream bug report
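
If sharing the store, something along these lines; treating current/omap as the location of the OSD's leveldb is an assumption here:

    # assumption: the leveldb store lives under the OSD data dir in current/omap
    tar czf osd-omap.tar.gz /var/lib/ceph/osd/ceph-<id>/current/omap
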
[22:50] <nhmlap_> bah, I forgot that ceph health doesn't like 1 OSD.
[22:54] * EmilienM (~EmilienM@ADijon-654-1-133-33.w90-56.abo.wanadoo.fr) has left #ceph
[22:57] * BManojlovic (~steki@212.200.243.39) has joined #ceph
[23:07] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[23:15] <mikeryan> i saw an MDS segfault in one of my regression runs on the backfill stuff
[23:15] <mikeryan> which is pretty surprising, since the branch doesn't touch MDS code at all
[23:17] <elder> nhmlap_, good thing we went to lunch.
[23:18] <nhmlap_> elder: exactly, the rule of deliveries states you must not be home.
[23:18] <elder> Yup.
[23:18] <elder> You might have had to wait all week otherwise.
[23:18] <gregaf> mikeryan: what was the backtrace?
[23:19] <mikeryan> gregaf: http://pastebin.com/yDpLFsLQ
[23:23] <gregaf> mikeryan: are there logs?
[23:27] <mikeryan> yep, this was a regression log so i have everything produced by teuthology
[23:28] <mikeryan> unfortunately the logging isn't very verbose
[23:28] <mikeryan> gregaf: this particular one is in teuthology:/a/mikeryan-2012-09-14_15:58:39-regression-wip_backfill_full-master-basic/22595
[23:28] <gregaf> so this issue probably isn't your fault, but it is coming on after an osd_op_reply message so I'd like to check out the logs and be sure
[23:29] <gregaf> I'm in the middle of a review, but i'll check this out later — poke me if I haven't by the time you leave
[23:29] <mikeryan> k
[23:31] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[23:37] <nhmlap_> so, SAS2208 (ie expensive raid controller). 8 rados bench instances, 4MB IOs. JBOD Mode with 6 OSDs: 618MB/s. Raid0 Mode with 1OSD: 195MB/s
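
For reference, a sketch of the kind of run being described; the pool name and duration are illustrative, and the 8 instances were presumably launched in parallel from separate shells:

    # one instance: 4MB writes for 60 seconds against a test pool
    rados -p testpool bench 60 write -b 4194304
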
[23:37] <dmick> Raid0 meaning one drive?
[23:37] <gregaf> that would be terrifying if I didn't already know an OSD daemon could handle more than 195MB/s of throughput
[23:38] <gregaf> instead it's just sad
[23:38] <nhmlap_> dmick: raid0 with the 6 drives that were previously OSDs.
[23:38] <dmick> oh dear
[23:39] <nhmlap_> gregaf: the $150 SAS controller with 6 OSDs does 790MB/s.
[23:40] <gregaf> what is so strange about our workload that we work better on dumber controllers?
[23:40] <darkfader> so does the $150 controller turn on caches by mistake? :)
[23:40] <darkfader> or anything other disgusting
[23:41] <dmick> gregaf: personally I think this is the dirty secret of HW RAID
[23:41] <nhmlap_> gregaf: I'm the wrong person to ask, I think $150 SAS controllers are inherently superior to expensive raid controllers. :)
[23:41] <dmick> there's always a performance cost
[23:41] <dmick> you're paying more than you think for the redundancy
[23:41] <gregaf> not with RAID0....
[23:42] <nhmlap_> gregaf: it's the same thing the ZFS guys founds.
[23:42] <dmick> for the...potential...for redundancy :)
[23:42] <nhmlap_> s/founds/found
[23:44] <nhmlap_> dmick: yeah, I agree with you on the dirty secret business. Honestly I think it's what makes ceph really really attractive.
[23:46] <elder> nhmlap_, I don't know if you had it but you didn't hand over my Pi
[23:46] <nhmlap_> elder: doh!
[23:46] <elder> I forgot to ask about it too.
[23:46] <nhmlap_> elder: I had it sitting down here in my bag next to the server!
[23:47] <nhmlap_> elder: I assume you already came back from Burnsville?
[23:47] <elder> Yes, I'm at home now.
[23:47] <elder> No problem. I don't know that I have much time to look at it right away anyway.
[23:47] <nhmlap_> Ok, guess that's as good of a reason as any to plan our next meeting. :)
[23:47] <elder> Agreed. Maybe next week, but more half way this time.
[23:49] <nhmlap_> Ok, let me figure out when the Car is free and I'll get back to you. Afternoon preschool kind of mucks things up.
[23:49] <elder> That's fine. I'm probably more free in my schedule during the day then.
[23:52] <cblack101> @dmick - chris here, what does this entry in the ceph log mean: 2012-09-17 14:52:20.207308 mon.0 10.19.226.41:6789/0 15803 : [INF] pgmap v15682: 9224 pgs: 9222 active+clean, 1 active+recovering+degraded+remapped+backfill, 1 active+recovering+degraded+backfill; 399 GB data, 761 GB used, 42998 GB / 43759 GB avail; 10112/204726 degraded (4.939%)
[23:52] <cephalobot> cblack101: Error: "INF" is not a valid command.
[23:55] <joshd> cblack101: sounds like one of your osds went down, and 2 of your placement groups needed a more complete recovery (this is backfill)
[23:55] <cblack101> does that mean that one of my disks in the hosts died?
[23:55] <cblack101> thanks for the reply josh
[23:56] <joshd> possibly, or just the osd became unresponsive for too long (fs bug or osd bug are possible)
[23:56] <joshd> does ceph -s still report all osds up and in?
[23:57] <cblack101> ceph -s reports health HEALTH_WARN 2 pgs backfill; 2 pgs degraded; 2 pgs recovering; 2 pgs stuck unclean; recovery 8038/204726 degraded (3.926%)
[23:57] <dmick> hey cblack101
[23:57] <cblack101> I'm running multiple workers in Iometer against multiple rbds in multiple servers
[23:57] <joshd> what about the line starting with osdmap?
[23:58] <cblack101> that line says: osdmap e56: 48 osds: 47 up, 47 in
[23:58] <mikeryan> 2
[23:58] <dmick> yep, somebody dead
[23:58] <joshd> ceph osd dump | grep down will tell you who
[23:59] <joshd> then you can check that osd's log for the cause
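
Pulling the diagnosis steps above together; the log path is the usual default, and the exact file name depends on the 'log file' setting:

    ceph -s                               # the osdmap line shows how many OSDs are up/in
    ceph osd dump | grep down             # which osd id is marked down
    less /var/log/ceph/ceph-osd.<id>.log  # check that OSD's log for the cause
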
[23:59] <cblack101> I'll go check the drive lights, is there a process (web link) on how to get this OSD back up & running after I replace the drive?
[23:59] <joshd> it's not necessarily a bad drive

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.