#ceph IRC Log

Index

IRC Log for 2013-12-13

Timestamps are in GMT/BST.

[0:00] <sagewk> they should each get their own partition
[0:00] <ishkabob> thanks sagewk
[0:00] <ishkabob> never created raw partitions, but i'm sure its straightforward enough
[0:00] <ishkabob> google it is
[0:00] <wrencsok> we use raw partitions sized to store the max potential of data in flight over the max 5 second default window. solved many of our stability and performance issues by budgeting that with our hardware correctly.
[0:01] * rturk is now known as rturk-away
[0:01] <xarses> sagewk: but it works if they all have the same one? how is the osd_journal_size handled (or ignored)?
[0:03] <sagewk> xarses: no, they each need a separate one
[0:03] <sagewk> they take a lock so ceph-osd won't start up if someone else is using the journal device
[0:04] <sagewk> the size informs how big the partition is (when ceph-disk creates it) or how big the file is (if it is a file created by ceph-osd or ceph-disk)
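[Editor's note] The setting sagewk describes lives in ceph.conf; a minimal sketch (the 10240 MB figure is an arbitrary example, not from the log):

```ini
[osd]
; size in MB; ceph-disk uses this when it creates the journal
; partition, and ceph-osd/ceph-disk use it when creating a journal file
osd journal size = 10240
```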
[0:06] <xarses> sagewk: hmm, multiple osd's appear to work with the same raw journal device
[0:06] <xarses> (dumpling)
[0:07] <sagewk> oh hmm, yeah it's not actually locking it.
[0:07] <sagewk> that should be fixed
[0:08] <sagewk> they may appear to work that way but will fail spectacularly on replay
[0:08] <dmick> that's going to be one unhappy osd
[0:09] * vata (~vata@2607:fad8:4:6:9112:4bc3:caa8:a2d1) Quit (Quit: Leaving.)
[0:12] <ishkabob> sagewk: so i created partitions and bound them to raw devices, but ceph-deploy didn't seem to like that

[0:13] <ishkabob> sagewk: should I be giving it the device path instead of the raw path?
[0:13] <Pedras> ishkabob: do they have entries in /dev ?
[0:13] <sagewk> what do you mean raw path?
[0:13] <sagewk> /dev/sda1, /dev/sda2, etc.
[0:13] <ishkabob> pedras: they do, i just created partitions with fdisk (and a GPT part table)
[0:14] <ishkabob> sagewk: but how do i tell the partition that it's a raw partition
[0:14] <ishkabob> i used the raw command - raw - bind a Linux raw character device
[0:14] <Pedras> ishkabob: just asking… sometimes I find myself having to run stuff like partx to update that
[0:14] <dmick> those are two different kinds of raw.
[0:14] <dmick> you just want a partition block device
[0:14] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[0:15] <ishkabob> dmick: thanks, is it ok if the disk has a GPT table? or should I use a dos table?
[0:15] <dmick> GPT is better; there are startup scripts that will autostart OSDs with the right kind of GPT partitions on them
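[Editor's note] The autostart dmick mentions works because ceph-disk tags journal partitions with a specific GPT type GUID that Ceph's udev rules match. A hedged sketch (the device and size are hypothetical, and the sgdisk command is only echoed, not run):

```shell
# GPT partition type GUID that ceph-disk assigns to journal partitions,
# which the udev/startup scripts match to autostart OSDs:
JOURNAL_TYPE_GUID="45b0969e-9b03-4f30-b4c6-b4b80ceff106"
DISK="/dev/sdb"   # hypothetical device

# Print (rather than run) the sgdisk invocation that would create a
# 10 GiB journal as partition 3 with that type code:
echo sgdisk --new=3:0:+10G --typecode=3:"$JOURNAL_TYPE_GUID" "$DISK"
```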
[0:16] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[0:16] <Pedras> sage: do you happen to have a suggestion as to why something like dd if=/dev/zero of=… oflag=direct would work fine whereas without oflag=direct the cluster starts to show a lot of slow response (emperor, cephfs)
[0:23] * dxd828 (~dxd828@host217-43-217-142.range217-43.btcentralplus.com) has joined #ceph
[0:24] * gmeno (~gmeno@38.122.20.226) Quit (Ping timeout: 480 seconds)
[0:26] * dmsimard (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[0:27] * Underbyte (~jerrad@pat-global.macpractice.net) Quit (Remote host closed the connection)
[0:28] * mfisch (~mfisch@67.79.6.211) Quit (Quit: Leaving)
[0:29] * gmeno (~gmeno@38.122.20.226) has joined #ceph
[0:35] * Underbyte (~jerrad@pat-global.macpractice.net) has joined #ceph
[0:38] <ishkabob> hey guys, I just put some new osds in and everything seems to be working fine except just one osd doesn't want to come up
[0:38] <ishkabob> in the logs, it just hangs on:
[0:38] <ishkabob> journal _open /var/lib/ceph/osd/ceph-32/journal fd 19: 21474836480 bytes, block size 4096 bytes, directio = 1, aio = 1
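[Editor's note] The byte count in that _open line is a quick sanity check away from a round journal size:

```shell
# The journal _open line reports 21474836480 bytes; confirm that this
# is exactly 20 GiB (i.e. a 20480 MB "osd journal size"):
bytes=21474836480
gib=$(( bytes / 1024 / 1024 / 1024 ))
echo "${gib} GiB"
```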
[0:39] * gmeno (~gmeno@38.122.20.226) Quit (Ping timeout: 480 seconds)
[0:44] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:44] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[0:46] <pmatulis2> ishkabob: what OS and does the init script start the process ok?
[0:50] <ishkabob> fc19, init script works fine, all the other osds work fine too
[0:50] <ishkabob> i added 5 from the same box, other 4 are fine
[0:50] * mattbenjamin1 (~matt@aa2.linuxbox.com) Quit (Quit: Leaving.)
[0:51] <Pedras> ishkabob: not to harp on it again but I have usually bumped into a device entry missing for the journal's partition, fc19 as well
[0:51] <ishkabob> pmatulis2: service status says its running
[0:51] <ishkabob> pedras: yeah i checked, its in there, /dev/sdb3
[0:52] <ishkabob> for the journal that is
[0:52] <Pedras> and it is a block device not some plain file
[0:52] <Pedras> I have seen some weird stuff :)
[0:52] <ishkabob> pedras: heh, yes it appears to be
[0:52] * mschiff (~mschiff@dslb-088-073-037-140.pools.arcor-ip.net) Quit (Remote host closed the connection)
[0:52] <Pedras> that is all you see in the osd log?
[0:55] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[0:57] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:58] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[1:03] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[1:03] * jcsp (~jcsp@38.122.20.226) has joined #ceph
[1:04] * gmeno (~gmeno@38.122.20.226) has joined #ceph
[1:05] * dis (~dis@109.110.66.181) Quit (Ping timeout: 480 seconds)
[1:08] * dis (~dis@109.110.66.28) has joined #ceph
[1:19] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[1:21] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[1:22] * ishkabob (~c7a82cc0@webuser.thegrebs.com) Quit (Quit: TheGrebs.com CGI:IRC (Ping timeout))
[1:24] * xmltok (~xmltok@216.103.134.250) Quit (Quit: Bye!)
[1:27] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[1:27] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[1:29] * scuttlemonkey (~scuttlemo@173-228-7-214.dsl.static.sonic.net) has joined #ceph
[1:29] * ChanServ sets mode +o scuttlemonkey
[1:31] * mozg (~andrei@host81-151-251-29.range81-151.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:34] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[1:35] * jcsp (~jcsp@38.122.20.226) has joined #ceph
[1:36] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[1:37] <aarontc> sagewk: are you around? :)
[1:42] <aarontc> I'm trying to boot my MDS with the wip-journaler-kludge that Sage committed for me, after fixing a small typo and merging current git master... but I don't know what to insert as the "journaler force write pos" value from my MDS log
[1:44] <Pedras> start doing writes without oflag=direct and bahm!
[1:44] <Pedras> 2013-12-12 16:43:17.383597 osd.43 [WRN] slow request 30.153766 seconds old, received at 2013-12-12 16:42:47.229750: osd_op(client.4605.1:1798755 1000001cc44.00001ad6 [write 0~4194304 [1@-1],startsync 0~0] 0.908e2969 snapc 1=[] e1015) v4 currently waiting for subops from [9]
[1:45] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[1:46] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:49] <aarontc> anyone know how to decipher MDS log lines like this? 2013-12-07 14:47:34.069184 7ff8c2472700 1 -- 10.42.5.30:6800/26481 <== osd.3 10.42.6.29:6812/6443 3 ==== osd_op_reply(32 200.00009cce [read 0~4194304] v0'0 uv0 ack = -2 (No such file or directory)) v6 ==== 171+0+0 (712402494 0 0) 0x7ff890000b20 con 0x7ff8ac011260
[1:51] <gregsfortytwo> aarontc: timestamp, thread outputting line, IP receiving message, name of daemon and ip originating message
[1:52] <gregsfortytwo> message type (osd_op_reply), then a bunch of data about the message enclosed in parens, then after the ==== some data about the message on-wire format you won't ever care about (sizes of the pieces), and pointers to the message and the connection it's associated with
[1:52] <gregsfortytwo> the interesting bits are the message itself, obviously
[1:52] <gregsfortytwo> it's an osd op reply, so the osd is replying to a request
[1:52] <aarontc> gregsfortytwo: okay, cool. any idea which field is the journal write ID?
[1:52] <gregsfortytwo> in order, I believe
[1:52] <aarontc> or how I get that, maybe in previous lines?
[1:54] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[1:54] <gregsfortytwo> 32, request id from sender; 200.00009cce, object request is associated with; read 0~4194304, it's a read op for 4MB starting at offset 0; the object is v0'0 (that's osd map epoch followed by a within-map version);, and it's an ack (rather than ondisk) with result -2
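[Editor's note] gregsfortytwo's field-by-field breakdown can be checked against a sample reply (abbreviated from the line aarontc pasted; the awk field splits are just one way to pull the pieces out):

```shell
# First paren-delimited fields of an osd_op_reply, in order:
# client request id, object name, op description...
reply='osd_op_reply(32 200.00009cce [read 0~4194304] ack = -2)'

# Split on '(' or space: field 2 is the request id from the sender
tid=$(echo "$reply" | awk -F'[( ]' '{print $2}')
# Split on spaces only: field 2 is the object the request targets
obj=$(echo "$reply" | awk '{print $2}')
echo "tid=$tid obj=$obj"
```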
[1:54] <aarontc> trying to figure out what will need to be correctly injected into void Journaler::_finish_read_head(int r, bufferlist& bl) for h.write_pos, essentially
[1:54] <gregsfortytwo> you can't get the journal write id outta there
[1:55] <aarontc> hmm okay
[1:55] <gregsfortytwo> I don't think you can from the message debug in general, but in particular it's a read rather than write request! ;)
[1:55] <aarontc> I was wondering about the v0'0 part too, but only because I'm curious lol
[1:56] <aarontc> okay, do you know how I can figure out that value? when Sage was helping me a few days ago, we determined that object 200.00009cce definitely doesn't exist
[1:56] <aarontc> so I have to make the MDS quit replaying the journal before it hits that
[1:56] <gregsfortytwo> do you have any objects in the journal following that?
[1:57] <aarontc> I have no idea, I don't know how to tell.
[1:57] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[1:57] <aarontc> the log file shows additional lines like "2013-12-07 14:47:34.068953 7ff8aa0ee700 12 mds.0.cache.dir(10000000006) link_primary_inode ..." after the OSD error
[1:57] <gregsfortytwo> object 200.00009cce is the (octal) 9cce'th object in inode 200
[1:57] <aarontc> but not very many before it gets to the result of mds.0.log _replay journaler got error -2, aborting
[1:58] <gregsfortytwo> so list the objects in the metadata pool and see if there are more of them
[1:58] <gregsfortytwo> the link_primary_inode is work that it's still doing from earlier log entries
[1:58] <gregsfortytwo> it prefetches the log
[1:59] <aarontc> 9cce is octal or hex? :)
[1:59] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[1:59] <gregsfortytwo> heh, sorry, right
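[Editor's note] So, with hex confirmed: metadata objects are named `<inode>.<index>`, both in hex, the index zero-padded to 8 digits. A quick check that inode 0x200 (the MDS log) and index 0x9cce reproduce the object name from the log:

```shell
ino=$(( 0x200 ))    # MDS log inode
idx=$(( 0x9cce ))   # object index within that inode
name=$(printf '%x.%08x' "$ino" "$idx")
echo "$name"
```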
[1:59] <aarontc> I'm running 'rados ls -p metadata' right now
[1:59] <aarontc> when it finishes I can grep the result
[2:00] <aarontc> yes, there are objects after that one
[2:00] <aarontc> http://hastebin.com/boruvewayu.rb
[2:02] <aarontc> before your explanation I really had no idea how the metadata object names related to the filesystem :) is it possible to make an empty object with the missing name and get the MDS back up that way?
[2:03] <yanzheng> aarontc, new fs issue?
[2:03] <gregsfortytwo> yes, it's possible, but
[2:03] <aarontc> yanzheng: same one, I was trying to apply the fix Sage tried to set up for me last week (had to fix typo in his commit), but now I don't know how to proceed:)
[2:04] <gregsfortytwo> what'd sage give you?
[2:04] <aarontc> in terms of data loss, I understand that there is no guarantee, and I would just like to recover what I can... I'd rather have a couple of unreadable dirs than nothing (but I have no idea how that would present...)
[2:04] <gregsfortytwo> I didn't see any of that talk
[2:05] <aarontc> gregsfortytwo: the branch 'wip-journaler-kludge', which I updated here: https://github.com/aarontc/ceph/commit/533b887ab9ad71e23af4f6d93a6909609e6faba9
[2:05] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[2:05] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[2:05] <aarontc> I can pastebin the IRC log from the time if that'd help
[2:06] <gregsfortytwo> aarontc: so you set that to prior to the dead zone (and are thereby going to lose whatever metadata updates are contained in the log afterwards), but it's still not working?
[2:06] <aarontc> http://hastebin.com/nayirokavi.irc
[2:07] <gregsfortytwo> what you'll want to do is set the write pos to 1 after the end of the last successful log entry
[2:07] <aarontc> gregsfortytwo: I don't know the correct value to set. I just today got the codebase building with that change
[2:07] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit (Read error: Connection reset by peer)
[2:07] <gregsfortytwo> okay, so if you look through the log you'll see entries like, just a sec...
[2:08] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[2:08] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[2:09] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[2:09] <gregsfortytwo> _replay <offset>~<size> <timestamp> <output from log event>
[2:09] <yanzheng> enable mds debug 10, then grep _replay in
[2:09] <gregsfortytwo> (basically, grep for _replay)
[2:09] <gregsfortytwo> heh, that
[2:09] * haomaiwang (~haomaiwan@119.4.172.70) has joined #ceph
[2:10] <gregsfortytwo> okay, enjoy yanzheng :)
[2:10] <aarontc> I set every debug param I could find to 20, let me grep :)
[2:10] <aarontc> 2013-12-07 14:47:34.057718 7ff8aa0ee700 10 mds.0.log _replay 168325824678~112965 / 168424513757 2013-12-02 15:44:48.517993: EOpen [metablob 1, 42 dirs], 220 open files
[2:10] <aarontc> that is the last _replay event before the shutdown
[2:11] <aarontc> so 168325824678 + 1 is the correct value to inject?
[2:11] * zidarsk8 (~zidar@89-212-28-144.dynamic.t-2.net) has joined #ceph
[2:12] <yanzheng> 168325824678 + 112965 + 1
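[Editor's note] yanzheng's arithmetic, spelled out: the last good _replay line reads `168325824678~112965`, i.e. offset ~ length, so the forced write pos is one byte past the end of that entry:

```shell
# From "_replay 168325824678~112965" in aarontc's MDS log:
offset=168325824678
length=112965
write_pos=$(( offset + length + 1 ))
echo "$write_pos"
```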
[2:13] * LeaChim (~LeaChim@host81-159-251-38.range81-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:13] <aarontc> okay, I'll give that a shot
[2:15] * halfss (~halfss@111.161.17.74) has joined #ceph
[2:15] <zidarsk8> Hi, is there any documentation on how ceph handles OSD failure? like how many OSDs can fail and still have the data available? and does ceph rebalance data when an OSD fails?
[2:16] <aarontc> yanzheng: so I'm adding these two lines to my ceph.conf, this seems correct?
[2:16] <aarontc> [mds]
[2:16] <aarontc> journaler force write pos = 168325937644 # 168325824678 + 112965 + 1
[2:16] <yanzheng> i think so
[2:17] <aarontc> is it going to cause problems if I am running the rebuild MDS binary on a different host, and I add the keyring and so forth?
[2:18] * haomaiwang (~haomaiwan@119.4.172.70) Quit (Ping timeout: 480 seconds)
[2:18] * dpippenger1 (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[2:19] <Nats> zidarsk8, it will rebalance when an OSD fails
[2:19] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[2:23] <aarontc> MDS is booting, will see if it works (fingers crossed!)
[2:23] * dxd828 (~dxd828@host217-43-217-142.range217-43.btcentralplus.com) Quit (Quit: Computer has gone to sleep.)
[2:27] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Quit: Leaving.)
[2:27] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[2:28] * nwat (~textual@eduroam-247-164.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:28] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[2:29] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[2:32] <aarontc> yanzheng: it worked! cephfs mounted :)
[2:32] <yanzheng> glad to hear that
[2:32] <aarontc> got a bitcoin address? :)
[2:32] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[2:33] <yanzheng> no
[2:33] <aarontc> lol, well I'm very happy, thank you very much
[2:34] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[2:36] <yanzheng> np
[2:37] * JC (~JC@2607:f298:a:607:903c:efe8:3aa9:302c) Quit (Quit: Leaving.)
[2:42] * zhyan_ (~zhyan@134.134.137.73) has joined #ceph
[2:43] * gmeno (~gmeno@38.122.20.226) Quit (Remote host closed the connection)
[2:45] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[2:47] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[2:49] * mwarwick (~mwarwick@2407:7800:400:1011:3e97:eff:fe91:d9bf) has joined #ceph
[2:53] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[2:55] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[2:57] * gmeno (~gmeno@38.122.20.226) has joined #ceph
[2:58] * rturk-away is now known as rturk
[2:59] * rturk is now known as rturk-away
[3:01] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[3:01] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Ping timeout: 480 seconds)
[3:04] * harryp (~torment@pool-72-91-144-42.tampfl.fios.verizon.net) has joined #ceph
[3:06] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[3:07] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[3:08] * gmeno (~gmeno@38.122.20.226) Quit (Ping timeout: 480 seconds)
[3:12] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[3:13] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[3:14] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[3:17] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) has joined #ceph
[3:18] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[3:20] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[3:22] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[3:24] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[3:25] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[3:25] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:30] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[3:30] * gmeno (~gmeno@38.122.20.226) has joined #ceph
[3:31] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[3:37] * b1tbkt (~b1tbkt@24-217-192-155.dhcp.stls.mo.charter.com) has joined #ceph
[3:38] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[3:42] * HVT (~root@118.70.170.151) has joined #ceph
[3:45] * angdraug (~angdraug@64-79-127-122.static.wiline.com) Quit (Quit: Leaving)
[3:45] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[3:45] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[3:46] * gmeno (~gmeno@38.122.20.226) Quit (Ping timeout: 480 seconds)
[3:46] * sarob (~sarob@2601:9:7080:13a:9848:4e3a:f9c7:cccc) has joined #ceph
[3:47] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:48] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:53] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[3:53] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[3:54] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:55] * Kaizh (~oftc-webi@c-50-131-202-137.hsd1.ca.comcast.net) Quit (Quit: Page closed)
[3:58] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:59] * carif (~mcarifio@pool-173-76-155-34.bstnma.fios.verizon.net) Quit (Quit: Ex-Chat)
[4:00] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[4:01] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[4:01] * ircolle2 (~Adium@2607:f298:a:607:e885:f73d:1065:85de) Quit (Quit: Leaving.)
[4:02] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[4:05] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[4:05] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[4:10] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[4:11] * haomaiwang (~haomaiwan@119.4.172.70) has joined #ceph
[4:14] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[4:15] * codice (~toodles@71-80-186-21.dhcp.lnbh.ca.charter.com) Quit (Read error: Connection reset by peer)
[4:15] * codice (~toodles@71-80-186-21.dhcp.lnbh.ca.charter.com) has joined #ceph
[4:18] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[4:19] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[4:19] * haomaiwang (~haomaiwan@119.4.172.70) Quit (Ping timeout: 480 seconds)
[4:23] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[4:32] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[4:32] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[4:36] * sjm (~Adium@ip-64-134-217-193.public.wayport.net) has joined #ceph
[4:48] * sjm (~Adium@ip-64-134-217-193.public.wayport.net) Quit (Quit: Leaving.)
[4:51] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[4:55] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[4:57] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[5:01] * sarob (~sarob@2601:9:7080:13a:9848:4e3a:f9c7:cccc) Quit (Remote host closed the connection)
[5:01] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[5:01] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[5:01] * sarob (~sarob@2601:9:7080:13a:9848:4e3a:f9c7:cccc) has joined #ceph
[5:06] * fireD (~fireD@93-142-194-174.adsl.net.t-com.hr) has joined #ceph
[5:06] * sarob (~sarob@2601:9:7080:13a:9848:4e3a:f9c7:cccc) Quit (Remote host closed the connection)
[5:06] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[5:06] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[5:06] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:07] * fireD_ (~fireD@93-142-211-219.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:08] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[5:11] * haomaiwang (~haomaiwan@119.4.172.70) has joined #ceph
[5:14] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:31] <zhyan_> aarontc, don't forget to remove "journaler force write pos" before next mds reboot
[5:33] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:39] * mfisch (~mfisch@67.79.6.211) has joined #ceph
[5:43] * cofol1986 (~xwrj@120.35.11.138) has joined #ceph
[5:46] <cofol1986> Hey guys, does "filestore journal writeahead" mean that the file will be written to the journal and success returned to the write request, and "filestore journal parallel" mean the file is written to both the journal and disk in parallel before returning success to the write request?
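[Editor's note] The options cofol1986 is asking about are filestore settings in ceph.conf; a hedged sketch, with semantics as commonly documented (writeahead commits to the journal first and acks then, and is required on non-btrfs filesystems; parallel lets journal and filestore writes proceed concurrently, which is only safe on btrfs since it can roll back to a snapshot on replay):

```ini
[osd]
; journal first, ack on journal commit, apply to filestore after
filestore journal writeahead = true
; alternative: journal and filestore written concurrently (btrfs only)
;filestore journal parallel = true
```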
[5:46] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[5:47] * haomaiwang (~haomaiwan@119.4.172.70) Quit (Ping timeout: 480 seconds)
[5:47] * haomaiwa_ (~haomaiwan@119.6.74.95) has joined #ceph
[5:48] * bkero (~bkero@216.151.13.66) has joined #ceph
[5:57] * clayb (~kvirc@199.172.169.79) Quit (Ping timeout: 480 seconds)
[5:59] * xinxinsh (~xinxinsh@134.134.137.73) has joined #ceph
[6:01] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[6:03] * mikedawson_ (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[6:04] * sarob (~sarob@2601:9:7080:13a:245e:49fa:2209:17a5) has joined #ceph
[6:05] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) has joined #ceph
[6:05] * zidarsk8 (~zidar@89-212-28-144.dynamic.t-2.net) Quit (Ping timeout: 480 seconds)
[6:07] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[6:08] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[6:08] * mikedawson_ is now known as mikedawson
[6:09] * xinxinsh (~xinxinsh@134.134.137.73) Quit (Remote host closed the connection)
[6:11] * Cube (~Cube@66-87-65-13.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[6:11] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[6:12] * Cube (~Cube@66-87-65-159.pools.spcsdns.net) has joined #ceph
[6:12] * sarob (~sarob@2601:9:7080:13a:245e:49fa:2209:17a5) Quit (Ping timeout: 480 seconds)
[6:13] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:16] * mfisch (~mfisch@67.79.6.211) Quit (Ping timeout: 480 seconds)
[6:17] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[6:17] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has joined #ceph
[6:25] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[6:26] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[6:31] <aarontc> thanks zhyan_
[6:33] * grepory (foopy@lasziv.reprehensible.net) Quit (Remote host closed the connection)
[6:34] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[6:38] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[6:38] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[6:38] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Remote host closed the connection)
[6:47] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[6:48] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:50] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[6:52] * grepory (foopy@lasziv.reprehensible.net) has joined #ceph
[6:55] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[6:55] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[7:00] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[7:01] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[7:05] * sarob (~sarob@2601:9:7080:13a:ddff:6a09:ef5c:b988) has joined #ceph
[7:17] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:19] * mfisch (~mfisch@67.79.6.211) has joined #ceph
[7:19] <cofol1986> Hey guys, does "filestore journal writeahead" mean that the file will be written to the journal and success returned to the write request, and "filestore journal parallel" mean the file is written to both the journal and disk in parallel before returning success to the write request?
[7:19] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[7:21] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[7:24] * mfisch_ (~mfisch@67.79.6.211) has joined #ceph
[7:25] * themgt (~themgt@pc-188-95-160-190.cm.vtr.net) Quit (Quit: themgt)
[7:27] * mfisch (~mfisch@67.79.6.211) Quit (Ping timeout: 480 seconds)
[7:32] * mfisch_ (~mfisch@67.79.6.211) Quit (Ping timeout: 480 seconds)
[7:36] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[7:46] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[7:54] * sarob (~sarob@2601:9:7080:13a:ddff:6a09:ef5c:b988) Quit (Remote host closed the connection)
[7:54] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:01] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[8:01] * grepory (foopy@lasziv.reprehensible.net) Quit (Remote host closed the connection)
[8:01] * zhyan_ (~zhyan@134.134.137.73) Quit (Remote host closed the connection)
[8:05] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) has joined #ceph
[8:05] * grepory (foopy@lasziv.reprehensible.net) has joined #ceph
[8:10] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[8:10] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[8:12] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[8:13] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[8:15] * zhyan_ (~zhyan@134.134.139.72) has joined #ceph
[8:25] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:30] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[8:30] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[8:31] * mwarwick (~mwarwick@2407:7800:400:1011:3e97:eff:fe91:d9bf) Quit (Quit: Leaving.)
[8:33] * jerrad (~jerrad@pat-global.macpractice.net) has joined #ceph
[8:33] * Underbyte (~jerrad@pat-global.macpractice.net) Quit (Read error: Connection reset by peer)
[8:36] * zhyan_ (~zhyan@134.134.139.72) Quit (Remote host closed the connection)
[8:39] * mattt_ (~textual@94.236.7.190) has joined #ceph
[8:41] * thomnico (~thomnico@2a01:e35:8b41:120:c8a2:e18d:5151:e4dd) has joined #ceph
[8:43] * xan (xan@d.clients.kiwiirc.com) has joined #ceph
[8:43] * zjohnson (~zjohnson@guava.jsy.net) Quit (Ping timeout: 481 seconds)
[8:45] <xan> In this page: http://ceph.com/docs/next/rbd/qemu-rbd/, search 'format'. Run qemu with rbd image, like "qemu -m 1024 -drive format=raw,file=rbd:data/squeeze".
[8:45] <xan> Q: what's the difference of "format=raw" and "format=rbd"?
[8:46] <xan> "format=qcow2" proved wrong with "not in qcow2 format" warning.
[8:47] * Sysadmin88 (~IceChat77@90.208.9.12) Quit (Quit: Always try to be modest, and be proud about it!)
[8:50] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[8:54] * zhyan_ (~zhyan@134.134.137.73) has joined #ceph
[8:55] * codice (~toodles@71-80-186-21.dhcp.lnbh.ca.charter.com) Quit (Read error: Operation timed out)
[8:57] * zjohnson (~zjohnson@guava.jsy.net) has joined #ceph
[8:58] * codice (~toodles@71-80-186-21.dhcp.lnbh.ca.charter.com) has joined #ceph
[8:58] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[8:58] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[8:59] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[8:59] * ChanServ sets mode +v andreask
[9:02] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:07] * philipgian (~philipgia@nat.admin.grnet.gr) has joined #ceph
[9:11] * hjjg (~hg@p3EE30ABA.dip0.t-ipconnect.de) has joined #ceph
[9:16] * xinxinsh (~xinxinsh@134.134.137.73) has joined #ceph
[9:17] * fouxm (~fouxm@185.23.92.11) has joined #ceph
[9:24] * mschiff (~mschiff@p4FCDEE6E.dip0.t-ipconnect.de) has joined #ceph
[9:24] * mfisch_ (~mfisch@67.79.6.211) has joined #ceph
[9:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:28] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[9:29] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: No route to host)
[9:30] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[9:33] * mfisch_ (~mfisch@67.79.6.211) Quit (Ping timeout: 480 seconds)
[9:34] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:37] * mxmln (~mxmln@212.79.49.65) has joined #ceph
[9:39] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[9:45] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[9:50] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[9:52] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: No route to host)
[10:05] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) has joined #ceph
[10:05] * HVT (~root@118.70.170.151) Quit (Read error: Connection reset by peer)
[10:05] * HVT (~root@117.7.237.74) has joined #ceph
[10:08] * zhyan_ (~zhyan@134.134.137.73) Quit (Remote host closed the connection)
[10:08] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:08] * HVT (~root@117.7.237.74) has left #ceph
[10:10] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Remote host closed the connection)
[10:12] * xan (xan@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[10:12] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[10:12] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) Quit (Read error: Operation timed out)
[10:15] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:16] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[10:17] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[10:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[10:29] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[10:35] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:36] * thomnico (~thomnico@2a01:e35:8b41:120:c8a2:e18d:5151:e4dd) Quit (Quit: Ex-Chat)
[10:44] * LeaChim (~LeaChim@host81-159-251-38.range81-159.btcentralplus.com) has joined #ceph
[10:45] * thomnico (~thomnico@2a01:e35:8b41:120:55:8d0f:ff6a:bb8c) has joined #ceph
[10:52] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[11:03] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:04] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[11:07] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[11:08] * pll (~pll@2001:620:20:16:650f:db13:8b33:1b7a) has joined #ceph
[11:08] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[11:11] * halfss (~halfss@111.161.17.74) Quit (Quit: Leaving)
[11:19] * allsystemsarego (~allsystem@86.126.9.60) has joined #ceph
[11:21] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[11:22] * xinxinsh (~xinxinsh@134.134.137.73) Quit (Ping timeout: 480 seconds)
[11:25] * mfisch_ (~mfisch@67.79.6.211) has joined #ceph
[11:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[11:27] * yanzheng (~zhyan@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[11:29] * ScOut3R_ (~ScOut3R@dslC3E4E249.fixip.t-online.hu) has joined #ceph
[11:29] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[11:31] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[11:32] * ScOut3R_ (~ScOut3R@dslC3E4E249.fixip.t-online.hu) Quit (Remote host closed the connection)
[11:33] * ScOut3R (~ScOut3R@dslC3E4E249.fixip.t-online.hu) has joined #ceph
[11:33] * mfisch_ (~mfisch@67.79.6.211) Quit (Ping timeout: 480 seconds)
[11:34] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:34] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:41] * ScOut3R (~ScOut3R@dslC3E4E249.fixip.t-online.hu) Quit (Ping timeout: 480 seconds)
[11:43] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[11:43] * yanzheng (~zhyan@jfdmzpr01-ext.jf.intel.com) Quit (Remote host closed the connection)
[11:43] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[11:45] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Read error: No route to host)
[11:47] * thomnico (~thomnico@2a01:e35:8b41:120:55:8d0f:ff6a:bb8c) Quit (Quit: Ex-Chat)
[11:47] * thomnico (~thomnico@2a01:e35:8b41:120:55:8d0f:ff6a:bb8c) has joined #ceph
[11:47] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[11:51] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[12:02] * ScOut3R (~ScOut3R@212.96.46.212) has joined #ceph
[12:06] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) has joined #ceph
[12:06] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[12:06] * ksingh (~Adium@2001:708:10:10:cc7b:40fe:3ec8:d170) has joined #ceph
[12:09] * jnq (~jon@0001b7cc.user.oftc.net) Quit (Quit: WeeChat 0.3.7)
[12:09] * jnq (~jon@gruidae.jonquinn.com) has joined #ceph
[12:14] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[12:22] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[12:24] * yanzheng (~zhyan@134.134.137.73) has joined #ceph
[12:26] * sarob (~sarob@2601:9:7080:13a:4477:c874:240c:a6bd) has joined #ceph
[12:35] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[12:47] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[12:48] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[13:01] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) has joined #ceph
[13:02] * WarrenUsui (~Warren@2607:f298:a:607:45f8:cc50:ce17:70fa) Quit (Read error: Connection reset by peer)
[13:03] * sarob (~sarob@2601:9:7080:13a:4477:c874:240c:a6bd) Quit (Ping timeout: 480 seconds)
[13:03] * WarrenUsui (~Warren@2607:f298:a:607:45f8:cc50:ce17:70fa) has joined #ceph
[13:08] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[13:25] <ksingh> hello cephers: how to check per-OSD utilization?
[13:25] * mfisch_ (~mfisch@67.79.6.211) has joined #ceph
[13:25] <ksingh> i have 154 OSDs / physical disks in my cluster and want to check how much each disk is utilized
[13:26] * sarob (~sarob@2601:9:7080:13a:ad52:c979:c3a:339c) has joined #ceph
[13:26] <aarontc> ksingh: you can get the data from the bottom of "ceph pg dump"
[13:31] <ksingh> thanks aarontc
[13:31] <ksingh> i am getting a warning health HEALTH_WARN too few pgs per osd (1 < min 20)
[13:31] <ksingh> what is the meaning of (1 < min 20) here
[13:33] * mfisch_ (~mfisch@67.79.6.211) Quit (Ping timeout: 480 seconds)
[13:34] <aarontc> you need to have at least 20 placement groups per OSD
[13:34] <aarontc> is what that means
[13:35] <aarontc> so the sum of placement groups for all your pools together should be at least (154 times 20)
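Spelling out aarontc's arithmetic as a sketch: the warning `(1 < min 20)` means the cluster averages 1 PG per OSD against a floor of 20, so the pools together need at least 154 × 20 placement groups. The power-of-two rounding at the end is common practice when picking `pg_num`, not something stated in this log.

```shell
# "too few pgs per osd (1 < min 20)": the sum of pg_num across all
# pools must reach osds * 20.
osds=154
min_pgs_per_osd=20                       # the "min 20" in the warning
min_total=$((osds * min_pgs_per_osd))
echo "need at least ${min_total} PGs summed across all pools"
# Round up to the next power of two when creating/expanding pools.
pgs=1
while [ "$pgs" -lt "$min_total" ]; do pgs=$((pgs * 2)); done
echo "e.g. pg_num = ${pgs}"
```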
[13:39] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[13:44] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has left #ceph
[13:53] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[14:00] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) has joined #ceph
[14:01] * ninkotech_ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Read error: Connection reset by peer)
[14:03] * sarob (~sarob@2601:9:7080:13a:ad52:c979:c3a:339c) Quit (Ping timeout: 480 seconds)
[14:05] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[14:16] * themgt (~themgt@pc-188-95-160-190.cm.vtr.net) has joined #ceph
[14:16] <bkero> Hi guys. Trying to tune ceph 0.72.0. I have 1 host with a mon and 4 OSDs (single disks, xfs). I'm benchmarking with swift-bench, and these are my numbers. https://pastebin.mozilla.org/3765612 Afterwards I decide to add an SSD journal (files, all OSD journals on same SSD device). My numbers basically get cut in half. https://pastebin.mozilla.org/3766003
[14:17] <bkero> Am I doing something wrong? I expected a performance gain.
[14:18] * thomnico (~thomnico@2a01:e35:8b41:120:55:8d0f:ff6a:bb8c) Quit (Quit: Ex-Chat)
[14:20] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) Quit (Ping timeout: 480 seconds)
[14:21] * thomnico (~thomnico@2a01:e35:8b41:120:55:8d0f:ff6a:bb8c) has joined #ceph
[14:21] * fouxm (~fouxm@185.23.92.11) Quit (Read error: Connection reset by peer)
[14:22] * fouxm (~fouxm@185.23.92.11) has joined #ceph
[14:26] * sarob (~sarob@2601:9:7080:13a:d4ff:2a96:e701:c22c) has joined #ceph
[14:27] * i_m (~ivan.miro@deibp9eh1--blueice3n2.emea.ibm.com) has joined #ceph
[14:31] * mfisch (~mfisch@67.79.6.211) has joined #ceph
[14:31] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:32] * fouxm (~fouxm@185.23.92.11) Quit (Remote host closed the connection)
[14:34] * fouxm (~fouxm@185.23.92.11) has joined #ceph
[14:35] * alfredodeza (~alfredode@198.206.133.89) Quit (Quit: ZNC - http://znc.in)
[14:38] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[14:39] * mschiff (~mschiff@p4FCDEE6E.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[14:39] * nyerup (irc@jespernyerup.dk) Quit (Server closed connection)
[14:39] * nyerup (irc@jespernyerup.dk) has joined #ceph
[14:44] * mfisch (~mfisch@67.79.6.211) Quit (Remote host closed the connection)
[14:46] * brother (foobaz@2a01:7e00::f03c:91ff:fe96:ab16) Quit (Server closed connection)
[14:46] * brother (foobaz@vps1.hacking.dk) has joined #ceph
[14:49] * ofu (ofu@dedi3.fuckner.net) Quit (Server closed connection)
[14:49] * ofu (ofu@dedi3.fuckner.net) has joined #ceph
[14:49] * asadpanda (~asadpanda@67.231.236.80) Quit (Server closed connection)
[14:50] * asadpanda (~asadpanda@67.231.236.80) has joined #ceph
[14:50] * rBEL (robbe@november.openminds.be) Quit (Server closed connection)
[14:50] * rBEL (robbe@november.openminds.be) has joined #ceph
[14:51] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:54] * diegows (~diegows@200.68.116.185) has joined #ceph
[14:56] * Djinh (~alexlh@ardbeg.funk.org) Quit (Server closed connection)
[14:56] * Djinh (~alexlh@ardbeg.funk.org) has joined #ceph
[14:57] * twx (~twx@rosamoln.org) Quit (Server closed connection)
[14:57] * twx (~twx@rosamoln.org) has joined #ceph
[14:59] * hjjg_ (~hg@p3EE31DDB.dip0.t-ipconnect.de) has joined #ceph
[15:01] * hjjg (~hg@p3EE30ABA.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[15:01] * yanzheng (~zhyan@134.134.137.73) Quit (Remote host closed the connection)
[15:03] * sarob (~sarob@2601:9:7080:13a:d4ff:2a96:e701:c22c) Quit (Ping timeout: 480 seconds)
[15:06] * dxd828 (~dxd828@host217-43-217-142.range217-43.btcentralplus.com) has joined #ceph
[15:08] * jochen_ (~jochen@laevar.de) Quit (Server closed connection)
[15:08] * jochen (~jochen@laevar.de) has joined #ceph
[15:12] * yo61 (~yo61@lin001.yo61.net) Quit (Server closed connection)
[15:12] * yo61 (~yo61@lin001.yo61.net) has joined #ceph
[15:13] * dxd828 (~dxd828@host217-43-217-142.range217-43.btcentralplus.com) Quit (Quit: Computer has gone to sleep.)
[15:13] * wattsmarcus5 (~mdw@aa2.linuxbox.com) Quit (Server closed connection)
[15:15] * wattsmarcus5 (~mdw@aa2.linuxbox.com) has joined #ceph
[15:16] * gmeno (~gmeno@216.1.187.162) has joined #ceph
[15:21] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[15:21] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[15:23] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[15:26] * joao|lap (~JL@a79-168-11-205.cpe.netcabo.pt) has joined #ceph
[15:26] * ChanServ sets mode +o joao|lap
[15:26] * sarob (~sarob@2601:9:7080:13a:c049:59b:9af4:3267) has joined #ceph
[15:26] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:31] * DarkAceZ (~BillyMays@50-32-28-57.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[15:32] * gmeno (~gmeno@216.1.187.162) Quit (Remote host closed the connection)
[15:33] * psieklFH (psiekl@wombat.eu.org) Quit (Server closed connection)
[15:33] * psieklFH (psiekl@wombat.eu.org) has joined #ceph
[15:38] * SubOracle (~quassel@00019f1e.user.oftc.net) Quit (Server closed connection)
[15:38] * SubOracle (~quassel@app-20945-lnd-gb.cws.io) has joined #ceph
[15:39] * DarkAceZ (~BillyMays@50-32-28-57.drr01.hrbg.pa.frontiernet.net) Quit (Read error: Connection reset by peer)
[15:40] * BillK (~BillK-OFT@106-68-121-202.dyn.iinet.net.au) Quit (Read error: Connection reset by peer)
[15:42] * sarob (~sarob@2601:9:7080:13a:c049:59b:9af4:3267) Quit (Ping timeout: 480 seconds)
[15:43] * BillK (~BillK-OFT@124-169-79-165.dyn.iinet.net.au) has joined #ceph
[15:43] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[15:46] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (Server closed connection)
[15:46] * sileht (~sileht@gizmo.sileht.net) Quit (Server closed connection)
[15:47] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[15:47] * todin (tuxadero@kudu.in-berlin.de) Quit (Server closed connection)
[15:47] * sileht (~sileht@gizmo.sileht.net) has joined #ceph
[15:47] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[15:47] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[15:49] * gmeno (~gmeno@216.1.187.162) has joined #ceph
[15:52] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[15:53] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[15:53] * mfisch (~mfisch@of1-nat2.aus1.rackspace.com) has joined #ceph
[15:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:56] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[15:58] * Elbandi_ (~ea333@elbandi.net) Quit (Server closed connection)
[15:58] * Elbandi (~ea333@elbandi.net) has joined #ceph
[16:01] * DarkAceZ (~BillyMays@50-32-43-152.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[16:01] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[16:01] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[16:02] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit ()
[16:02] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[16:03] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit ()
[16:03] * joao (~joao@a79-168-11-205.cpe.netcabo.pt) Quit (Server closed connection)
[16:04] * joao (~joao@a79-168-11-205.cpe.netcabo.pt) has joined #ceph
[16:04] * ChanServ sets mode +o joao
[16:04] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[16:06] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) has joined #ceph
[16:06] * glzhao (~glzhao@118.195.65.67) has joined #ceph
[16:07] * nwl (~levine@atticus.yoyo.org) Quit (Server closed connection)
[16:07] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[16:07] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Remote host closed the connection)
[16:07] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[16:07] * lukhas (~lucas@rincevent.net) has joined #ceph
[16:08] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[16:08] * ChanServ sets mode +v andreask
[16:08] <lukhas> hello
[16:10] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit ()
[16:10] * DarkAceZ (~BillyMays@50-32-43-152.drr01.hrbg.pa.frontiernet.net) Quit (Ping timeout: 480 seconds)
[16:10] <lukhas> we're looking into Ceph for our next storage infrastructure, and I've got a few questions
[16:11] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[16:12] <lukhas> namely, do you have a timeline for stable Xen support of the block mode?
[16:13] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Server closed connection)
[16:13] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[16:13] * toabctl (~toabctl@toabctl.de) Quit (Server closed connection)
[16:14] * toabctl (~toabctl@toabctl.de) has joined #ceph
[16:14] * glzhao (~glzhao@118.195.65.67) Quit (Remote host closed the connection)
[16:14] * kbader (~Adium@pool-72-67-192-30.lsanca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[16:15] * renzhi (~renzhi@ec2-54-249-150-103.ap-northeast-1.compute.amazonaws.com) Quit (Ping timeout: 480 seconds)
[16:15] * markbby (~Adium@168.94.245.3) has joined #ceph
[16:16] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (Server closed connection)
[16:17] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[16:20] * glzhao (~glzhao@118.195.65.67) has joined #ceph
[16:24] * vata (~vata@2607:fad8:4:6:4825:d168:6ccf:f31e) has joined #ceph
[16:25] * dalegaard (~dalegaard@vps.devrandom.dk) Quit (Server closed connection)
[16:25] * dalegaard (~dalegaard@vps.devrandom.dk) has joined #ceph
[16:26] * sarob (~sarob@2601:9:7080:13a:d5e:ec60:37fc:333) has joined #ceph
[16:26] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[16:28] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[16:32] * thomnico (~thomnico@2a01:e35:8b41:120:55:8d0f:ff6a:bb8c) Quit (Ping timeout: 480 seconds)
[16:34] * ninkotech__ (~duplo@217-112-170-132.adsl.avonet.cz) Quit (Ping timeout: 480 seconds)
[16:35] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:39] * tobru (~quassel@2a02:41a:3999::94) Quit (Server closed connection)
[16:39] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[16:43] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Server closed connection)
[16:44] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[16:45] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Server closed connection)
[16:46] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[16:49] * garphy (~garphy@frank.zone84.net) has joined #ceph
[16:49] * clayb (~kvirc@proxy-ny2.bloomberg.com) has joined #ceph
[16:51] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Server closed connection)
[16:52] * JC1 (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[16:53] * hijacker (~hijacker@213.91.163.5) Quit (Server closed connection)
[16:53] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[16:54] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[16:54] * glambert (~glambert@37.157.50.80) has joined #ceph
[16:55] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) Quit (Ping timeout: 480 seconds)
[16:55] <glambert> with ceph fs, can you mount a specific pool that you would access via the s3 gateway?
[16:55] * hjjg_ (~hg@p3EE31DDB.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[16:57] * jerrad (~jerrad@pat-global.macpractice.net) Quit (Quit: Linkinus - http://linkinus.com)
[16:57] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[16:58] * thebigm (~thebigm@et-0-29.gw-nat.bs.kae.de.oneandone.net) Quit (Server closed connection)
[17:00] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Server closed connection)
[17:00] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[17:01] * thebigm (~thebigm@2001:8d8:1fe:7:a6ba:dbff:fefc:c429) has joined #ceph
[17:03] * sarob (~sarob@2601:9:7080:13a:d5e:ec60:37fc:333) Quit (Ping timeout: 480 seconds)
[17:04] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[17:04] * BillK (~BillK-OFT@124-169-79-165.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[17:05] * tomaw (tom@tomaw.netop.oftc.net) Quit (Quit: Quit)
[17:06] * tomaw (tom@basil.tomaw.net) has joined #ceph
[17:07] * Underbyte (~jerrad@pat-global.macpractice.net) has joined #ceph
[17:07] * DarkAceZ (~BillyMays@50-32-19-144.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[17:10] * scuttlemonkey (~scuttlemo@173-228-7-214.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[17:10] * LeaChim (~LeaChim@host81-159-251-38.range81-159.btcentralplus.com) Quit (Server closed connection)
[17:10] * LeaChim (~LeaChim@host81-159-251-38.range81-159.btcentralplus.com) has joined #ceph
[17:11] <jerker> mount the same data both via S3 gateway and via CephFS? I did not think that was possible. (I'm only a user.)
[17:11] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Server closed connection)
[17:11] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[17:11] * garphy is now known as garphy`aw
[17:12] * glzhao_ (~glzhao@118.195.65.67) has joined #ceph
[17:12] * tomaw (tom@tomaw.netop.oftc.net) Quit (Quit: Quit)
[17:13] * tomaw (tom@tomaw.netop.oftc.net) has joined #ceph
[17:13] * glzhao (~glzhao@118.195.65.67) Quit (Ping timeout: 480 seconds)
[17:14] <linuxkidd_> glambert: jerker is correct. They are two different data storage techniques. CephFS and S3/Swift objects are not compatible types and therefore cannot access each other's data within the cluster.
[17:14] <glambert> linuxkidd_, ok thanks
[17:15] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Ping timeout: 480 seconds)
[17:15] <linuxkidd_> From a design perspective, this wouldn't really make sense. Object storage is a flat structure with names = data... CephFS is a filesystem hierarchy with potentially many levels of directories, files, etc..
[17:15] * linuxkidd_ is now known as linuxkidd
[17:16] * yanzheng (~zhyan@134.134.137.73) has joined #ceph
[17:16] <linuxkidd> glambert: no worries.. happy to help
[17:16] <glambert> I've mounted cephfs but I'm getting errors whilst trying to create an image there
[17:16] <glambert> /usr/bin/kvm-img create -f qcow2 /mnt/ceph/test.img 10485760K
[17:16] <glambert> Formatting '/mnt/ceph/test.img', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[17:16] <glambert> kvm-img: /mnt/ceph/test.img: error while creating qcow2: Operation not permitted
[17:17] <glambert> Logged in as root, so it isn't a permissions issue unless it's specific to ceph perhaps?
[17:17] <glambert> If I change the path to /tmp or something it works fine
[17:17] <linuxkidd> first, if you're wanting to do image storage, I wouldn't do that on top of cephfs..
[17:17] <linuxkidd> cephfs isn't considered stable for production use just yet.. however Ceph's RBD is...
[17:17] * yanzheng (~zhyan@134.134.137.73) Quit (Remote host closed the connection)
[17:18] <glambert> Testing out webvirtmgr at the moment and that doesn't have as many options for storage as other things
[17:18] * glzhao_ (~glzhao@118.195.65.67) Quit (Remote host closed the connection)
[17:18] * glzhao (~glzhao@118.195.65.67) has joined #ceph
[17:18] <linuxkidd> ah... ok, as long as it's for testing / non-prod...
[17:19] * gmeno (~gmeno@216.1.187.162) Quit (Ping timeout: 480 seconds)
[17:19] <linuxkidd> can you write other files into cephfs?
[17:19] <glambert> linuxkidd, initially, yes just for testing/evaluation but if it works well will consider for production
[17:19] <linuxkidd> e.g. touch /mnt/ceph/testfile
[17:19] <glambert> yes that worked, no errors
[17:19] * Hakisho (~Hakisho@0001be3c.user.oftc.net) has joined #ceph
[17:20] <glambert> I now have testfile and test.img at 0 bytes in there
[17:20] <linuxkidd> what happens if you create the image locally, then move it to the cephfs mount?
[17:20] <linuxkidd> also.. are there any messages in your syslog or /var/log/ceph/ceph-mds*.log files about the failure?
[17:20] <linuxkidd> and... what distro / kernel are you running
[17:20] <glambert> linuxkidd, moving from local worked
[17:20] <linuxkidd> ?
[17:21] <glambert> ubuntu 13.04 across the board
[17:21] <linuxkidd> k
[17:21] <linuxkidd> and kernel version?
[17:22] <glambert> nothing in logs btw
[17:23] <glambert> kernel = 3.8.0-30-generic
[17:23] <linuxkidd> k
[17:23] <glambert> on the server /mnt/ceph is mounted at
[17:23] <glambert> the monitor server / metadata server in the ceph cluster is on kernel 3.2.0-55-generic
[17:24] <linuxkidd> k..
[17:24] <glambert> actually on 12.04 on those servers in cluster, my mistake
[17:24] * Pedras (~Adium@172.56.16.25) has joined #ceph
[17:24] <linuxkidd> so, how long does the kvm-img create run before it bombs?
[17:24] <glambert> immediately
[17:25] <linuxkidd> k..
[17:25] <linuxkidd> try this and pastebin the output for me...
[17:25] <linuxkidd> strace -f -s 200 /usr/bin/kvm-img create -f qcow2 /mnt/ceph/test.img 10485760K
[17:25] <linuxkidd> you may need to change the image name or delete the existing image first..
[17:26] <glambert> ok
[17:26] <pmatulis2> prolly want to send that to a file (-o output_file)
[17:26] * sarob (~sarob@2601:9:7080:13a:3907:5baa:6da1:d832) has joined #ceph
[17:26] <linuxkidd> ya.. thx.. :)
[17:26] <linuxkidd> or &> strace.out at the end
[17:27] <linuxkidd> strace -f -s 200 /usr/bin/kvm-img create -f qcow2 /mnt/ceph/test.img 10485760K &> strace.out
[17:27] <linuxkidd> :)
[17:28] <glambert> is there a way to write the strace output to file?
[17:28] <glambert> too much to capture on my shell
[17:29] <glambert> ah
[17:29] <glambert> ^
[17:29] <linuxkidd> Ya.. apologies on that...
[17:30] * mfisch (~mfisch@of1-nat2.aus1.rackspace.com) Quit (Quit: Leaving)
[17:31] <glambert> http://pastebin.com/jHZ2FC19
[17:33] <linuxkidd> reviewing...
[17:33] <glambert> thanks
[17:33] * gmeno (~gmeno@216.1.187.162) has joined #ceph
[17:34] * sarob (~sarob@2601:9:7080:13a:3907:5baa:6da1:d832) Quit (Ping timeout: 480 seconds)
[17:37] * haomaiwang (~haomaiwan@119.6.71.174) has joined #ceph
[17:38] <linuxkidd> so.. it seems the kvm-img command spawns a new process.. the new process attempts to 'pread' (or read at a specific offset without modifying the actual position pointer).. and the pread generates the permission error
[17:38] <linuxkidd> unfortunately... I'm not quite sure what to make of that..
[17:42] * haomaiwa_ (~haomaiwan@119.6.74.95) Quit (Ping timeout: 480 seconds)
[17:43] <glambert> linuxkidd, ok, thanks anyway
[17:45] * Gamekiller77 (~Gamekille@128-107-239-233.cisco.com) has joined #ceph
[17:45] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:50] * sagelap (~sage@2600:1012:b010:7f08:40f9:2be6:5364:272a) has joined #ceph
[17:51] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[17:52] * Sysadmin88 (~IceChat77@90.208.9.12) has joined #ceph
[17:52] <linuxkidd> glambert: However, nothing you've said would rule out using the (much more stable) RBD capability..
[17:53] <linuxkidd> You would create an RBD image within Ceph, then it mounts just like any other block device (after issuing rbd map on the client system)
[17:53] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:53] <linuxkidd> once it's mounted, then you would create your kvm images inside that
[17:53] <glambert> can you mount rbd?
[17:53] * mxmln (~mxmln@212.79.49.65) Quit (Quit: mxmln)
[17:53] <linuxkidd> yep
[17:54] <linuxkidd> it's a block device
[17:54] * Pedras (~Adium@172.56.16.25) Quit (Quit: Leaving.)
[17:54] <linuxkidd> you map it, format it, then mount it.. just like another HDD
[17:54] <linuxkidd> except, the data is striped throughout your ceph cluster
[17:54] <linuxkidd> RBD = Rados Block Device
[17:55] <glambert> ok, how do I mount it at /mnt/ceph? I have to provide a directory for storage in webvirtmgr
[17:55] * markbby (~Adium@168.94.245.3) has joined #ceph
[17:55] <linuxkidd> here's a quick / dirty guide for it..
[17:55] <linuxkidd> http://ceph.com/docs/master/start/quick-rbd/
[17:56] <linuxkidd> In a production deployment, I would generate a separate key for RBD instead of using the admin key...
[17:56] <linuxkidd> but aside from that, the above link should get you started
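A dry-run sketch of the map/format/mount workflow linuxkidd describes (and the linked quick-rbd guide covers). The pool, image name, size, and mountpoint are assumptions, not values from the log, and the commands are echoed rather than executed since they need a live cluster and root privileges.

```shell
# RBD workflow sketch: create an image, map it to a /dev/rbd/... block
# device, format it, mount it — "just like another HDD".
pool=rbd                     # assumed pool name
image=vmstore                # assumed image name
size_mb=$((100 * 1024))      # assumed size: 100 GB, rbd sizes are in MB
mnt=/mnt/ceph                # mountpoint glambert mentions above
for cmd in \
  "rbd create ${image} --size ${size_mb} --pool ${pool}" \
  "rbd map ${image} --pool ${pool}" \
  "mkfs.ext4 /dev/rbd/${pool}/${image}" \
  "mount /dev/rbd/${pool}/${image} ${mnt}"
do
  echo "$cmd"                # dry run: print each step instead of running it
done
```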
[17:58] * gmeno (~gmeno@216.1.187.162) Quit (Ping timeout: 480 seconds)
[17:58] <glambert> right
[17:59] <glambert> so I create a huge block device in a pool to then store all of my images etc. in and mount that block device on other servers?
[17:59] * kbader (~Adium@38.122.20.226) has joined #ceph
[18:01] * gmeno (~gmeno@38.122.20.226) has joined #ceph
[18:01] <glambert> only ever used Cloudstack with Ceph but Cloudstack is becoming a bit of a pain so we're evaluating other options at the mo
[18:01] * angdraug (~angdraug@64-79-127-122.static.wiline.com) has joined #ceph
[18:01] * mattt_ (~textual@94.236.7.190) Quit (Quit: Computer has gone to sleep.)
[18:02] * aliguori (~anthony@74.202.210.82) has joined #ceph
[18:02] * glzhao (~glzhao@118.195.65.67) Quit (Quit: leaving)
[18:03] * Pedras (~Adium@216.207.42.132) has joined #ceph
[18:03] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[18:03] * ChanServ sets mode +v andreask
[18:05] <glambert> linuxkidd, does this look right? http://pastebin.com/pG8mVkQZ
[18:06] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[18:09] * gmeno (~gmeno@38.122.20.226) Quit (Remote host closed the connection)
[18:10] * gmeno (~gmeno@38.122.20.226) has joined #ceph
[18:10] * ScOut3R (~ScOut3R@212.96.46.212) Quit (Ping timeout: 480 seconds)
[18:10] * jksM (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[18:10] <linuxkidd> glambert: You'll need to copy over the /etc/ceph/ceph.conf and the keyring to the remote server's own /etc/ceph folder
[18:11] <linuxkidd> And the 'map' part will have to occur on the remote server prior to mounting
[18:11] <linuxkidd> so..
[18:11] * haomaiwang (~haomaiwan@119.6.71.174) Quit (Remote host closed the connection)
[18:11] <linuxkidd> lemme get you some more concise instructions
[18:11] <glambert> linuxkidd, the remote server doesn't have ceph installed
[18:12] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[18:13] <linuxkidd> it only needs the rbd module and tool... then the two items I noted above in /etc/ceph
[18:13] <linuxkidd> http://pastebin.com/qgtE5Db3
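A sketch of the remote-client prerequisites linuxkidd lists above: install the rbd tooling (ceph-common, per the later discussion), copy ceph.conf and the keyring into the client's /etc/ceph, then map before mounting. This is not the contents of the pastebin (which isn't in the log), the monitor hostname is hypothetical, and commands are echoed as a dry run.

```shell
# Remote RBD client setup sketch: tooling + /etc/ceph config, then map.
monhost=ceph-mon1                          # hypothetical monitor/admin host
echo "apt-get install ceph-common"         # provides the rbd tool and module deps
for f in ceph.conf ceph.client.admin.keyring; do
  echo "scp ${monhost}:/etc/ceph/${f} /etc/ceph/${f}"
done
echo "rbd map <image> --pool <pool>"       # map on the client, then mount
```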
[18:14] * cfreak201 (~cfreak200@p4FF3EF60.dip0.t-ipconnect.de) has joined #ceph
[18:15] * gmeno (~gmeno@38.122.20.226) Quit (Remote host closed the connection)
[18:16] * gmeno (~gmeno@38.122.20.226) has joined #ceph
[18:16] * cfreak200 (~cfreak200@p4FF3EF60.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[18:17] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (Ping timeout: 480 seconds)
[18:18] <glambert> linuxkidd, what do I need to install for rbd?
[18:18] <glambert> on the remote node
[18:19] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:21] * lofejndif (~lsqavnbok@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[18:27] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:27] * BillK (~BillK-OFT@124-169-79-165.dyn.iinet.net.au) has joined #ceph
[18:29] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[18:30] * ircolle (~Adium@38.122.20.226) has joined #ceph
[18:31] * sagelap (~sage@2600:1012:b010:7f08:40f9:2be6:5364:272a) Quit (Ping timeout: 480 seconds)
[18:33] * jcsp (~jcsp@0001bf3a.user.oftc.net) has joined #ceph
[18:33] * philipgian (~philipgia@nat.admin.grnet.gr) Quit (Ping timeout: 480 seconds)
[18:34] <linuxkidd> glambert: lemme look, but I believe it's simply 'apt-get install rbd'
[18:34] * gmeno (~gmeno@38.122.20.226) Quit (Ping timeout: 480 seconds)
[18:34] <glambert> no package found for me
[18:34] <linuxkidd> actually, looks like it's provided by the 'ceph' package
[18:34] <linuxkidd> or, ceph-common
[18:34] <linuxkidd> ya, try ceph-common
[18:34] <glambert> linuxkidd, I'm going to have to leave now but I'll log back in later and carry on hopefully
[18:35] <glambert> thanks for your help though
[18:35] <linuxkidd> k.. np
[18:35] * alram (~alram@38.122.20.226) has joined #ceph
[18:36] * nwat (~textual@eduroam-247-164.ucsc.edu) has joined #ceph
[18:36] * diegows (~diegows@200.68.116.185) Quit (Ping timeout: 480 seconds)
[18:39] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[18:42] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[18:42] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[18:52] * DarkAce-Z (~BillyMays@50-32-22-111.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[18:52] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[18:53] * DarkAceZ (~BillyMays@50-32-19-144.drr01.hrbg.pa.frontiernet.net) Quit (Read error: Operation timed out)
[18:54] * fouxm (~fouxm@185.23.92.11) Quit (Remote host closed the connection)
[19:03] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[19:05] * gmeno (~gmeno@38.122.20.226) has joined #ceph
[19:07] * codice (~toodles@71-80-186-21.dhcp.lnbh.ca.charter.com) Quit (Read error: Connection reset by peer)
[19:07] * codice (~toodles@71-80-186-21.dhcp.lnbh.ca.charter.com) has joined #ceph
[19:08] * i_m (~ivan.miro@deibp9eh1--blueice3n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[19:13] <ksingh> is German Anders online here
[19:23] * gmeno (~gmeno@38.122.20.226) Quit (Remote host closed the connection)
[19:27] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[19:28] * BillK (~BillK-OFT@124-169-79-165.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[19:32] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[19:33] * GermanAnders (~oftc-webi@190.18.55.15) has joined #ceph
[19:34] <GermanAnders> hi ksingh
[19:39] * madkiss (~madkiss@88.128.80.2) has joined #ceph
[19:45] * madkiss (~madkiss@88.128.80.2) Quit (Remote host closed the connection)
[19:46] * madkiss (~madkiss@88.128.80.2) has joined #ceph
[19:48] * gmeno (~gmeno@38.122.20.226) has joined #ceph
[19:53] <GermanAnders> hi to all, someone could give me a hand with a problem on my ceph cluster
[19:53] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[19:58] <janos> GermanAnders: it's more likely you'll get help if you state the nature of the problem - different people here with different areas of knowledge
[19:58] <xarses> GermanAnders: start talking... We'll jump in if we can
[19:58] <janos> that said, we could all just be a bunch of idlers!
[19:59] * xarses idles his idle
[19:59] * gmeno (~gmeno@38.122.20.226) Quit (Read error: Connection reset by peer)
[19:59] <xarses> o/
[19:59] <kraken> \o
[19:59] * gmeno (~gmeno@38.122.20.226) has joined #ceph
[20:01] * gianni (~gianni@adsl-ull-185-8.44-151.net24.it) has joined #ceph
[20:02] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:02] * itamar_ (~congfoo@37.26.146.176) has joined #ceph
[20:03] * itamar_ (~congfoo@37.26.146.176) Quit ()
[20:05] <GermanAnders> ok thanks, the problem i had is that someone from support on my team tried to add a new MON on the cluster
[20:05] <GermanAnders> unfortunately with no luck.. and then we couldn't enter the ceph cluster anymore
[20:05] <GermanAnders> if i run a "ceph" command from any node, it freezes and i have to kill the process with ctrl+c
[20:06] <Pedras> anyone can pitch in on why 2 different cephfs clients would have different versions of the directory tree, say different ls -l listing at the root of the fs
[20:06] <janos> can't enter the cluster - sounds like no longer have quorum
[20:06] <janos> how many Mons did you have prior to this?
[20:06] * slang (~slang@pat.hitachigst.com) has joined #ceph
[20:08] <GermanAnders> there was only one mon daemon running, and when trying to add the second one it failed
[20:08] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[20:09] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:09] <xarses> GermanAnders: It looks like you don't have public_network set, which i understand is required to add monitors
[20:10] <xarses> also the output shows that the monitor on ceph-01 is dead
[20:10] <GermanAnders> how can i add the public_network?
[20:11] * codice_ (~toodles@71-80-186-21.dhcp.lnbh.ca.charter.com) has joined #ceph
[20:11] * codice (~toodles@71-80-186-21.dhcp.lnbh.ca.charter.com) Quit (Read error: Connection reset by peer)
[20:11] <xarses> GermanAnders: add it to the global section on all monitors
[20:11] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) has joined #ceph
[20:12] <xarses> it could look like "public_network = 10.111.82.0/24"
[20:13] <GermanAnders> thanks, so i will add that entry to every ceph.conf file and then do a service ceph restart on every node
[20:13] <xarses> ya, create-keys is running which implies that the cluster isn't in quorum, odd that it's still running on ceph-node01 not 02
[20:13] <xarses> GermanAnders: yes, but start your monitors 01 first then 02
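By way of illustration, the resulting [global] section might look something like this (the subnet is the one from xarses' example; the fsid, hostname, and monitor address are placeholders for your own cluster):

```ini
[global]
fsid = <your-cluster-fsid>
mon initial members = ceph-node01
mon host = 10.111.82.242
public_network = 10.111.82.0/24
```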
[20:14] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[20:17] * ksingh (~Adium@2001:708:10:10:cc7b:40fe:3ec8:d170) Quit (Quit: Leaving.)
[20:17] * scuttlemonkey (~scuttlemo@pat.hitachigst.com) has joined #ceph
[20:17] * ChanServ sets mode +o scuttlemonkey
[20:17] <GermanAnders> root@ceph-node01:/tmp# service ceph restart
[20:17] <GermanAnders> === mon.ceph-node01 ===
[20:17] <GermanAnders> === mon.ceph-node01 ===
[20:17] <GermanAnders> Stopping Ceph mon.ceph-node01 on ceph-node01...done
[20:17] <GermanAnders> === mon.ceph-node01 ===
[20:17] <GermanAnders> Starting Ceph mon.ceph-node01 on ceph-node01...
[20:17] <GermanAnders> failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i ceph-node01 --pid-file /var/run/ceph/mon.ceph-node01.pid -c /etc/ceph/ceph.conf '
[20:17] <GermanAnders> Starting ceph-create-keys on ceph-node01...
[20:20] * jjgalvez (~jjgalvez@pat.hitachigst.com) has joined #ceph
[20:20] * robbat2 (~robbat2@2001:470:e889:1:212:32ff:fe00:221e) has joined #ceph
[20:22] <robbat2> how do I list users that exist in rados? there doesn't seem to be any 'user list' operation
[20:22] * mxmln (~mxmln@195.222.244.63) has joined #ceph
[20:23] <GermanAnders> no luck
[20:24] <GermanAnders> also the log from the mon on node01 said:
[20:24] <GermanAnders> 2013-12-13 14:23:53.149081 7f3b9a265700 1 mon.ceph-node01@0(probing) e2 _ms_dispatch dropping stray message mon_subscribe({monmap=0+,osdmap=30}) from client.4142 10.111.82.244:0/2686418903
[20:24] <GermanAnders> 2013-12-13 14:23:53.149094 7f3b9a265700 0 ms_deliver_dispatch: unhandled message 0x2254a80 mon_subscribe({monmap=0+,osdmap=30}) from client.4142 10.111.82.244:0/2686418903
[20:24] <GermanAnders> 2013-12-13 14:23:59.277257 7f3b97951700 0 -- 10.111.82.242:6789/0 >> 10.111.82.244:0/26864189
[20:24] * gregsfortytwo1 (~Adium@2607:f298:a:607:882e:231e:b0c3:e7fe) has joined #ceph
[20:25] * gmeno (~gmeno@38.122.20.226) Quit (Remote host closed the connection)
[20:29] * madkiss (~madkiss@88.128.80.2) Quit (Quit: Leaving.)
[20:32] * aliguori (~anthony@74.202.210.82) Quit (Remote host closed the connection)
[20:35] * diegows (~diegows@190.190.16.126) has joined #ceph
[20:39] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Ping timeout: 480 seconds)
[20:40] * gianni (~gianni@adsl-ull-185-8.44-151.net24.it) Quit (Quit: Leaving)
[20:41] * gregsfortytwo1 (~Adium@2607:f298:a:607:882e:231e:b0c3:e7fe) Quit (Quit: Leaving.)
[20:44] <GermanAnders> and also the create-keys is still present when i do a ps aux | grep create-keys
[20:45] * gregsfortytwo1 (~Adium@2607:f298:a:607:9006:2412:aed6:9f2e) has joined #ceph
[20:45] <GermanAnders> so how can i add a new mon to the cluster? because i can't run any "ceph" command - ceph-mon commands work, but plain "ceph" hangs every time..
[20:48] <pmatulis2> GermanAnders: have you tried 'ceph-deploy install <mon>' and 'ceph-deploy mon create <mon>' ?
[20:48] <GermanAnders> i will try to do that now and see if it works
[20:49] <pmatulis2> GermanAnders: after having put in the 'public_network' in all nodes' ceph.conf
[20:50] <GermanAnders> yes, i've already put that in the ceph.conf file on all nodes
[20:50] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[20:55] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) has joined #ceph
[20:55] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Quit: Leaving.)
[20:59] <xarses> GermanAnders: you will probably need to kill the old create-keys processes
[21:01] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:01] * kbader (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[21:07] * ksingh (~Adium@teeri.csc.fi) has joined #ceph
[21:10] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) has joined #ceph
[21:12] <GermanAnders> do i need to first add the [mon.ceph-node02] entry for the new mon in the ceph.conf file? or do i first need to run the ceph-deploy mon create command?
[21:12] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) has joined #ceph
[21:14] * ksingh (~Adium@teeri.csc.fi) has left #ceph
[21:14] * ksingh (~Adium@teeri.csc.fi) has joined #ceph
[21:14] * pll (~pll@2001:620:20:16:650f:db13:8b33:1b7a) Quit (Remote host closed the connection)
[21:15] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has joined #ceph
[21:15] <ksingh> hi german , for quorum you need 1 or 3 monitor nodes (an odd number)
[21:16] <ksingh> so if you have 2 mons, then you will not be able to connect to the ceph cluster
[21:17] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has left #ceph
[21:17] <ksingh> so you have to add 2 more monitor nodes now to make it live
[21:18] <pmatulis2> GermanAnders: no, ceph-deploy does not need the name
[21:18] <pmatulis2> (in ceph.conf)
[21:19] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) Quit (Read error: Operation timed out)
[21:19] <ksingh> hello pmatulis :-)
[21:19] <GermanAnders> the problem is that actually i don't have any working monitor
[21:19] <GermanAnders> since the only monitor, on ceph-node01, seems not to be working at all
[21:20] <pmatulis2> good day ksingh
[21:21] <ksingh> try adding 2 more monitors using ceph-deploy mon create
[21:21] <ksingh> after that, check the status of the monitor services on all 3 monitors - all should come up
[21:22] <ksingh> OR else you need to remove the other monitor that your support team member added unsuccessfully
[21:24] <GermanAnders> but how can i remove the 'new' mon if i could not run ceph?
[21:25] <pmatulis2> GermanAnders: stop the process if running and 'ceph-deploy mon destroy <mon>'
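As a rough sketch of that sequence (assuming upstart on Ubuntu, as elsewhere in this log; `ceph-node02` stands in for whatever hostname the failed monitor was created with):

```shell
# on the node with the broken monitor: make sure the daemon is stopped
sudo stop ceph-mon id=ceph-node02 || true

# from the admin node's ceph-deploy working directory: remove it from the cluster
ceph-deploy mon destroy ceph-node02
```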
[21:28] * xdeller (~xdeller@91.218.144.129) Quit (Quit: Leaving)
[21:30] <GermanAnders> thanks ksingh, i've already destroyed the 'new' mon
[21:31] <GermanAnders> but now when i run "service ceph restart" on ceph-node01, i get:
[21:31] <GermanAnders> root@ceph-node01:/home/ceph/ceph-cluster# service ceph restart
[21:31] <GermanAnders> === mon.ceph-node01 ===
[21:31] <GermanAnders> === mon.ceph-node01 ===
[21:31] <GermanAnders> Stopping Ceph mon.ceph-node01 on ceph-node01...done
[21:31] <GermanAnders> === mon.ceph-node01 ===
[21:31] <GermanAnders> Starting Ceph mon.ceph-node01 on ceph-node01...
[21:31] <GermanAnders> failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i ceph-node01 --pid-file /var/run/ceph/mon.ceph-node01.pid -c /etc/ceph/ceph.conf '
[21:31] <GermanAnders> Starting ceph-create-keys on ceph-node01...
[21:31] <GermanAnders> root@ceph-node01:/home/ceph/ceph-cluster#
[21:31] <GermanAnders> in the log file:
[21:31] <GermanAnders> 2013-12-13 15:30:45.091536 7fc97b4b6780 0 ceph version 0.72.1 (4d923861868f6a15dcb33fef7f50f674997322de), process ceph-mon, pid 54924
[21:31] <GermanAnders> 2013-12-13 15:30:45.098520 7fc97b4b6780 -1 failed to create new leveldb store
[21:31] <GermanAnders> 2013-12-13 15:31:00.052934 7f36e8f66700 0 mon.ceph-node01@0(probing).data_health(0) update_stats avail 93% total 156018588 used 1612628 avail 146480612
[21:32] <pmatulis2> yeah, a cluster with no working monitor is a bit of a pain
[21:33] <pmatulis2> that's why one creates a cluster and the initial monitors simultaneously
[21:34] <pmatulis2> GermanAnders: i recommend you post to the ceph-users mailing list
[21:39] <ksingh> check mon logs and put output on mailing list
[21:40] <GermanAnders> i've successfully run the ceph-deploy mon create command for ceph-node03, but i think that now i need to activate or start it, right?
[21:40] <GermanAnders> [ceph-node03][WARNIN] ceph-node03 is not defined in `mon initial members` [ceph-node03][WARNIN] monitor ceph-node03 does not exist in monmap
[21:41] <ksingh> so now you have 3 monitors i believe
[21:41] <ksingh> ??
[21:41] <pmatulis2> GermanAnders: try going to the node and using an init script to start the daemon
[21:41] <pmatulis2> GermanAnders: what distro do we have here?
[21:42] <GermanAnders> actually i had a mon that appeared not to be working on ceph-node01, and now the same on ceph-node03
[21:42] <GermanAnders> ubuntu 12.10
[21:42] <pmatulis2> ok, so an upstart script then
[21:43] <GermanAnders> the "ceph-mon -i {mon-id} --public-addr {ip:port}" ?
[21:43] <pmatulis2> sudo start ceph-mon-all
[21:43] <kraken> this ain't your shell
[21:43] * garphy`aw is now known as garphy
[21:44] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[21:44] <GermanAnders> ok so now i had:
[21:44] <GermanAnders> root@ceph-node03:~# initctl list | grep ceph
[21:44] <GermanAnders> ceph-mds-all-starter stop/waiting
[21:44] <GermanAnders> ceph-mds-all stop/waiting
[21:44] <GermanAnders> ceph-osd-all stop/waiting
[21:44] <GermanAnders> ceph-osd-all-starter stop/waiting
[21:44] <GermanAnders> ceph-all stop/waiting
[21:44] <GermanAnders> ceph-mon-all start/running
[21:44] <GermanAnders> ceph-mon-all-starter stop/waiting
[21:44] <GermanAnders> ceph-mon (ceph/ceph-node03) start/running, process 16999
[21:44] <GermanAnders> ceph-create-keys start/running, process 17000
[21:44] <GermanAnders> ceph-osd stop/waiting
[21:44] <GermanAnders> ceph-mds stop/waiting
[21:45] <pmatulis2> no idea why you did that
[21:45] <pmatulis2> stop filling the channel with useless information
[21:47] * kbader (~Adium@38.122.20.226) has joined #ceph
[21:48] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[21:50] * runfromnowhere (~runfromno@0001c1e7.user.oftc.net) has joined #ceph
[21:52] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[21:52] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[21:58] * wwang001 (~wwang001@fbr.reston.va.neto-iss.comcast.net) has joined #ceph
[21:59] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[22:01] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[22:01] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) has joined #ceph
[22:03] * nregola_comcast (~nregola_c@fbr.reston.va.neto-iss.comcast.net) has joined #ceph
[22:08] * gregsfortytwo1 (~Adium@2607:f298:a:607:9006:2412:aed6:9f2e) Quit (Quit: Leaving.)
[22:14] * wwang001 (~wwang001@fbr.reston.va.neto-iss.comcast.net) Quit (Remote host closed the connection)
[22:15] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[22:20] * ksingh (~Adium@teeri.csc.fi) has left #ceph
[22:21] * GermanAnders (~oftc-webi@190.18.55.15) has left #ceph
[22:23] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[22:24] * allsystemsarego (~allsystem@86.126.9.60) Quit (Quit: Leaving)
[22:25] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[22:25] * markbby (~Adium@168.94.245.2) has joined #ceph
[22:26] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) has joined #ceph
[22:27] <vata> I'm trying to imagine what can be a good hardware configuration for a storage node with 9 OSDs
[22:28] <vata> I saw that it's possible to have multiple OSD journals on the same SSD
[22:29] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[22:29] * jcsp (~jcsp@38.122.20.226) has joined #ceph
[22:29] <vata> but I've not found a clear "recommendation" on "how many journals per SSD" (sometimes I saw 8, sometimes 3)
[22:29] * Rahvin (~Rahvin@93.92.102.4) Quit (Ping timeout: 480 seconds)
[22:30] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit ()
[22:30] * jcsp (~jcsp@0001bf3a.user.oftc.net) has joined #ceph
[22:30] <vata> is 3 journals per SSD an overkill solution?
[22:31] <pmatulis2> vata: as long as you give ~15 GB per journal
[22:32] <pmatulis2> vata: after that, it's up to you what level of failover you want. if the SSD dies so do all associated OSDs
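A back-of-the-envelope sketch of where figures like that come from: the journal should be able to absorb roughly twice what the OSD can write over one filestore sync interval (the 100 MB/s throughput here is an illustrative assumption; 5 s is the filestore max sync interval default):

```shell
# rule of thumb: osd journal size >= 2 * expected throughput * filestore max sync interval
throughput_mb_s=100   # assumed sustained write throughput per OSD, MB/s
sync_interval_s=5     # filestore max sync interval (default 5 s)
journal_mb=$((2 * throughput_mb_s * sync_interval_s))
echo "minimum journal size: ${journal_mb} MB"
```

Larger values like 10-15 GB just add headroom on top of this minimum for bursts and longer sync intervals.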
[22:33] <vata> pmatulis2: yes, in fact I'm thinking about running a few RAID1 SSDs on the same machine
[22:34] <vata> pmatulis2: does this configuration seem bad/overkill to you? http://pastebin.com/mkidFYuJ
[22:35] * Gamekiller77 (~Gamekille@128-107-239-233.cisco.com) Quit (Quit: This computer has gone to sleep)
[22:35] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) has joined #ceph
[22:35] <pmatulis2> vata: looks very good to me
[22:36] <pmatulis2> vata: just make sure you benchmark/test etc
[22:37] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[22:37] <robbat2> vata, the only comment I have re SSD, is be careful w/ wear-levelling. when I tested, I found the performance kept dropping if the SSD was fully allocated; I kept a partition that was only trimmed and never used, and that helped a lot
[22:38] * Kai (~oftc-webi@128-107-239-235.cisco.com) has joined #ceph
[22:39] * Kai is now known as Guest9223
[22:40] <vata> pmatulis2: thanks
[22:40] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit ()
[22:41] <vata> robbat2: ok thanks, I'll take a look at that
[22:44] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) has joined #ceph
[22:45] <vata> is there a limit on the number of OSD journals per RAIDed SSD that I shouldn't exceed?
[22:48] * hijacker (~hijacker@213.91.163.5) Quit (Read error: Connection timed out)
[22:48] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[22:49] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[22:50] * Rahvin (~Rahvin@93.92.102.4) has joined #ceph
[22:50] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[22:51] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) has joined #ceph
[22:56] * nregola_comcast (~nregola_c@fbr.reston.va.neto-iss.comcast.net) Quit (Quit: Leaving.)
[22:56] * Guest9223 (~oftc-webi@128-107-239-235.cisco.com) Quit (Quit: Page closed)
[22:57] * kaizh (~oftc-webi@128-107-239-234.cisco.com) has joined #ceph
[22:57] <kbader> vata: depends a lot on your ssds and controller, w/o raid most people find 3-4 osd per ssd journal device is a good ratio
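That 3-4 ratio falls out of bandwidth budgeting: every OSD write also hits its journal, so the SSD's sustained write rate is split among the OSDs sharing it. A quick sketch (both throughput figures are illustrative assumptions, not measurements of any particular hardware):

```shell
# budget the SSD's sustained write bandwidth across the OSD journals sharing it
ssd_write_mb_s=500    # assumed sustained sequential write rate of the SSD
osd_write_mb_s=120    # assumed sustained write rate of one spinning OSD disk
max_osds=$((ssd_write_mb_s / osd_write_mb_s))
echo "OSD journals this SSD can absorb: ${max_osds}"
```

Beyond that ratio the journal SSD becomes the bottleneck instead of the spinning disks.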
[23:00] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[23:01] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[23:04] * JC1 (~JC@71-94-44-243.static.trlk.ca.charter.com) has left #ceph
[23:04] * JC1 (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[23:06] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) Quit (Read error: Operation timed out)
[23:07] <vata> kbader: ok thanks, I'm planning to buy good quality hardware, but I'm still not sure I'll go for raided-SSD
[23:08] <vata> kbader: it will depend on the average failure rate of the chosen SSD, and how long it will take to rebuild associated SSDs
[23:08] <vata> *OSDs
[23:10] <kbader> sounds reasonable
[23:24] * DarkAce-Z is now known as DarkAceZ
[23:24] * dmsimard (~Adium@108.163.152.2) Quit (Read error: Operation timed out)
[23:30] * vata (~vata@2607:fad8:4:6:4825:d168:6ccf:f31e) Quit (Quit: Leaving.)
[23:34] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[23:34] * markbby (~Adium@168.94.245.2) has joined #ceph
[23:35] * jhurlbert (~jhurlbert@216.57.209.252) has joined #ceph
[23:39] * themgt (~themgt@pc-188-95-160-190.cm.vtr.net) Quit (Quit: themgt)
[23:41] * ScOut3R (~scout3r@5401D98F.dsl.pool.telekom.hu) has joined #ceph
[23:42] * AfC (~andrew@jim1020952.lnk.telstra.net) has joined #ceph
[23:45] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:47] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[23:47] * Rahvin (~Rahvin@93.92.102.4) Quit (Ping timeout: 480 seconds)
[23:48] * kaizh (~oftc-webi@128-107-239-234.cisco.com) Quit (Quit: Page closed)
[23:48] * kaizh (~oftc-webi@128-107-239-233.cisco.com) has joined #ceph
[23:49] * kbader1 (~Adium@2607:f298:a:607:1587:60ad:3181:4850) has joined #ceph
[23:49] * kbader (~Adium@38.122.20.226) Quit (Read error: Connection reset by peer)
[23:53] * kaizh (~oftc-webi@128-107-239-233.cisco.com) Quit ()
[23:56] * nwat (~textual@eduroam-247-164.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:56] * nwat (~textual@eduroam-247-164.ucsc.edu) has joined #ceph
[23:58] * scuttlemonkey (~scuttlemo@pat.hitachigst.com) Quit (Ping timeout: 480 seconds)
[23:59] * LeaChim (~LeaChim@host81-159-251-38.range81-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[23:59] * jjgalvez (~jjgalvez@pat.hitachigst.com) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.