#ceph IRC Log

IRC Log for 2014-06-13

Timestamps are in GMT/BST.

[0:04] <seapasulli> It doesn't; it just tries to time out or takes forever. I'm doing this through cinder+ceph and cinder ends up timing out
[0:04] <seapasulli> or rabbitmq really
[0:05] <seapasulli> but I don't see any errors on cinder's end or ceph's
[0:05] <seapasulli> i just see that the images still exist in ceph
[0:05] * dmsimard is now known as dmsimard_away
[0:06] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[0:08] * mtanski (~mtanski@65.107.210.227) has joined #ceph
[0:08] * mtanski (~mtanski@65.107.210.227) Quit ()
[0:09] * aldavud_ (~aldavud@213.55.184.220) Quit (Ping timeout: 480 seconds)
[0:09] * aldavud (~aldavud@213.55.184.220) Quit (Ping timeout: 480 seconds)
[0:09] * brytown (~brytown@2620:79:0:8204:10b2:aba6:3e68:fb2d) has joined #ceph
[0:10] <rweeks> hey scuttlemonkey you around?
[0:11] * sleinen1 (~Adium@2001:620:0:26:90b:3fe1:9680:6b06) Quit (Quit: Leaving.)
[0:24] * Cube (~Cube@66.87.130.121) Quit (Ping timeout: 480 seconds)
[0:24] <lurbs> seapasulli: It's entirely possible that Ceph is still deleting the image in the background. From memory it needs to check for the existence of each object, and so takes quite some time.
[0:24] <lurbs> The last thing to be deleted is the actual metadata about the existence of the volume.
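A minimal sketch of how one could watch that background delete from the Ceph side (pool and image names here are placeholders; the grep pattern is whatever block_name_prefix the first command prints):

    # find the prefix used by the image's data objects
    rbd -p volumes info volume-1234 | grep block_name_prefix
    # count the remaining backing objects; re-run to watch the number shrink
    rados -p volumes ls | grep 'rb.0.1234' | wc -l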
[0:25] * Cube (~Cube@66.87.130.121) has joined #ceph
[0:33] * rweeks (~goodeats@192.169.20.75.static.etheric.net) Quit (Quit: Leaving)
[0:34] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[0:35] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[0:35] * rpowell (~rpowell@128.135.219.215) Quit (Ping timeout: 480 seconds)
[0:36] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[0:36] * leseb (~leseb@185.21.174.206) has joined #ceph
[0:42] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[0:46] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[0:50] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[0:53] * sarob_ (~sarob@2001:4998:effd:7801::112f) has joined #ceph
[0:54] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[0:57] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[0:58] * rweeks (~goodeats@192.169.20.75.static.etheric.net) has joined #ceph
[0:58] * sarob (~sarob@2601:9:1d00:c7f:91b4:c64f:4ea5:d826) Quit (Ping timeout: 480 seconds)
[1:03] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[1:03] <scuttlemonkey> rweeks: am now, sup?
[1:05] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[1:09] * sarob_ (~sarob@2001:4998:effd:7801::112f) Quit (Remote host closed the connection)
[1:10] * sarob (~sarob@2001:4998:effd:7801::112f) has joined #ceph
[1:12] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[1:15] * oms101 (~oms101@p20030057EA623E00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:15] * rweeks (~goodeats@192.169.20.75.static.etheric.net) Quit (Quit: Leaving)
[1:17] * vbellur (~vijay@122.167.103.182) Quit (Ping timeout: 480 seconds)
[1:21] * huangjun (~kvirc@117.151.43.143) Quit (Ping timeout: 480 seconds)
[1:23] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) Quit (Quit: Leaving)
[1:23] * oms101 (~oms101@p20030057EA592200EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:25] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) has joined #ceph
[1:25] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[1:28] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:28] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:31] * jdmason (~jon@192.55.55.39) Quit (Quit: Leaving)
[1:33] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[1:35] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[1:38] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:41] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[1:41] * sarob (~sarob@2001:4998:effd:7801::112f) Quit (Read error: Connection reset by peer)
[1:41] * sarob (~sarob@2601:9:1d00:c7f:91b4:c64f:4ea5:d826) has joined #ceph
[1:41] * ircolle (~Adium@2601:1:8380:2d9:4c94:c25f:d77d:8b8c) Quit (Quit: Leaving.)
[1:43] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[1:44] * alram (~alram@38.122.20.226) Quit (Read error: Operation timed out)
[1:46] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:52] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[1:53] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[1:56] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[2:00] * dford (~fford@93.93.251.146) has joined #ceph
[2:01] * jsfrerot_ (~jsfrerot@192.222.132.57) has joined #ceph
[2:02] <jsfrerot_> good evening all
[2:03] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[2:03] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[2:04] <jsfrerot_> I'm trying to add an osd to my running cluster, but got some error: http://pastebin.com/U0Mpu7Fj
[2:04] <jsfrerot_> Any idea what is happening ?
[2:04] * Sysadmin88 (~IceChat77@94.4.22.173) has joined #ceph
[2:05] * eford (~fford@p509901f2.dip0.t-ipconnect.de) Quit (Read error: Operation timed out)
[2:06] * wschulze (~wschulze@80.149.32.4) Quit (Quit: Leaving.)
[2:07] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[2:07] * nwat (~textual@eduroam-237-191.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[2:08] * rmoe_ (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:09] <jsfrerot_> ha nevermind, got to use "add" instead of "set"...
[2:15] * jsfrerot_ (~jsfrerot@192.222.132.57) Quit (Quit: leaving)
[2:23] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:24] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:24] * huangjun (~kvirc@111.173.98.164) has joined #ceph
[2:25] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[2:26] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[2:29] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[2:29] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:32] * Cube (~Cube@66.87.130.121) Quit (Ping timeout: 480 seconds)
[2:34] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[2:36] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[2:37] * brytown (~brytown@2620:79:0:8204:10b2:aba6:3e68:fb2d) Quit (Quit: Leaving.)
[2:37] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) Quit (Quit: Leaving)
[2:38] * haomaiwang (~haomaiwan@124.161.78.189) has joined #ceph
[2:41] * mrjack (mrjack@office.smart-weblications.net) has joined #ceph
[2:41] <mrjack> hi
[2:41] <mrjack> is there a way to find out which rbd client does how many iops, read and write?
[2:42] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:43] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[2:46] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[2:48] * cephalopod (~chris@194.28.69.111.static.snap.net.nz) has joined #ceph
[2:54] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Ping timeout: 480 seconds)
[2:55] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[2:56] <sherry> yanzheng: ping
[2:56] * nwat (~textual@50.141.85.5) has joined #ceph
[2:56] * nwat_ (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:57] * nwat_ (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[2:59] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:04] * nwat (~textual@50.141.85.5) Quit (Ping timeout: 480 seconds)
[3:06] * Cube (~Cube@66.87.65.246) has joined #ceph
[3:06] * KaZeR (~kazer@64.201.252.132) Quit (Remote host closed the connection)
[3:07] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:11] * yanzheng (~zhyan@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[3:11] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[3:12] * adamcrume_ (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) has joined #ceph
[3:15] * adamcrume (~quassel@2601:9:6680:47:d4ae:c10b:3f5c:8831) Quit (Ping timeout: 480 seconds)
[3:17] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:21] * mdxi_ (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[3:22] * zhaochao (~zhaochao@124.205.245.26) has joined #ceph
[3:22] * zerick (~eocrospom@190.114.249.148) Quit (Read error: Connection reset by peer)
[3:23] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[3:24] * sarob (~sarob@2601:9:1d00:c7f:91b4:c64f:4ea5:d826) Quit (Remote host closed the connection)
[3:26] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[3:26] * ChanServ sets mode +v andreask
[3:31] * Cube (~Cube@66.87.65.246) Quit (Ping timeout: 480 seconds)
[3:32] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:32] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[3:40] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:40] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[3:43] * Cube (~Cube@66.87.65.246) has joined #ceph
[3:43] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[3:44] * haomaiwa_ (~haomaiwan@124.161.78.169) has joined #ceph
[3:47] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[3:49] * haomaiwang (~haomaiwan@124.161.78.189) Quit (Ping timeout: 480 seconds)
[3:52] * haomaiwa_ (~haomaiwan@124.161.78.169) Quit (Ping timeout: 480 seconds)
[3:53] * haomaiwang (~haomaiwan@2002:ca73:49df:c:c1c9:d41:9ede:4ad1) has joined #ceph
[3:57] <mrjack> is there any way to get stats for rbd usage?
[4:03] * haomaiwang (~haomaiwan@2002:ca73:49df:c:c1c9:d41:9ede:4ad1) Quit (Ping timeout: 480 seconds)
[4:04] <sherry> hi, I would like to know: if my journal is broken but my ceph data disk is still intact, do I need to create the osd again in order to get that osd running?
[4:08] * keeperandy (~textual@68.55.0.244) has joined #ceph
[4:14] * erice (~erice@50.240.86.181) Quit (Read error: Connection reset by peer)
[4:14] * erice (~erice@50.240.86.181) has joined #ceph
[4:19] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[4:21] * brytown (~brytown@142-254-47-204.dsl.dynamic.sonic.net) has joined #ceph
[4:22] * brytown (~brytown@142-254-47-204.dsl.dynamic.sonic.net) Quit ()
[4:23] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[4:25] * sarob (~sarob@2601:9:1d00:c7f:8560:75b1:5efc:4ffa) has joined #ceph
[4:31] * lesserevil (~lesser.ev@searspoint.nvidia.com) has joined #ceph
[4:32] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:33] * sarob (~sarob@2601:9:1d00:c7f:8560:75b1:5efc:4ffa) Quit (Ping timeout: 480 seconds)
[4:33] <lesserevil> looking for some help recovering from a catastrophic error
[4:34] <lesserevil> anyone want to point and laugh, and then possibly suggest ideas?
[4:41] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) has joined #ceph
[4:41] <iggy> lesserevil: start talking
[4:45] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[4:46] <lesserevil> I have a ceph cluster which has lost all of its ceph mon data. Is there any way to re-create it?
[4:48] * adamcrume_ (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) Quit (Quit: No Ping reply in 180 seconds.)
[4:48] <lesserevil> I had two mons (yes, I know I should have had three). One had a hard disk failure, and the other crapped out while we were replacing the first. So I have a bunch of OSDs with no mon to glue them together.
[4:48] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:49] * adamcrume (~quassel@2601:9:6680:47:d4ae:c10b:3f5c:8831) has joined #ceph
[4:50] <lesserevil> unless I'm misunderstanding something, I think I'm doomed. The mon data is stored on local disk on each mon server, right?
[4:50] <lesserevil> and not replicated anywhere in the cluster, like the mds metadata is.
[4:55] <lesserevil> ?
[4:55] <dmick> correct; the mon data is outside the cluster (because mons make up the cluster)
[4:56] <dmick> but it's possible you could rebuild it by hand
[4:56] <lesserevil> how so?
[4:56] <dmick> oh, I dunno
[4:56] <lesserevil> tease :)
[4:56] <dmick> there's docs about it; wondering if there's an actual procedure
[4:56] <lesserevil> there are? my google-fu must be weak.
[4:56] <lesserevil> I am a little unnerved at the moment.
[4:57] <dmick> what I'm remembering is recovering from a broken monmap
[4:57] <dmick> I guess I don't know about the rest of the mon info
[4:58] <dmick> yyyeahhhh....that might be hard
[4:58] <lesserevil> what I've found about recovering broken monmaps assumes you have other mons to copy/steal data from. I don't have that.
[4:58] <dmick> yeah.
[4:59] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[4:59] <dmick> probably safer to assume I'm wrong; if you can do it it's definitely high-wire stuff
[5:00] <lesserevil> well, I'm asking here at the same time I'm trying to recover the lost disk. Either fix is going to require some heroism and luck.
[5:04] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:08] <dmick> yeah
[5:12] * davyjang (~oftc-webi@171.216.179.127) has joined #ceph
[5:13] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[5:13] <davyjang> why did this error occur: admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[5:14] <dmick> hard to say without knowing the command
[5:14] <davyjang> ceph-deploy mon create node1
[5:15] * keeperandy (~textual@68.55.0.244) Quit (Quit: Textual IRC Client: www.textualapp.com)
[5:15] <davyjang> [node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status [node1][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[5:19] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) Quit (Quit: Leaving.)
[5:22] <dmick> does /var/run/ceph/ceph-mon.node1.asok exist?
[5:24] <davyjang> oh, I found something, the name seems to be not right
[5:26] <davyjang> it isn't ceph-mon.node1.asok
[5:26] <davyjang> it is ceph-mon.ubuntu.asok, I have changed the host name
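A quick sketch of the check being done here (the hostname 'ubuntu' and the paths are just the ones from this exchange):

    ls /var/run/ceph/                  # see which admin sockets actually exist
    # query the socket that is really there (the old hostname) instead of the expected one
    sudo ceph --admin-daemon /var/run/ceph/ceph-mon.ubuntu.asok mon_status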
[5:27] * Vacum_ (~vovo@88.130.203.95) has joined #ceph
[5:34] * Vacum (~vovo@i59F7A4DC.versanet.de) Quit (Ping timeout: 480 seconds)
[5:41] <iggy> lesserevil: you might have to wait for the devs tomorrow
[5:42] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[5:43] <lesserevil> iggy: you're likely right.
[5:50] * lesserevil (~lesser.ev@searspoint.nvidia.com) Quit (Quit: Bye)
[5:50] * lesserevil (~lesser.ev@searspoint.nvidia.com) has joined #ceph
[6:03] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[6:03] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) has joined #ceph
[6:03] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) Quit ()
[6:05] * nwat (~textual@50.141.87.8) has joined #ceph
[6:05] * nwat (~textual@50.141.87.8) Quit ()
[6:08] * eford (~fford@93.93.251.146) has joined #ceph
[6:10] * Shmouel1 (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[6:14] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[6:15] * dford (~fford@93.93.251.146) Quit (Ping timeout: 480 seconds)
[6:30] * haomaiwang (~haomaiwan@124.161.76.154) has joined #ceph
[6:32] * Cube (~Cube@66.87.65.246) Quit (Quit: Leaving.)
[6:33] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[6:38] * lesserevil (~lesser.ev@searspoint.nvidia.com) Quit (Ping timeout: 480 seconds)
[6:39] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[6:41] * haomaiwang (~haomaiwan@124.161.76.154) Quit (Ping timeout: 480 seconds)
[6:42] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[6:42] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[6:44] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[6:50] <sherry> yanzheng: ping
[6:53] <sherry> last time u told me that I need to check "ceph mds tell \* dumpcache", and it is okay! I also checked /sys/kernel/debug/ceph/*/mdsc and it doesn't show me any request in progress!!! but the objects are still in the pool!
[6:54] <sherry> why does removing a file in a directory that is mapped to a specific pool not remove the objects from the pool and the disk itself!?
[6:55] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) has joined #ceph
[7:10] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[7:21] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[7:28] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[7:34] * ikrstic (~ikrstic@178-221-100-206.dynamic.isp.telekom.rs) has joined #ceph
[7:35] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[7:35] * davyjang (~oftc-webi@171.216.179.127) Quit (Remote host closed the connection)
[7:49] * oblu (~o@62.109.134.112) has joined #ceph
[7:49] * adamcrume (~quassel@2601:9:6680:47:d4ae:c10b:3f5c:8831) Quit (Remote host closed the connection)
[7:52] * drankis_ (~drankis__@89.111.13.198) has joined #ceph
[7:53] * vbellur (~vijay@209.132.188.8) has joined #ceph
[7:54] * sarob (~sarob@2601:9:1d00:c7f:55ad:cd95:3bd:4282) has joined #ceph
[7:57] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[7:58] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[8:02] * sarob (~sarob@2601:9:1d00:c7f:55ad:cd95:3bd:4282) Quit (Ping timeout: 480 seconds)
[8:02] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[8:07] * oblu (~o@62.109.134.112) has joined #ceph
[8:09] * wschulze (~wschulze@80.149.32.4) has joined #ceph
[8:15] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[8:17] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[8:23] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[8:24] <sherry> please help me with this: a single client has ceph mounted; when it removes a file, the space for that file doesn't get freed on the OSDs.
[8:26] <singler_> you use cephfs or rbd?
[8:28] <sherry> cephfs
[8:31] <sherry> singler_: u have any idea?
[8:32] * sleinen (~Adium@2001:620:0:26:dd11:8346:2d57:72ac) has joined #ceph
[8:32] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[8:35] * Sysadmin88 (~IceChat77@94.4.22.173) Quit (Quit: Don't push the red button!)
[8:36] * Clabbe (~oftc-webi@alv-global.tietoenator.com) has joined #ceph
[8:36] <Clabbe> health HEALTH_ERR 3390 scrub errors
[8:37] * thb (~me@0001bd58.user.oftc.net) has joined #ceph
[8:37] <Clabbe> How do I get out of this issue :( ?
[8:38] * thb (~me@0001bd58.user.oftc.net) Quit ()
[8:38] * thb (~me@0001bd58.user.oftc.net) has joined #ceph
[8:43] <Pauline> This http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-May/001628.html suggests you need to run pg repairs.
[8:43] * vbellur (~vijay@nat-pool-blr-t.redhat.com) has joined #ceph
[8:52] <singler_> Clabbe: check dmesg, maybe you have disk with bad blocks
[8:52] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has left #ceph
[8:52] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[8:53] <singler_> sherry: as I observed, cephfs starts freeing space after a short delay. you delete a file, and later running "df -h" you can see that used space is dropping gradually. If that is not the case for you, then I cannot help you
[8:54] <sherry> singler_: thanks for ur reply but unfortunately it doesn't!
[8:54] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit ()
[8:55] <singler_> Pauline, Clabbe, if you run repair, it will copy data from primary (and primary's data may be corrupted). So before running repair, you should at least check disk state if you do not have other methods of data check
[8:57] <Pauline> singler_: assuming no disk errors, how to determine which is right, the prim or sec?
[8:57] <Pauline> also assuming only 2 copies of the data :)
[8:59] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[8:59] * bandrus (~oddo@adsl-71-137-198-68.dsl.scrm01.pacbell.net) Quit (Quit: Leaving.)
[9:00] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[9:01] <Clabbe> singler_: no bad blocks
[9:01] <Clabbe> what I can see
[9:03] <Clabbe> singler_: repair doesn't work? :S
[9:04] <singler_> Pauline: a coin may help you with decision :)
[9:05] <singler_> Clabbe: repair starts deep-scrub, so after issuing repair, it may take some time for it to get repaired
[9:06] <Pauline> Clabbe: ceph -w should show some actions going on...
[9:06] <singler_> also you could try checking on which OSDs the inconsistent pgs are (ceph pg map <pg_num>). Maybe all pgs have some common osds
[9:07] <Pauline> which would mean 3300 pgs have landed on a single osd? that seems a bit much
[9:09] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[9:09] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) Quit (Quit: Leaving.)
[9:10] <singler_> Pauline: well, it depends on pg count. Also maybe there are controller problem, so maybe all/few OSD on same host are problematic, etc
[9:11] <Clabbe> singler_: ceph status doesn't state anything about the scrubbing :|
[9:11] <sherry> can anyone help with the cephfs remove? how can I get rid of the objects in the data pool, when CephFS doesn't show me any files in the directory mapped to that pool?
[9:11] <Clabbe> 4.d 1420 0 0 0 5955911680 1672 1672 active+clean+inconsistent 2014-06-13 08:55:29.446107 1757'1672 1869:2960 [3,1] [3,1] 1351'
[9:12] <Clabbe> 4.21 1471 0 1471 0 6169821184 1723 1723 active+degraded+inconsistent 2014-06-13 08:31:47.451952 1764'1723 1869:23947 [0] [0] 1351'
[9:12] <Clabbe> 3.22 645 0 645 0 2212609544 3533 3533 active+degraded+inconsistent 2014-06-13 08:31:47.455705 1869'359656 1869:599528 [0] [0] 1674'
[9:12] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[9:12] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[9:13] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[9:13] * ChanServ sets mode +v andreask
[9:16] <Clabbe> singler_: any other ideas = ;I
[9:16] * jcsp (~john@176.12.107.140) has joined #ceph
[9:17] <singler_> Clabbe: so you run "ceph pg repair <pg_number>" and after that inconsistent pg count didn't decrease by one?
[9:17] * analbeard (~shw@support.memset.com) has joined #ceph
[9:18] <Clabbe> singler_: nope
[9:18] <Clabbe> singler_: hmm trying it again now, it seems it's doing something, let's see what happens
[9:19] <Clabbe> active+clean+scrubbing+deep+inconsistent+repair
[9:20] <Clabbe> singler_: repairing 4.d worked though not 4.21
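For reference, a sketch of the inspect/repair loop used in this exchange (PG IDs are the ones pasted above; as noted earlier, repair copies from the primary, so check the underlying disks first):

    ceph health detail | grep inconsistent   # list the inconsistent PGs
    ceph pg map 4.21                         # see which OSDs hold a given PG
    ceph pg repair 4.21                      # queues a deep-scrub plus repair
    ceph -w                                  # watch progress; repair can take a while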
[9:20] * qhartman (~qhartman@75-151-85-53-Colorado.hfc.comcastbusiness.net) Quit (Read error: Operation timed out)
[9:21] <singler_> Clabbe: 4.21 is degraded (only 1 copy) so maybe because of that it doesn't work?
[9:21] <Clabbe> How do I get it to replicate it to another osd?
[9:22] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[9:22] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[9:23] * qhartman (~qhartman@75-151-85-53-Colorado.hfc.comcastbusiness.net) has joined #ceph
[9:25] <singler_> Clabbe: is your cluster currently backfilling?
[9:25] <Clabbe> nope
[9:25] <kraken> http://i.imgur.com/2xwe756.gif
[9:26] <Clabbe> singler_: no backfilling
[9:26] <singler_> Clabbe: pastebin "ceph -s"
[9:26] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[9:27] <Clabbe> singler_: http://pastebin.com/8CU5ubbB
[9:29] * drankis_ (~drankis__@89.111.13.198) Quit (Quit: Leaving)
[9:29] * drankis_ (~drankis__@89.111.13.198) has joined #ceph
[9:31] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[9:34] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[9:38] <Clabbe> singler_: should i update the crush map for it?
[9:40] <Clabbe> hmm seems I cant set any pg specifics there
[9:43] * qhartman (~qhartman@75-151-85-53-Colorado.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[9:43] * thomnico (~thomnico@2a01:e35:8b41:120:1411:521a:4bff:6fd) has joined #ceph
[9:45] * qhartman (~qhartman@75-151-85-53-Colorado.hfc.comcastbusiness.net) has joined #ceph
[9:48] <singler_> Clabbe: why 1 OSD is not in?
[9:48] <Clabbe> singler_: its being outed
[9:48] <Clabbe> singler_: is it possible to copy a pg? :D
[9:48] <singler_> what is replica count of your pools?
[9:49] <Clabbe> 2
[9:49] <singler_> do you use erasure coding?
[9:51] <Clabbe> singler_: dont think so
[9:51] <singler_> is your crush map customized?
[9:52] <Clabbe> singler_: nope
[9:53] <singler_> pastebin "ceph osd tree"
[9:54] <Clabbe> singler_: http://pastebin.com/CXiigFAP
[9:56] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[9:57] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[9:57] <singler_> by default ceph puts replicas onto other hosts (host is the failure domain), in your case ceph-osd1 would duplicate onto ceph-osd3, but as I can see there is one small disk
[9:58] <Clabbe> singler_: 0.5T
[9:58] <Clabbe> I've reset the weight as it was going full :|
[9:59] <Clabbe> singler_: reweight by utilization?
[9:59] <singler_> but it does not have capacity to hold replicas of remaining osds
[10:00] <Clabbe> hmm ok
[10:00] <singler_> you should change failure domain to OSD, so that same host could hold replicas on other osds
[10:00] <singler_> or add disks/hosts
[10:01] <Clabbe> singler_: how do I change failure domain
[10:02] <Clabbe> just so I get the cluster to OK, then I can start adding osd+disks
[10:03] <singler_> if you'll add hosts+disks, cluster will make itself ok
[10:03] <Clabbe> singler_: yes. but atm I dont have them
[10:03] <singler_> currently your problem is that you have not enough hosts and osds
[10:03] <Clabbe> they are orded
[10:03] <singler_> oh
[10:04] <Clabbe> ordered
[10:04] <singler_> http://ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map
[10:04] <singler_> pastebin decompiled crush map
[10:04] * Andreas-IPO (~andreas@2a01:2b0:2000:11::cafe) has joined #ceph
[10:04] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[10:05] <singler_> there is some way to change failure domain via ceph commands, but I do not know them, I change the crush map
[10:05] <Clabbe> singler_: http://pastebin.com/ViiYZFN4
[10:07] <singler_> lines 58, 67, 76. Change host to osd
[10:08] <singler_> then compile it and inject
[10:08] <singler_> *set
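The decompile/edit/recompile cycle being described is roughly the following (file names are arbitrary; the host-to-osd change presumably goes in each rule's chooseleaf step, which is what the pastebin line numbers refer to):

    ceph osd getcrushmap -o crush.bin       # dump the current map
    crushtool -d crush.bin -o crush.txt     # decompile to editable text
    # in crush.txt, for each rule change
    #   step chooseleaf firstn 0 type host
    # to
    #   step chooseleaf firstn 0 type osd
    crushtool -c crush.txt -o crush.new     # recompile
    ceph osd setcrushmap -i crush.new       # inject the new map (triggers data movement)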
[10:10] * lesserevil (~lesser.ev@99-72-217-92.lightspeed.stlsmo.sbcglobal.net) has joined #ceph
[10:11] <Clabbe> singler_: processing :D
[10:12] <singler_> later, when you'll have more hosts/osds, change it back to host
[10:12] <Clabbe> singler_: will do,
[10:12] <Clabbe> 71% degraded :O
[10:12] <Clabbe> much processing now I guess
[10:13] <singler_> yes, ceph will be moving large amount of pgs
[10:13] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Remote host closed the connection)
[10:17] * dats (~dats@c-98-196-180-85.hsd1.tx.comcast.net) has joined #ceph
[10:19] <dats> Hi, having an issue with trim/discard.
[10:19] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:19] <dats> qemu-system-ppc64: -set block.scsi0-0-0.discard_granularity=65536: There is no option group 'block'
[10:20] <dats> has anyone run in to this?
[10:20] <lesserevil> Is there a way to recreate monitor's data from working osd? I have the remnants of a cluster with working osd but no valid mon. Disk corruption hit both mon at the same time :(
[10:34] * haomaiwang (~haomaiwan@124.161.76.154) has joined #ceph
[10:36] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[10:38] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) has joined #ceph
[10:41] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[10:42] * haomaiwang (~haomaiwan@124.161.76.154) Quit (Ping timeout: 480 seconds)
[10:42] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[10:48] * allsystemsarego (~allsystem@86.121.2.97) has joined #ceph
[10:50] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[10:52] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[10:54] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[10:55] * boichev (~boichev@213.169.56.130) Quit (Ping timeout: 480 seconds)
[10:55] * dneary (~dneary@87-231-145-225.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[10:55] * ikrstic (~ikrstic@178-221-100-206.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[10:58] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[10:59] * yanzheng (~zhyan@jfdmzpr01-ext.jf.intel.com) Quit (Remote host closed the connection)
[11:01] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[11:01] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[11:01] * ikrstic (~ikrstic@178-221-100-206.dynamic.isp.telekom.rs) has joined #ceph
[11:01] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[11:04] * dats (~dats@c-98-196-180-85.hsd1.tx.comcast.net) has left #ceph
[11:04] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Read error: Operation timed out)
[11:05] * dneary (~dneary@87-231-145-225.rev.numericable.fr) has joined #ceph
[11:05] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) has joined #ceph
[11:15] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[11:15] <Clabbe> singler_: thanks for all of your help :)
[11:16] <singler_> no problem :)
[11:19] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[11:19] <mrjack> hi
[11:20] <mrjack> anyone got any idea how to find the client that causes so many iops that my cluster goes down to its knees? is there a way to find out which rbd image makes how many iops, bytes read/write?
[11:21] * andreask (~andreask@zid-vpnn099.uibk.ac.at) has joined #ceph
[11:21] * ChanServ sets mode +v andreask
[11:23] <singler_> mrjack: maybe there is better way, but you could check network traffic (bytes and packets per sec)
[11:24] * andreask (~andreask@zid-vpnn099.uibk.ac.at) has left #ceph
[11:25] <liiwi> iotop
[11:25] * _are__ is now known as _are_
[11:28] <mrjack> iotop does not show qemu rbd instances usages
[11:28] <liiwi> yup, it only shows processes
[11:29] <mrjack> i can't find it from the network traffic, because the "bad vm" saturates the cluster so badly that osds have many slow requests and the cluster is in recovery all the time with osds flapping...
[11:29] * yguang11_ (~yguang11@2406:2000:ef96:e:5501:47e1:2e36:d3f2) Quit (Remote host closed the connection)
[11:29] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[11:31] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[11:31] <rotbeard> mrjack, its more an ongoing thing, but in my case I monitor all clients that are able to talk to ceph. on linux clients I am able to monitor the io latency and iops
[11:32] * huangjun (~kvirc@111.173.98.164) Quit (Ping timeout: 480 seconds)
[11:32] <mrjack> rotbeard: imagine an untrusted client which does so many iops your platters are puking...
[11:32] * ikrstic (~ikrstic@178-221-100-206.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[11:32] <mrjack> rotbeard: you don't know which client, so how would you find it?
[11:33] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[11:34] <rotbeard> mrjack, well. if you monitor all clients in a centralized monitoring system like icinga, you are able to filter out the clients that are doing a lot of iops
[11:35] * ikrstic (~ikrstic@178-221-100-206.dynamic.isp.telekom.rs) has joined #ceph
[11:48] <mrjack> rotbeard: how would that work?
[11:48] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[11:50] <rotbeard> mrjack, I check the iops and io latency on all my _client_ machines every minute. so if our ceph cluster is under high load, I can check my monitoring web ui for clients that are putting pressure on iops or so
[11:51] <mrjack> well
[11:51] <mrjack> how do you check iops?
[11:52] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[11:54] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[11:57] <absynth> especially on a ceph-rbd setup where "client" is not an easily defined phrase
[11:57] <absynth> mrjack: we have lots of custom scripts for that. it is possible, but tricky
[11:58] <absynth> we hook into those scripts to throttle nasty vms
[11:58] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[11:59] <mrjack> absynth: could you give me a hint?
[11:59] <absynth> lemme ask oliver
[12:01] <absynth> mrjack: are we talking about qemu-kvm VMs or other kinds of client?
[12:01] <absynth> there's no ceph-inherent possibility that we know of
[12:01] <cookednoodles> why can't you just throttle the vms ?
[12:01] <mrjack> absynth: yep, qemu kvm
[12:02] <mrjack> i could
[12:02] <cookednoodles> libvirt etc.. all have support for this
[12:02] <absynth> mrjack: well, take a look at the blockstats in the qemu monitoring socket
[12:02] <mrjack> if i knew which one
[12:02] <mrjack> ok
[12:02] <cookednoodles> you can give each vm an iops or volume limit
[12:02] <absynth> for i in `ls running vms`; do ...
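A rough, runnable version of that loop for libvirt-managed guests (counters are cumulative, so sample twice and diff to get iops; device names come from domblklist):

    for vm in $(virsh list --name); do
        echo "== $vm =="
        for dev in $(virsh domblklist "$vm" | awk 'NR>2 && NF {print $1}'); do
            virsh domblkstat "$vm" "$dev"    # rd_req/rd_bytes/wr_req/wr_bytes
        done
    done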
[12:02] <mrjack> cookednoodles: i had set iotune for read/write bandwidth, but forgot to set read iops and write iops limits
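For the limiting side, the libvirt <iotune> element inside a <disk> definition can cap iops as well as bandwidth; a sketch with made-up values (virsh blkdeviotune can change the same limits on a running guest):

    <iotune>
      <read_bytes_sec>52428800</read_bytes_sec>
      <write_bytes_sec>52428800</write_bytes_sec>
      <read_iops_sec>300</read_iops_sec>
      <write_iops_sec>300</write_iops_sec>
    </iotune>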
[12:03] * wschulze (~wschulze@80.149.32.4) Quit (Read error: Operation timed out)
[12:04] <cookednoodles> are you using qemu directly or via a library ?
[12:05] <mrjack> virsh
[12:05] <cookednoodles> ok so libvirt
[12:05] * haomaiwa_ (~haomaiwan@124.161.76.154) has joined #ceph
[12:06] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:06] <cookednoodles> the 'easiest' way would be to hook in a monitor that can talk directly to libvirt
[12:06] <cookednoodles> it'll graph all the block device's io
[12:08] <rotbeard> mrjack, I use check_mk. there is a default check which monitors iops (I think its just parsing iostat or something like that)
[12:08] <rotbeard> I am pretty sure that there are some checks for other systems around
[12:12] <mrjack> ok thanks for the input, i think i'll be able to find the vm now, thanks
[12:12] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[12:20] * DV (~veillard@libvirt.org) has joined #ceph
[12:21] * haomaiwang (~haomaiwan@124.161.76.154) has joined #ceph
[12:21] * haomaiwa_ (~haomaiwan@124.161.76.154) Quit (Read error: Connection reset by peer)
[12:22] * haomaiwang (~haomaiwan@124.161.76.154) Quit (Remote host closed the connection)
[12:22] * haomaiwang (~haomaiwan@124.248.205.17) has joined #ceph
[12:24] * haomaiwa_ (~haomaiwan@124.161.76.154) has joined #ceph
[12:25] * haomaiwa_ (~haomaiwan@124.161.76.154) Quit (Remote host closed the connection)
[12:26] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[12:26] * haomaiwa_ (~haomaiwan@124.248.205.17) has joined #ceph
[12:26] * haomaiwang (~haomaiwan@124.248.205.17) Quit (Read error: Connection reset by peer)
[12:27] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[12:31] * DV (~veillard@libvirt.org) Quit (Ping timeout: 480 seconds)
[12:32] * haomaiwang (~haomaiwan@124.161.76.154) has joined #ceph
[12:32] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[12:34] * leseb (~leseb@185.21.174.206) has joined #ceph
[12:39] * haomaiwa_ (~haomaiwan@124.248.205.17) Quit (Ping timeout: 480 seconds)
[12:40] * haomaiwa_ (~haomaiwan@203.69.59.208) has joined #ceph
[12:40] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[12:41] * haomaiwa_ (~haomaiwan@203.69.59.208) Quit (Remote host closed the connection)
[12:41] * zhaochao (~zhaochao@124.205.245.26) has left #ceph
[12:41] * kr0m (~kr0m@62.82.228.34.static.user.ono.com) has joined #ceph
[12:41] <kr0m> Hello everybody
[12:42] * haomaiwa_ (~haomaiwan@124.161.76.154) has joined #ceph
[12:42] <kr0m> i have a question about ceph osds
[12:42] * haomaiwang (~haomaiwan@124.161.76.154) Quit (Remote host closed the connection)
[12:42] <kr0m> i have accessed the osd storage through a block device
[12:43] <kr0m> the client maps the device and the data gets uploaded to both OSDs of the cluster
[12:43] <kr0m> i can see how the free storage is reduced
[12:43] <kr0m> but when i delete the file in the client it is not freed in the OSDs
[12:43] <kr0m> has anybody experienced the same situation?
[12:44] <singler_> to free space from rbd you need client, which supports TRIM (discard) operations
[12:44] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[12:46] * haomaiwang (~haomaiwan@116.251.217.246) has joined #ceph
[12:47] <kr0m> cat /sys/block/rbd0/queue/discard_max_bytes
[12:47] <kr0m> 0
[12:47] * haomaiwang (~haomaiwan@116.251.217.246) Quit (Remote host closed the connection)
[12:47] <kr0m> does it mean that my client doesn't support it?
[12:47] <kr0m> i am sorry if i am asking stupid questions, but i am new to the ceph world
[12:47] * haomaiwa_ (~haomaiwan@124.161.76.154) Quit (Remote host closed the connection)
[12:47] * haomaiwang (~haomaiwan@124.161.76.154) has joined #ceph
[12:48] <singler_> yes, that means that client does not support it
[12:48] * Meths_ (~meths@2.25.193.115) has joined #ceph
[12:48] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[12:48] <singler_> are you using qemu rbd?
[12:48] <kr0m> i am testing with kvm vms
[12:49] <kr0m> the mon, 2OSDs and 1 client
[12:49] <singler_> is rbd attached directly or from kvm host
[12:49] <singler_> ?
[12:49] <kr0m> the proxmox host where all the vms are is not using any kind of remote file server
[12:50] <kr0m> all the vm images are local
[12:50] * Meths (~meths@2.25.191.168) Quit (Ping timeout: 480 seconds)
[12:51] <singler_> what is qemu-kvm version?
[12:51] <kr0m> QEMU emulator version 1.4.1, Copyright (c) 2003-2008 Fabrice Bellard
[12:52] <kr0m> Proxmox 2.3-13
[12:52] <kr0m> is that enough info?
[12:54] <singler_> I never used proxmox so I am not sure, I think that qemu version 1.5 is needed for unmap support
[12:54] <singler_> check this link: http://dustymabe.com/2013/06/11/recover-space-from-vm-disk-images-by-using-discardfstrim/
[12:55] <singler_> you need this setting "<driver name='qemu' type='raw' discard='unmap'/>"
[12:55] <singler_> discard='unmap'
[12:55] <kr0m> ok, i will check it
[12:58] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[12:58] * sleinen (~Adium@2001:620:0:26:dd11:8346:2d57:72ac) Quit (Read error: Connection reset by peer)
[12:58] * sleinen (~Adium@2001:620:0:26:dd11:8346:2d57:72ac) has joined #ceph
[12:58] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[13:06] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:08] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[13:09] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[13:10] <kr0m> singler_: i think that option only affects the proxmox host's own storage, the vms are not aware of the virtualization layer
[13:10] * vince (~quassel@160.85.231.230) has joined #ceph
[13:10] <kr0m> so when i delete a file on the rbd device it should be removed on the OSDs; in fact on the client i can see the space gained by deleting the file
[13:11] <singler_> kr0m: oh, you also need scsi or scsi-virtio controller
[13:11] <kr0m> if i delete the pool the space is freed in the OSDs
[13:12] <singler_> well you see. When you write data, rbd allocates blocks (because the objects do not exist yet). When the client deletes files, blocks are freed at the filesystem level, but rbd is not informed that the blocks are free and can be deallocated
[13:13] <singler_> rbd does not know if blocks are used or not, so it cannot free them
[13:14] <singler_> when trim is enabled, OS tells disk that blocks are not needed, then qemu (if unmap is set) tells ceph to free blocks
[13:14] <singler_> if unmap is not set, then qemu ignores trim requests
[13:15] <kr0m> i don't understand why the virtualization is affecting me if the proxmox host's own storage is not on ceph
[13:15] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[13:15] <singler_> virtio block devices (vda, vdb, etc disks) do not support trim. You need to pass a scsi disk to the guest (scsi or virtio-scsi; sda, sdb etc disks)
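Putting those two requirements together (qemu >= 1.5 and a scsi bus, with a recent enough libvirt), the disk stanza being suggested looks roughly like this; pool/image names are placeholders and the auth/monitor details are omitted:

    <controller type='scsi' model='virtio-scsi'/>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <source protocol='rbd' name='rbd/vm-disk-1'/>
      <target dev='sda' bus='scsi'/>
    </disk>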
[13:16] <kr0m> if i mount my testing environment on physical servers, i shouldn't have any problem and the space will be freed?
[13:18] <singler_> krbd (kernel rbd) does not support trim
[13:19] <kfei> singler_, do you know in QEMU 1.7.1, how to set rbd cache options? I only have `<driver name='qemu' type='rbd' cache='writeback'/>` now, not sure where to put things like `rbd_cache_size=134217728`
[13:20] <kfei> singler_, In an older version of QEMU it can be `<source protocol='rbd' name='libvirt-pool/image:rbd_cache=true:rbd_cache_size=134217728:rbd_cache_max_dirty=125829120'>`
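One common workaround, regardless of qemu version (just a sketch, using the values from the question above): qemu's rbd driver reads ceph.conf, so the cache knobs can live in the [client] section on the hypervisor instead of in the disk XML.

    [client]
        rbd cache = true
        rbd cache size = 134217728
        rbd cache max dirty = 125829120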
[13:20] <vince> I need some help with radosgw, I have followed this tutorial http://ceph.com/docs/dumpling/start/quick-rgw/ and double-checked on the manual installation, but if I try to auth, I get a 404. If I simply try curl <hostname> I get the apache "It Works" reply
[13:20] <vince> I am on Ubuntu 14.04
[13:22] <singler_> kfei: sorry, I do not have experience with rbd caching (also my rbd host is on qemu 0.12...)
[13:22] <vince> here is my curl for authenticating: http://pastebin.com/yDWReE3D
[13:22] <vince> (I am using Swift API)
[13:23] <kfei> singler_, OK :p
[13:26] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[13:28] * jcsp (~john@176.12.107.140) Quit (Quit: Ex-Chat)
[13:29] * jcsp (~jcsp@0001bf3a.user.oftc.net) has joined #ceph
[13:31] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[13:31] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[13:34] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[13:35] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:36] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[13:37] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[13:38] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[13:39] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Quit: Leaving)
[13:41] * glambert (~glambert@78-33-123-23.static.enta.net) Quit (Ping timeout: 480 seconds)
[13:43] * vbellur (~vijay@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[13:53] * glambert (~glambert@78-33-123-23.static.enta.net) has joined #ceph
[13:53] * wschulze (~wschulze@80.149.32.4) has joined #ceph
[13:59] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[14:00] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[14:04] * wschulze (~wschulze@80.149.32.4) Quit (Quit: Leaving.)
[14:18] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[14:19] * jammcq (~jam@c-24-11-53-228.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[14:19] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[14:23] * The_Bishop_ (~bishop@f055210250.adsl.alicedsl.de) has joined #ceph
[14:24] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[14:24] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[14:27] * haomaiwang (~haomaiwan@124.161.76.154) Quit (Remote host closed the connection)
[14:27] * haomaiwang (~haomaiwan@li721-169.members.linode.com) has joined #ceph
[14:27] * glambert (~glambert@78-33-123-23.static.enta.net) Quit (Read error: Connection reset by peer)
[14:28] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[14:29] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Quit: Leaving)
[14:30] * The_Bishop (~bishop@f055011185.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[14:31] * Infitialis (~infitiali@194.30.182.18) Quit (Remote host closed the connection)
[14:31] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[14:33] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[14:35] * haomaiwa_ (~haomaiwan@124.161.76.154) has joined #ceph
[14:42] * haomaiwang (~haomaiwan@li721-169.members.linode.com) Quit (Ping timeout: 480 seconds)
[14:50] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[14:50] * ChanServ sets mode +v andreask
[14:55] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[15:02] <pmatulis> re logging, what does 'memory level' mean? ==> debug {subsystem} = {log-level}/{memory-level}
[15:11] <cronix> hi
[15:11] <cronix> my osd's are eating up 6GB of ram each atm
[15:12] <cronix> i don't think this is normal behaviour?
[15:12] <cronix> im in the process of restarting the whole cluster after setting it to nodown + noout
[15:13] <cronix> and now i need to kill OSD's or else the whole host would become unresponsive due to the massive amount of ram eaten by the OSD processes (firefly 0.80.1)
[15:14] <pmatulis> cronix: are those OSDs idle or is their data migration happening?
[15:14] <pmatulis> s/their/there
[15:14] <kraken> pmatulis meant to say: cronix: are those OSDs idle or is there data migration happening?
[15:14] <pmatulis> beat ya
[15:15] <cronix> since the cluster is set on nodown and noout i would assume that no migration is happening
[15:21] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[15:21] <singler_> cronix: pastebin "ceph -s"
[15:23] <cronix> http://pastebin.com/ccphpP76
[15:23] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[15:24] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[15:26] <cronix> i killed more OSD's seconds ago
[15:26] <cronix> and now the reaming ones are @8GB ram each
[15:26] <cronix> *remaining
[15:26] <singler_> cronix: how many osds per host?
[15:26] <cronix> if everything were fine and up and clean
[15:26] <cronix> there are 60 OSD's per host
[15:26] <cronix> with 4TB each
[15:27] <cronix> atm we're down to 40/host due to the massive ram consumption and we're considering killing the next 5
[15:27] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Quit: Leaving)
[15:27] <cronix> hosts have 192GB of ecc ram each
[15:28] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[15:28] <singler_> cronix: set noin, unset nodown. So your osd would be marked as down, but newly started OSDs would not join
[15:28] <cronix> example pmap of an OSD: total kB 8331420 6719876 6715804
[15:28] <singler_> I guess your cpu usage is high?
[15:28] <singler_> set noup
[15:28] <cronix> nope
[15:28] <kraken> http://i.imgur.com/ErtgS.gif
[15:28] <cronix> load average is 1.0
[15:29] <cronix> we have 32cores in each machine
[15:29] <cronix> mostly idling at 0.*%
[15:30] <cronix> http://img5.picload.org/image/lilailo/2014-06-13-152929_934x169_scro.png
[15:30] <cronix> htop screenshot of an osd server
[15:30] <singler_> other osd servers have similar load?
[15:30] <cronix> jep
[15:32] <singler_> I do not have experience at such a scale, but I would try starting osds with noup/noin, checking their memory consumption, and adding them to the cluster one by one
[15:33] <cronix> :O
[15:33] <cronix> we have thousands of osd's
[15:34] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) has joined #ceph
[15:34] <cronix> *cry*
[15:34] * vince (~quassel@160.85.231.230) Quit (Remote host closed the connection)
[15:34] <singler_> you could semi-automate it :)
[15:34] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[15:34] <cronix> lmao
[15:34] <singler_> but if memory usage would stabilize, you could recover
[15:35] <cronix> jeah
[15:35] <cronix> we've been hacking around on those issues for about a week now
[15:35] <singler_> maybe if osds start peering with each other at same time, they start using large amounts of memory
[15:35] <cronix> now a single OSD eats 9GB
[15:35] <cronix> that's insane
[15:36] <singler_> try unsetting nodown; if osds get marked down, maybe the recovery process would be easier
[15:36] <singler_> are OSDs flapping?
[15:40] <cronix> nope
[15:40] <kraken> http://i.imgur.com/ST9lw3U.gif
[15:40] <cronix> lol
[15:40] <cronix> kraken you are funny
[15:40] <cronix> total kB 17642280 14986988 14981004
[15:40] <singler_> kraken is a bot
[15:40] <cronix> A Single OSD eats up 17GB atm
[15:40] <alfredodeza> thanks kr0m
[15:40] <cronix> i smell a bug
[15:40] <alfredodeza> errr
[15:41] <alfredodeza> thanks kraken
[15:41] * kraken is overpowered by the brawny asseveration of worthy
[15:41] * ircolle (~Adium@2601:1:8380:2d9:6177:a369:8027:1ea4) has joined #ceph
[15:44] <cronix> i set noin and unset nodown
[15:45] <cronix> osd's are flying out of the cluster now
[15:46] <singler_> what about memory usage?
[15:47] <cronix> doesn't get better
[15:47] <cronix> still several GB per OSD
[15:47] * rpowell (~rpowell@128.135.219.215) has joined #ceph
[15:47] <cronix> did not decrease
[15:47] <janos_> i have no real basis for saying this, but that sounds like a bug to me
[15:47] <singler_> maybe it does not deallocate memory.. Try starting a new osd and check its memory
[15:48] <cronix> i have no memory left to start additional OSD's anymore
[15:49] <singler_> what about restarting all osd on one host?
[15:49] * dmsimard_away is now known as dmsimard
[15:52] * diegows (~diegows@186.61.107.85) has joined #ceph
[15:57] * hufman (~hufman@cpe-184-58-235-28.wi.res.rr.com) has joined #ceph
[16:01] <singler_> cronix: I have to go now, but I would like to hear from you whether you manage to solve this problem
[16:01] * markbby (~Adium@168.94.245.4) has joined #ceph
[16:03] <lesserevil> Is there a way to recreate monitor's data from working osd? I have the remnants of a cluster with working osd but no valid mon. Disk corruption hit both mon at the same time.
[16:03] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[16:04] * rpowell (~rpowell@128.135.219.215) has left #ceph
[16:06] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[16:11] * Pedras (~Adium@50.185.218.255) has joined #ceph
[16:11] * lpabon (~lpabon@66-189-8-115.dhcp.oxfr.ma.charter.com) has joined #ceph
[16:11] <lesserevil> I think I'm doomed, but I'd like to hear it from a dev just in case there is some way to rebuild the data.
[16:13] <janos_> that sounds pretty awful, lesserevil
[16:13] <lesserevil> it happens. I'm mostly pissed at myself for not having more mons set up.
[16:14] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[16:17] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[16:19] * vince (~quassel@160.85.231.230) has joined #ceph
[16:19] <vince> the solution to my problem was this: http://stackoverflow.com/a/19521307/528313, basically ubuntu-related
[16:20] <vince> it would be nice to have a pointer in the documentation
[16:20] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[16:24] <lesserevil> anyway, any suggestions are appreciated. Also, if it isn't possible to rebuild, that's good to know, too, so I can stop hoping.
[16:26] * diegows (~diegows@186.61.107.85) Quit (Ping timeout: 480 seconds)
[16:26] * tdasilva (~thiago@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) has joined #ceph
[16:27] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: Leaving)
[16:28] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[16:28] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[16:31] * vince__ (~quassel@160.85.122.247) has joined #ceph
[16:34] * vince (~quassel@160.85.231.230) Quit (Ping timeout: 480 seconds)
[16:35] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[16:36] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[16:39] * diegows (~diegows@186.61.107.85) has joined #ceph
[16:41] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[16:41] * blinky_ghost_ (~psousa@213.228.167.67) has joined #ceph
[16:44] * vince__ (~quassel@160.85.122.247) Quit (Ping timeout: 480 seconds)
[16:45] <blinky_ghost_> hi all, is it possible to access my ceph object storage from my client linux box as a fuse mountpoint?
[16:46] <lpabon> blinky_ghost_, do you mean, you want to access the objects located on the ceph cluster also from a fuse mount?
[16:48] * diegows (~diegows@186.61.107.85) Quit (Ping timeout: 480 seconds)
[16:48] <blinky_ghost_> lpabon: yes, a driver that, when I copy/download a file to the mount point, translates it into a put or a get on the object storage.
[16:51] * bandrus (~oddo@adsl-71-137-198-68.dsl.scrm01.pacbell.net) has joined #ceph
[16:52] <blinky_ghost_> lpabon: my idea is to use object storage for my remote endpoints to store data, and have a ceph mountpoint on the operating system to upload and retrieve files.
[16:52] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[16:54] <lpabon> blinky_ghost_, afaik, ceph does not support this
[16:55] * drankis_ (~drankis__@89.111.13.198) Quit (Remote host closed the connection)
[16:56] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[16:56] * ChanServ sets mode +v andreask
[16:57] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Read error: Operation timed out)
[16:58] <baylight> blinky_host_, aren't you essentially describing CephFS?
[16:59] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[17:00] <blinky_ghost_> baylight: maybe, but my endpoints are remote over a WAN, so I was thinking of using object storage, but my assumptions could be wrong :)
[17:03] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[17:06] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:07] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:08] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[17:09] <baylight> I'm certainly no expert, but it seems like that's what it's designed to do: translate posix calls to object storage. The wrinkle is the metadata server, which while not directly in the data path would certainly see hits from the endpoints.
[17:11] * Infitial_ (~infitiali@194.30.182.18) has joined #ceph
[17:13] <blinky_ghost_> baylight: thanks I'll look into it :)
[17:13] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[17:15] * Pedras (~Adium@50.185.218.255) has joined #ceph
[17:16] <pmatulis> re logging, what does 'memory level' mean? ==> debug {subsystem} = {log-level}/{memory-level}
[17:18] * Infitialis (~infitiali@194.30.182.18) Quit (Ping timeout: 480 seconds)
[17:19] * lofejndif (~lsqavnbok@tor-exit.server6.tvdw.eu) has joined #ceph
[17:19] * Infitial_ (~infitiali@194.30.182.18) Quit (Ping timeout: 480 seconds)
[17:21] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:33] * rendar (~I@host227-140-dynamic.59-82-r.retail.telecomitalia.it) has joined #ceph
[17:34] * lesserevil (~lesser.ev@99-72-217-92.lightspeed.stlsmo.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[17:44] * adamcrume (~quassel@2601:9:6680:47:6447:8c3c:110e:3aab) has joined #ceph
[17:45] * gford (~fford@93.93.251.146) has joined #ceph
[17:52] * eford (~fford@93.93.251.146) Quit (Ping timeout: 480 seconds)
[17:57] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:58] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[17:59] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[17:59] * vbellur (~vijay@122.167.103.182) has joined #ceph
[18:02] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[18:03] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) Quit ()
[18:04] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[18:04] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[18:06] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[18:07] * zerick (~eocrospom@190.114.249.148) has joined #ceph
[18:09] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:11] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[18:11] * dlan (~dennis@116.228.88.131) Quit (Ping timeout: 480 seconds)
[18:11] <joao> pmatulis, the level up to which the logging system will keep messages in memory, regardless of whether it writes them to the log file or not
[18:11] <joao> it's rather useful when some assert is triggered
[18:11] * thomnico (~thomnico@2a01:e35:8b41:120:1411:521a:4bff:6fd) Quit (Quit: Ex-Chat)
[18:12] <joao> as it will dump the last (5k or something) messages in memory
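In ceph.conf terms, a small illustration of the syntax pmatulis asked about; the subsystems and levels here are arbitrary examples, with the first number being what is written to the log file and the second the level kept in the in-memory buffer that gets dumped when an assert fires:

    [global]
        ; debug {subsystem} = {log-level}/{memory-level}
        debug ms  = 1/5     ; messenger: write level 1 to disk, keep level 5 in memory
        debug osd = 1/20    ; keep a very verbose in-memory history for OSD code
        debug mon = 1/10

    # the same levels can be changed at runtime, for example:
    ceph tell osd.0 injectargs '--debug-osd 1/20'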
[18:13] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:14] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Remote host closed the connection)
[18:14] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:14] <pmatulis> joao: thanks
[18:14] * Henson_D (~kvirc@206-248-167-80.dsl.teksavvy.com) has joined #ceph
[18:17] <Henson_D> Hi all, I have a question about Ceph's RBD performance for non-parallel IO situations. I have a small 2 OSD node cluster at home that I use for scalable filesystem storage. Its write performance is quite low because the journal drives are slow 30 MB/s drives, but the data drives are fast 90 MB/s 1 TB drives. The read performance, however, is quite poor, and I get about 20 MB/s.
[18:18] * Cube (~Cube@66.87.130.235) has joined #ceph
[18:19] <Henson_D> The nodes are connected with a gigabit network on the public and private networks, and they get about 950 Mbit between computers. I have increased the readahead values for the RBD devices, which gives slightly better performance. The OSD computers (which are also the computers mounting and using the RBD devices) are old 3 GHz Athlon X2 machines from the early 2000s. I'd like to know if my poor
[18:19] <Henson_D> RBD read speed is due to a misconfiguration in my ceph config, or just a consequence of my slow computers. A lot of ceph benchmarks I've looked at focus on parallelism, but I'm interested in non-parallel behaviour.
[18:20] <Henson_D> The filesystems I have on the RBD devices are EXT4, and I use dd and bonnie++ for doing the tests.
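For context, the readahead values Henson_D mentions are usually the krbd block-device knobs; a hedged sketch of that tweak and a simple sequential-read test (device name and values are illustrative):

    # raise readahead on a mapped RBD device (value in KB)
    echo 4096 | sudo tee /sys/block/rbd0/queue/read_ahead_kb

    # or the equivalent via blockdev (value in 512-byte sectors)
    sudo blockdev --setra 8192 /dev/rbd0

    # sequential read test against the raw device, bypassing the page cache
    sudo dd if=/dev/rbd0 of=/dev/null bs=4M count=1024 iflag=direct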
[18:21] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:22] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:25] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[18:25] * thomnico (~thomnico@2a01:e35:8b41:120:1411:521a:4bff:6fd) has joined #ceph
[18:25] <Pauline> Henson_D: I would do an iostat and some vmstating on one of the osd boxes while doing your read testing from a third machine
[18:28] <Henson_D> Pauline: I've tried so many things to figure out why it's so slow. Perhaps the most helpful thing for me would be to know that it's POSSIBLE for Ceph to achieve good performance in the case of serial RBD access with random 4k IO. Is ceph suited for this kind of thing, or will it never be able to perform very well? I would expect to get read rates of at least 90 MB/s given that each of
[18:28] <Pauline> What do you mean by "serial RBD"?
[18:28] <Henson_D> Pauline: the two hard drives can read data at 90 MB/s. However, I saw something in a post from Sebastian Han on his blog saying that Ceph doesn't read from multiple OSDs in parallel (like RAID0).
[18:29] <Henson_D> Pauline: I mean, only one client reading from the RBD device. So not making use of parallel IO.
[18:29] <Pauline> btw, the default blocksize for an rbd image is normally 4MB, so using 4K random IO might actually force it to move a lot of data around.
[18:30] <Pauline> oh. but unless you have a single OSD, the blocks will be placed in pgs which will be placed on (hopefully) different hosts.
[18:30] <Pauline> Ceph only reads from the primary osd afaik, yes.
[18:31] <JCL> Henson_D: Am I understanding correctly from your message that you mount the RBD image on the same node where the OSD is running?
[18:32] <Henson_D> Pauline: yeah. So is it unreasonable to expect high performance from a filesystem on an RBD device? I've tried all sorts of things to make the filesystem faster, using EXT4, XFS, BTRFS, software RAID over RBDs.
[18:32] <Henson_D> Pauline: good point about the PGs
[18:32] <Henson_D> JCL: that is correct.
[18:32] <JCL> Using KRBD?
[18:32] <Henson_D> JCL: yes
[18:32] <JCL> In a non virtualized environment?
[18:33] <Pauline> Henson_D: I can do roughly 40MB/s from a single VM (qemu, using krbd) but I can multiple of those on multiple machines.
[18:33] <Henson_D> JCL: both. I export a filesystem on an RBD device over NFS, and also have QEMU VMs that use RBDs as a RAW disk.
[18:33] <Pauline> s/can multiple/can do multiple/
[18:33] <kraken> Pauline meant to say: Henson_D: I can do roughly 40MB/s from a single VM (qemu, using krbd) but I can do multiple of those on multiple machines.
[18:34] <JCL> Just remember that mounting an RBD image in the OS instance where your OSD is running can result in a memory deadlock situation that will lead to your OSD freezing and then committing suicide.
[18:35] <Henson_D> Pauline: how large is your ceph cluster? Whenever I try to do parallel IO stuff on my filesystems, the performance is even worse than doing it in serial.
[18:35] <hufman> JCL: I've heard that, and i'm afraid of it, but i can't find that documented anywhere anymore
[18:35] <Pauline> 3 nodes with each 12x 2TB disks and 2x SSD for journal+os
[18:35] <Pauline> (also on a 1GE network)
[18:36] <JCL> Because I don't believe such a document exists. But I'm telling you I have seen it for real, and in our support subscriptions it is NOT supported
[18:36] <hufman> i know, i just want to show something to my bosses to back up my fears
[18:36] * analbeard (~shw@host86-155-196-30.range86-155.btcentralplus.com) has joined #ceph
[18:36] <Pauline> JCL: is that just for krbd or also librbd?
[18:37] <Henson_D> Pauline: so you have a decent cluster. Do you have an OSD for each of disks, or one for each computer?
[18:37] <Henson_D> s/each of disks/each of the disks/
[18:37] <kraken> Henson_D meant to say: Pauline: so you have a decent cluster. Do you have an OSD for each of the disks, or one for each computer?
[18:37] <Pauline> Henson_D: Per disk
[18:38] <Pauline> Henson_D: I did some testing and my sas backplane if fast enough to saturate all 12 disks at the same time.
[18:38] <Pauline> is*
[18:39] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:40] <Henson_D> Pauline: so in that case I'm surprised that you get only 40 MB/s for a single VM. Do you have any comments to help me understand why Ceph isn't able to produce higher throughput for a single VM? I understand there is the overhead of the OSDs doing computations, fetching things from the backing disks, latency of the network, etc. Do all of these things add up to limit your IO speed?
[18:41] * Infitialis (~infitiali@5ED48E69.cm-7-5c.dynamic.ziggo.nl) has joined #ceph
[18:42] * analbeard (~shw@host86-155-196-30.range86-155.btcentralplus.com) Quit (Quit: Leaving.)
[18:43] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:43] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:43] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[18:43] <JCL> Henson_D: What type of 1TB disk drive do you have? As you mention the chassis are over 10 years old, are they SATA drives? 5400 or 7200 RPM drives?
[18:43] <Pauline> Henson_D: I'm not entirely sure myself. I would have expected more single-reader throughput, but as the cluster has ok-ish throughput (gut feeling here) in my setup with ~70 VMs, I'm not going to rock the boat much :P
[18:43] * Infitialis (~infitiali@5ED48E69.cm-7-5c.dynamic.ziggo.nl) Quit ()
[18:44] <JCL> Other thing is that if you are using a small IO block size, you may consider using a smaller object size in your RBD images to try to better align data. You do not have to use 4MB objects. It's just the default.
[18:44] <Henson_D> JCL: Western Digital 1 TB 7200 rpm drives. They get about 90 MB/s read/write speed.
[18:45] <JCL> The problem is not the bandwidth, the problem will be latency and the type of interface: SATA or not SATA?
[18:45] <JCL> SAS or not SAS
[18:45] <Henson_D> JCL: really!? I didn't know that could be changed. How do you specify the object size?
[18:45] <Henson_D> JCL: they're SATA drives
[18:46] <Pauline> use --order on the rbd cmd.
[18:47] <JCL> OK so those drives will give you no more than 80-100 IOPS in random mode per device. That's the figure generally used when perf assessments are conducted in storage.
[18:47] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[18:47] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[18:47] <JCL> If your IOs are 4K, use --order 16 or --order 17 and give it another try.
[18:48] <JCL> Then your problem, given your disk drives, is that you do not have enough // processing combining multiple disk drives.
[18:48] <Henson_D> JCL: does "// processing" mean "parallel processing"
[18:48] <JCL> Yes
[18:50] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[18:52] * Gamekiller77 (~Gamekille@128-107-239-233.cisco.com) has joined #ceph
[18:52] <Henson_D> JCL: so if I'm using some RBD devices with 4k filesystems, then to achieve higher performance I should make smaller object sizes? Also, having many OSDs to read/write from in parallel will boost the performance.
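To make the --order suggestion above concrete: order is the base-2 log of the object size, so the default 22 means 4 MB objects, 16 means 64 KB and 17 means 128 KB. A sketch with made-up pool and image names:

    # create test images with different object sizes (--size is in MB here)
    rbd create --size 10240 --order 22 rbd/test-4m     # 4 MB objects (the default)
    rbd create --size 10240 --order 16 rbd/test-64k    # 64 KB objects
    rbd create --size 10240 --order 17 rbd/test-128k   # 128 KB objects

    # confirm the object size chosen for an image
    rbd info rbd/test-64k

    # map one and run the same benchmark against each image in turn
    sudo rbd map rbd/test-64k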
[18:56] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:56] * The_Bishop__ (~bishop@f055108224.adsl.alicedsl.de) has joined #ceph
[19:00] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:01] * Sysadmin88 (~IceChat77@94.4.22.173) has joined #ceph
[19:02] * The_Bishop_ (~bishop@f055210250.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[19:06] * kwaegema (~kwaegema@daenerys.ugent.be) Quit (Ping timeout: 480 seconds)
[19:06] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[19:11] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:12] * jharley (~jharley@192.171.39.220) has joined #ceph
[19:16] <JCL> Henson_D : The file system is one thing, your application IO size is another one. On top, file systems will try to coalesce IOs and will have some sort of cache mechanism that can influence the behavior of some applications with some IO profiles. But anyway, for what you describe, I would recommend trying a smaller order for the RBD images. But I'm an outsider with only a portion of the view.
[19:17] <JCL> My strongest belief is that with the reduced config you have, what you can achieve is a functional demo/POC showing that Ceph can be a storage backend supporting the functional reqs you may have. Which is already important.
[19:18] <JCL> Anyway, make a test with --order 16, then 17, then 18 and see what works best for YOUR config.
[19:19] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[19:19] <JCL> You can also use rados bench and rbd bench or fio to get a quicker view of the changes. fio will use librbd though, not krbd if I remember right
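Sketches of the benchmarks JCL lists; pool and image names are placeholders, and the fio stanza assumes a fio build that includes the rbd engine:

    # raw RADOS throughput: 60 s of writes, then sequential reads of those objects
    rados bench -p rbd 60 write --no-cleanup
    rados bench -p rbd 60 seq
    rados -p rbd cleanup                     # remove the benchmark objects afterwards

    # rbd's built-in write benchmark
    rbd bench-write rbd/test-64k

    # a minimal fio job file using librbd (save as rbd-read.fio, run: fio rbd-read.fio)
    #   [rbd-randread]
    #   ioengine=rbd
    #   clientname=admin
    #   pool=rbd
    #   rbdname=test-64k
    #   rw=randread
    #   bs=4k
    #   iodepth=16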
[19:23] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[19:24] * sleinen (~Adium@2001:620:0:26:dd11:8346:2d57:72ac) Quit (Quit: Leaving.)
[19:27] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[19:27] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit ()
[19:28] * erice (~erice@50.240.86.181) Quit (Remote host closed the connection)
[19:31] * rendar (~I@host227-140-dynamic.59-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[19:32] <jharley> howdy, folks. Can someone point me in the direction of a doc that tells me what cephx permissions the 'restapi' account should have?
[19:34] * tdb (~tdb@willow.kent.ac.uk) Quit (Quit: leaving)
[19:36] * angdraug (~angdraug@12.164.168.117) Quit (Remote host closed the connection)
[19:38] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:38] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:39] <jharley> 'mon allow *' and 'osd allow *'? or is that way too much?
[19:40] * Gamekiller77 is now known as GK77_afk
[19:42] <gregsfortytwo> that's probably accurate, but I don't think we have that kind of user documentation around the rest api yet
[19:42] <gregsfortytwo> it gives you full management access though, so it needs to be an admin user
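For the record, a sketch of creating such a key; client.restapi is just the commonly used entity name for ceph-rest-api, and per gregsfortytwo's comment the capabilities effectively make it an admin user:

    # create (or fetch) the key with full mon/osd/mds capabilities
    ceph auth get-or-create client.restapi \
        mon 'allow *' osd 'allow *' mds 'allow *' \
        -o /etc/ceph/ceph.client.restapi.keyring

    # double-check what was granted
    ceph auth get client.restapi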
[19:43] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:44] * diegows (~diegows@190.190.5.238) has joined #ceph
[19:45] <jharley> @gregsfortytwo I can't find it if you do
[19:45] <cephalobot`> jharley: Error: "gregsfortytwo" is not a valid command.
[19:45] <jharley> those two options seem to do it, tho
[19:45] <jharley> ( well, it starts anyway )
[19:49] <tdasilva> hello, I'm trying to set up a ceph cluster following the "Quick Start" guide but I'm getting stuck on this command: ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1. I get an error: "No data was received after 300 seconds"... wondering if anybody has any ideas... Here's the log: http://pastebin.com/aW2bR6xf
[19:52] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[19:56] * dereky (~derek@129-2-129-154.wireless.umd.edu) has joined #ceph
[19:56] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:56] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:58] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:58] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:58] <Henson_D> JCL: thanks a lot for your suggestions. I think for the kind of system that is practical for me to run at home, building a multi-disk ZFS NAS would give me the performance and storage space I need. Ceph is totally cool, but I don't think it's suited to my small-scale usage case.
[20:00] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[20:00] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[20:03] * sleinen1 (~Adium@2001:620:0:26:8c02:c8c7:b3f1:b558) has joined #ceph
[20:04] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[20:04] * markbby (~Adium@168.94.245.4) has joined #ceph
[20:07] * lalatenduM (~lalatendu@122.167.47.102) has joined #ceph
[20:08] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:08] * rotbeard (~redbeard@2a02:908:df11:9480:76f0:6dff:fe3b:994d) has joined #ceph
[20:11] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[20:12] * tracphil (~tracphil@130.14.71.217) Quit ()
[20:12] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[20:15] * sanstorm (sanstorm@49.204.218.126) has joined #ceph
[20:15] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[20:15] * Henson_D (~kvirc@206-248-167-80.dsl.teksavvy.com) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[20:16] <sanstorm> Hi
[20:16] * Henson_D (~kvirc@206.248.167.80) has joined #ceph
[20:16] <sanstorm> I need some information about deploying ceph in an existing environment
[20:16] <sanstorm> can someone guide me through that
[20:16] <Henson_D> Pauline: thank you as well for your help and suggestions
[20:16] * blinky_ghost_ (~psousa@213.228.167.67) Quit (Quit: Ex-Chat)
[20:16] * Henson_D (~kvirc@206.248.167.80) Quit ()
[20:16] <Sysadmin88> might want to give some information about your environment so people can help you
[20:17] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[20:17] <Pauline> Henson_D: yw
[20:18] <sanstorm> awesome....my environment has more than 4000 vms, 1500 physical servers, more than 40 IBM XIV, 10 VMAX and 10 VNX arrays. Total capacity is around 10 PB. We also have a large Isilon environment
[20:19] <sanstorm> Will this information help?
[20:20] * Meths_ is now known as Meths
[20:23] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:24] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:25] <sanstorm> Hi Admin, can you help me with my query ?
[20:27] <eqhmcow> sanstorm: what hardware are you thinking about using, and what goals are you looking to achieve?
[20:36] <sanstorm> we would like to host a private scalable cloud. About storage hardware, can we use existing infra or do we need to get new arrays?
[20:37] * ScOut3R (~ScOut3R@5401C5FF.dsl.pool.telekom.hu) has joined #ceph
[20:39] <eqhmcow> so you want to deploy ceph RBD ? if you can run linux on your existing hardware then you can run ceph on it, yes
[20:39] <JCL> Henson_d: yw
[20:39] <Sysadmin88> but you may want to define how much performance you want... so you can see if the hardware is capable of what you expect
[20:40] <darkfader> i'll take one of those xiv ;)
[20:44] <sanstorm> Excuse my layman knowledge in these areas. Yes, I would like to deploy RBD, and about performance, I am interested in scaling the model as the business grows, so scalability and performance are the criteria.
[20:44] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[20:45] * ScOut3R (~ScOut3R@5401C5FF.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[20:45] <sanstorm> Wish I could have given you an XIV... I'm also bored with that...
[20:49] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[20:50] <koleosfuscus> hi, my cluster is giving HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean
[20:50] <koleosfuscus> any idea why?
[20:52] <koleosfuscus> is it because the cluster didn't finish peering?
[20:52] <Pauline> try: https://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#stuck-placement-groups
[20:53] <koleosfuscus> thanks Pauline
[20:53] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[20:53] <koleosfuscus> i just reinstalled everything, not sure if i should wait some time
[20:54] <koleosfuscus> i mean to get ceph health without warnings
[20:54] * number80 (~80@218.54.128.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[20:55] <Pauline> if ceph -w does not show any progress, then waiting longer is prob not going to work
[20:55] <Pauline> s/work/help much/
[20:55] <kraken> Pauline meant to say: if ceph -w does not show any progress, then waiting longer is prob not going to help much
[20:56] <Pauline> indeed i did
[20:56] <lupu> koleosfuscus: do you have a single storage node ?
[20:56] * lpabon (~lpabon@66-189-8-115.dhcp.oxfr.ma.charter.com) Quit (Quit: Leaving)
[20:57] <koleosfuscus> no, i have 5 vm
[20:57] <koleosfuscus> one admin, one mon, 3 osd
[20:58] <Pauline> on one host?
[20:58] <koleosfuscus> but actually i activated 2 osd
[20:58] <koleosfuscus> yes
[20:58] <Pauline> you need to edit your crushmap
[20:58] <lupu> if so you should add a second node or edit the crushmap
[20:58] * lpabon (~quassel@66-189-8-115.dhcp.oxfr.ma.charter.com) has joined #ceph
[20:58] * tdasilva (~thiago@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) Quit (Remote host closed the connection)
[20:59] <lupu> default replication is between storage nodes (hosts), not between osds (hdds)
[20:59] <Pauline> koleosfuscus: actually, that's all at the top of the link I sent ya. "One Node Cluster"
[20:59] * number80 (~80@218.54.128.77.rev.sfr.net) has joined #ceph
[21:00] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:00] <koleosfuscus> i see… thanks
[21:01] * Pauline shivers "even though that doc sucks... it talks about numbers, whereas my crushmap has names for chooseleaf type"
[21:01] * tdasilva (~thiago@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) has joined #ceph
[21:02] <koleosfuscus> but it says that i should modify the chooseleaf type before creating the monitors and osd
[21:02] <koleosfuscus> should i have to do everything again?
[21:03] <Pauline> nope. you can just edit the crushmap
[21:03] <Pauline> it will rebalance
[21:03] <koleosfuscus> i am not sure if that is correct, i have a virtualized environment (5 nodes, in other words, 5 vm)
[21:04] <koleosfuscus> is the node the real machine or the guest?
[21:04] <Pauline> storage nodes == the hosts your osds run on...
[21:04] <Pauline> you said you have one of those.
[21:04] <koleosfuscus> no i have 5 VM
[21:05] <Pauline> what does a VM have to do with OSD? those are the ceph clients.
[21:05] <Sysadmin88> maybe he's trying ceph on VMs instead of hardware
[21:06] <Pauline> are you running osds in your VMs, koleosfuscus ?
[21:06] <koleosfuscus> exactly
[21:06] <koleosfuscus> i have 3 partitions for the data
[21:06] <Pauline> then the storage nodes in your setup are all those VMs which run osds
[21:07] <lupu> koleosfuscus: ceph osd tree
[21:07] <Pauline> (and your performance will be horrible)
[21:08] <koleosfuscus> i don't care about performance, i want to toy with ceph and check how it works,
[21:08] <koleosfuscus> eventually i will move to a real cluster, but i don't have access to such resources at the moment
[21:09] <koleosfuscus> i am having plenty of problems because of the lack of resources, i know. it is absolutely not easy to install ceph in a tiny sandbox environment
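To tie the crushmap advice above together, a sketch of the two usual ways to get a single-box test cluster (or one where all OSDs effectively share a host) past stuck/incomplete PGs; file names are arbitrary, and the ceph.conf option only helps if set before the cluster is deployed:

    # Option 1: for a fresh test cluster, put this in ceph.conf before deploying,
    # so the default CRUSH rule spreads replicas across OSDs instead of hosts:
    #   [global]
    #   osd crush chooseleaf type = 0

    # Option 2: edit the CRUSH map of a live cluster in place
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    #   in crush.txt change "step chooseleaf firstn 0 type host"
    #                    to "step chooseleaf firstn 0 type osd"
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new

As Pauline says above, the cluster rebalances on its own once the new map is injected.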
[21:13] * gregsfortytwo1 (~Adium@38.122.20.226) has joined #ceph
[21:15] * rweeks (~goodeats@192.169.20.75.static.etheric.net) has joined #ceph
[21:15] <rweeks> scuttlemonkey: blueprint submitted
[21:16] <scuttlemonkey> rweeks: was just reading it
[21:16] <scuttlemonkey> :)
[21:17] <rweeks> awesome
[21:17] <rweeks> feedback highly appreciated
[21:17] <scuttlemonkey> I think this should be plenty to get the conversation started
[21:17] <scuttlemonkey> we'll just be sure to take accurate notes in the etherpad
[21:17] <rweeks> sweet
[21:18] * qhartman_ (~qhartman@75-151-85-53-Colorado.hfc.comcastbusiness.net) has joined #ceph
[21:18] <rweeks> what's the deal with the BlueJeans video? I just read that on the wiki
[21:18] * tdasilva (~thiago@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) Quit (Remote host closed the connection)
[21:19] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[21:19] * qhartman (~qhartman@75-151-85-53-Colorado.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[21:19] <scuttlemonkey> yeah, I'll probably run some tests next week
[21:19] <scuttlemonkey> it's a video conferencing tool
[21:19] <scuttlemonkey> http://bluejeans.com/
[21:19] <scuttlemonkey> will allow me to have 100 people in a single call
[21:19] <rweeks> I'm not on the mailing list, but I'd be happy to help test
[21:20] <rweeks> ah, nice
[21:20] <scuttlemonkey> cool
[21:20] <Sysadmin88> anyone using the read/write cache pool yet? i have something that would be ideal for my environment but not sure it will work yet
[21:22] * GK77_afk is now known as Gamekiller77
[21:26] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) Quit (Quit: Leaving)
[21:27] * diegows (~diegows@190.190.5.238) has joined #ceph
[21:29] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:29] * tdasilva (~thiago@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) has joined #ceph
[21:31] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:34] <tdasilva> hello...looking for some help with setting up a cluster following the quick start guide...anybody?
[21:34] * jharley (~jharley@192.171.39.220) Quit (Quit: jharley)
[21:34] * sputnik13 (~sputnik13@207.8.121.241) Quit ()
[21:36] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:36] <tdasilva> after running command: sudo ceph-disk-activate --mark-init sysvinit --mount /var/local/osd0
[21:36] <tdasilva> get this error: No data was received after 300 seconds, disconnecting...
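Not a definitive fix, but the usual first checks when a directory-backed "ceph-deploy osd activate" hangs with that timeout; <mon-host> is a placeholder, and the paths match the quick-start layout used above:

    # on the OSD node, rerun the activation by hand and watch its output
    sudo ceph-disk-activate --mark-init sysvinit --mount /var/local/osd0

    # confirm ceph-deploy really pushed the config and the bootstrap-osd key
    ls -l /etc/ceph/ceph.conf /var/lib/ceph/bootstrap-osd/ceph.keyring

    # make sure the monitor is reachable from this node (firewalls are a common cause)
    nc -zv <mon-host> 6789

    # and watch the local ceph logs while the command appears to hang
    tail -f /var/log/ceph/*.log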
[21:39] * thomnico (~thomnico@2a01:e35:8b41:120:1411:521a:4bff:6fd) Quit (Quit: Ex-Chat)
[21:42] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:43] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:43] * gregsfortytwo1 (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[21:49] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:50] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:50] * lalatenduM (~lalatendu@122.167.47.102) Quit (Ping timeout: 480 seconds)
[21:53] * jdmason (~jon@134.134.137.73) has joined #ceph
[21:54] * tracphil (~tracphil@130.14.71.217) Quit (Quit: leaving)
[21:55] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:58] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[21:58] * danieagle (~Daniel@179.178.77.57.dynamic.adsl.gvt.net.br) has joined #ceph
[22:03] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[22:03] * ChanServ sets mode +v andreask
[22:05] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[22:06] * sanstorm (sanstorm@49.204.218.126) has left #ceph
[22:09] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:10] * rendar (~I@host227-140-dynamic.59-82-r.retail.telecomitalia.it) has joined #ceph
[22:10] * rendar (~I@host227-140-dynamic.59-82-r.retail.telecomitalia.it) Quit ()
[22:15] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[22:18] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[22:19] * lpabon (~quassel@66-189-8-115.dhcp.oxfr.ma.charter.com) Quit (Read error: Operation timed out)
[22:23] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:24] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[22:27] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) has joined #ceph
[22:27] * ChanServ sets mode +o scuttlemonkey
[22:37] * allsystemsarego (~allsystem@86.121.2.97) Quit (Quit: Leaving)
[22:40] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[22:42] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:43] * DV (~veillard@libvirt.org) has joined #ceph
[22:44] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[22:45] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[22:50] * danieagle (~Daniel@179.178.77.57.dynamic.adsl.gvt.net.br) Quit (Quit: Thanks for everything! :-) see you later :-))
[22:52] * dereky (~derek@129-2-129-154.wireless.umd.edu) Quit (Quit: dereky)
[22:59] * ircolle (~Adium@2601:1:8380:2d9:6177:a369:8027:1ea4) Quit (Quit: Leaving.)
[22:59] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[23:05] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[23:06] * tdasilva (~thiago@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) has left #ceph
[23:11] * ikrstic (~ikrstic@178-221-100-206.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[23:14] * KaZeR (~kazer@64.201.252.132) Quit (Ping timeout: 480 seconds)
[23:19] * rweeks (~goodeats@192.169.20.75.static.etheric.net) Quit (Quit: Leaving)
[23:24] * haomaiwa_ (~haomaiwan@124.161.76.154) Quit (Ping timeout: 480 seconds)
[23:24] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[23:29] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[23:31] * ScOut3R (~ScOut3R@5401C5FF.dsl.pool.telekom.hu) has joined #ceph
[23:31] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[23:32] * tdb (~tdb@willow.kent.ac.uk) has joined #ceph
[23:35] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Quit: sync && halt)
[23:36] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[23:36] * ScOut3R_ (~ScOut3R@netacc-gpn-7-136-171.pool.telenor.hu) has joined #ceph
[23:37] * Nacer (~Nacer@2a01:e35:8b92:70c0:60d7:5218:c143:8dbd) has joined #ceph
[23:38] * dneary (~dneary@87-231-145-225.rev.numericable.fr) Quit (Read error: Operation timed out)
[23:38] * ScOut3R_ (~ScOut3R@netacc-gpn-7-136-171.pool.telenor.hu) Quit (Read error: Connection reset by peer)
[23:38] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[23:39] * Nacer_ (~Nacer@lattes.rosello.eu) has joined #ceph
[23:39] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[23:43] * ScOut3R (~ScOut3R@5401C5FF.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[23:44] * hufman (~hufman@cpe-184-58-235-28.wi.res.rr.com) Quit (Quit: leaving)
[23:44] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) Quit (Quit: Leaving)
[23:45] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[23:45] * Nacer (~Nacer@2a01:e35:8b92:70c0:60d7:5218:c143:8dbd) Quit (Ping timeout: 480 seconds)
[23:53] * bandrus1 (~Adium@98.238.176.251) has joined #ceph
[23:59] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.