#ceph IRC Log

IRC Log for 2014-04-03

Timestamps are in GMT/BST.

[0:05] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:06] * dmsimard (~Adium@70.38.0.246) Quit (Quit: Leaving.)
[0:07] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[0:09] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[0:17] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[0:22] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (Read error: Connection reset by peer)
[0:23] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[0:23] * BillK (~BillK-OFT@106-69-94-85.dyn.iinet.net.au) has joined #ceph
[0:23] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[0:23] * mtanski (~mtanski@69.193.178.202) Quit ()
[0:28] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:30] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:30] * haomaiwang (~haomaiwan@118.187.35.6) has joined #ceph
[0:36] * ircolle (~Adium@2601:1:8380:2d9:4803:934a:7a39:e56) Quit (Quit: Leaving.)
[0:38] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[0:38] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[0:45] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[0:48] * c74d3 (~c74d3a4eb@2002:4404:712c:0:76de:2bff:fed4:2766) Quit (Read error: Connection reset by peer)
[0:49] * c74d3 (~c74d3a4eb@2002:4404:712c:0:76de:2bff:fed4:2766) has joined #ceph
[0:49] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:50] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:55] * dmick (~dmick@2607:f298:a:607:e003:851c:ba77:20d1) Quit (Remote host closed the connection)
[1:00] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[1:01] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:01] * dmick (~dmick@2607:f298:a:607:39ed:84cc:5a53:e057) has joined #ceph
[1:04] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[1:07] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Read error: Operation timed out)
[1:08] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[1:16] * Nats_ (~Nats@telstr575.lnk.telstra.net) Quit (Read error: Connection reset by peer)
[1:16] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[1:16] * fdmanana (~fdmanana@bl9-168-27.dsl.telepac.pt) Quit (Quit: Leaving)
[1:18] * Gamekiller77 (~Gamekille@128-107-239-234.cisco.com) Quit (Quit: Leaving)
[1:26] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[1:27] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Read error: Operation timed out)
[1:31] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[1:42] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[1:44] <aarontc> I'd like to suggest the Ceph manual be updated with a section about logging - what config variables can be set, which daemons they affect, and how to decode the output
[1:47] * dmsimard (~Adium@108.163.152.66) has joined #ceph
[1:47] <aarontc> (Actually, the troubleshooting/log-and-debug page has all the variables, but comprehensive decoding information and a more in-depth explanation of the various variables would be nice)
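
For context, a minimal sketch of how the knobs on that troubleshooting/log-and-debug page are typically applied; the daemon name "osd.0" and the levels chosen here are only illustrative:

    # raise debug verbosity on a running daemon without restarting it
    ceph tell osd.0 injectargs '--debug-osd 10 --debug-ms 1'

    # or persistently, e.g. under [osd] in ceph.conf:
    #   debug osd = 10
    #   debug ms = 1
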
[1:48] * Nats (~Nats@telstr575.lnk.telstra.net) has joined #ceph
[1:48] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[1:49] * zack_dolby (~textual@e0109-49-132-42-109.uqwimax.jp) has joined #ceph
[1:50] * Cube (~Cube@66-87-65-77.pools.spcsdns.net) has joined #ceph
[1:56] * zack_dol_ (~textual@e0109-49-132-42-109.uqwimax.jp) has joined #ceph
[1:56] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Remote host closed the connection)
[1:57] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[1:58] * Pedras1 (~Adium@216.207.42.132) Quit (Read error: Operation timed out)
[2:02] * zack_dolby (~textual@e0109-49-132-42-109.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[2:05] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:08] * zack_dol_ (~textual@e0109-49-132-42-109.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[2:10] * zack_dolby (~textual@e0109-49-132-42-109.uqwimax.jp) has joined #ceph
[2:13] * KaZeR (~KaZeR@64.201.252.132) Quit (Remote host closed the connection)
[2:18] * zack_dolby (~textual@e0109-49-132-42-109.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[2:18] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:19] * LeaChim (~LeaChim@host86-162-2-97.range86-162.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:19] * KaZeR (~KaZeR@172.56.38.194) has joined #ceph
[2:20] * sz0 (~user@208.72.139.54) has joined #ceph
[2:22] * zack_dolby (~textual@pw126253200185.6.panda-world.ne.jp) has joined #ceph
[2:23] * zack_dolby (~textual@pw126253200185.6.panda-world.ne.jp) Quit (Read error: Connection reset by peer)
[2:31] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[2:35] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit (Quit: Leaving.)
[2:36] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[2:36] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:40] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[2:44] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[2:46] * Nats (~Nats@telstr575.lnk.telstra.net) Quit (Read error: Connection reset by peer)
[2:47] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[2:47] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[2:48] * Cube (~Cube@66-87-65-77.pools.spcsdns.net) Quit (Quit: Leaving.)
[2:51] * Nats (~Nats@telstr575.lnk.telstra.net) has joined #ceph
[2:57] * KaZeR (~KaZeR@172.56.38.194) Quit (Remote host closed the connection)
[2:59] * zack_dol_ (~textual@em114-51-144-15.pool.e-mobile.ne.jp) has joined #ceph
[3:00] * shang (~ShangWu@175.41.48.77) has joined #ceph
[3:02] * dmsimard (~Adium@108.163.152.66) Quit (Quit: Leaving.)
[3:04] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[3:04] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[3:05] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[3:05] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[3:06] * fghaas (~florian@sccc-66-78-236-243.smartcity.com) has joined #ceph
[3:07] * zack_do__ (~textual@em114-51-144-15.pool.e-mobile.ne.jp) has joined #ceph
[3:07] * zack_dol_ (~textual@em114-51-144-15.pool.e-mobile.ne.jp) Quit (Read error: Connection reset by peer)
[3:09] * JeffK (~Jeff@38.99.52.10) Quit (Quit: JeffK)
[3:13] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[3:15] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[3:18] * Cube (~Cube@66-87-65-77.pools.spcsdns.net) has joined #ceph
[3:18] * sz0 (~user@208.72.139.54) Quit (Remote host closed the connection)
[3:24] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[3:25] * fghaas (~florian@sccc-66-78-236-243.smartcity.com) Quit (Quit: Leaving.)
[3:26] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[3:27] * Cube (~Cube@66-87-65-77.pools.spcsdns.net) Quit (Read error: Operation timed out)
[3:33] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:37] * diegows_ (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:43] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Read error: Connection reset by peer)
[3:43] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[3:49] * bandrus (~Adium@173.245.93.182) has joined #ceph
[3:49] * bandrus (~Adium@173.245.93.182) Quit ()
[3:57] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[3:58] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[4:02] * haomaiwang (~haomaiwan@118.187.35.6) Quit (Remote host closed the connection)
[4:02] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[4:04] * zack_do__ (~textual@em114-51-144-15.pool.e-mobile.ne.jp) Quit (Ping timeout: 480 seconds)
[4:08] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[4:08] * Cube (~Cube@66-87-65-77.pools.spcsdns.net) has joined #ceph
[4:10] * Boltsky (~textual@office.deviantart.net) Quit (Read error: Operation timed out)
[4:16] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:18] * bitblt (~don@128-107-239-234.cisco.com) Quit (Ping timeout: 480 seconds)
[4:19] * haomaiwa_ (~haomaiwan@117.79.232.243) has joined #ceph
[4:20] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:26] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[4:36] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[4:41] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[4:51] * joshd (~joshd@2607:f298:a:607:3d90:8809:39c9:179a) Quit (Ping timeout: 480 seconds)
[4:54] * fedgoatbah (~fedgoat@cpe-24-28-22-21.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[4:54] * Guest5227 (~jeremy@ip23.67-202-99.static.steadfastdns.net) Quit (Remote host closed the connection)
[4:55] * tchmnkyz (~jeremy@ip23.67-202-99.static.steadfastdns.net) has joined #ceph
[4:55] * tchmnkyz is now known as Guest5275
[4:59] * joshd (~joshd@2607:f298:a:607:215f:ed7b:23ca:3763) has joined #ceph
[5:00] * sz0 (~user@208.72.139.54) has joined #ceph
[5:01] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[5:03] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[5:06] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[5:07] * thanhtran (~thanhtran@123.30.135.76) has joined #ceph
[5:08] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[5:09] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[5:11] * Vacum_ (~vovo@i59F79988.versanet.de) has joined #ceph
[5:12] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[5:16] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:18] * Vacum (~vovo@i59F793BB.versanet.de) Quit (Ping timeout: 480 seconds)
[5:22] * JoeGruher (~JoeGruher@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[5:22] * Cube (~Cube@66-87-65-77.pools.spcsdns.net) Quit (Quit: Leaving.)
[5:24] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:40] * chris_lu_ (~ccc2@bolin.Lib.lehigh.EDU) Quit (Ping timeout: 480 seconds)
[5:56] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[5:59] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[6:03] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[6:03] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[6:07] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[6:07] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[6:11] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:16] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[6:17] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[6:17] * thanhtran (~thanhtran@123.30.135.76) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[6:21] * shang (~ShangWu@175.41.48.77) has joined #ceph
[6:27] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:37] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[6:44] * julian (~julianwa@125.70.132.28) has joined #ceph
[6:55] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[6:57] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:59] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[7:02] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[7:10] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:21] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[7:21] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[7:22] * julian (~julianwa@125.70.132.28) Quit (Quit: afk)
[7:25] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Bye!)
[7:29] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[7:32] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[7:33] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[7:37] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[7:40] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[7:40] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[7:40] * haomaiwa_ (~haomaiwan@117.79.232.243) Quit (Remote host closed the connection)
[7:40] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[7:41] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[7:45] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[7:51] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[7:53] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[8:00] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[8:01] * haomaiwa_ (~haomaiwan@117.79.232.153) has joined #ceph
[8:08] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[8:09] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Read error: Operation timed out)
[8:13] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[8:15] <jerker> Anyone running Owncloud with RadosGW/Ceph backend?
[8:16] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[8:21] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:24] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[8:32] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[8:40] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[8:47] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit (Quit: Leaving.)
[8:47] * BillK (~BillK-OFT@106-69-94-85.dyn.iinet.net.au) Quit (Read error: Operation timed out)
[8:47] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[8:48] * BillK (~BillK-OFT@106-69-32-229.dyn.iinet.net.au) has joined #ceph
[8:53] <aarontc> jerker: I was running owncloud with an RBD backend for a while
[8:53] <aarontc> then my cluster blew up :( heh
[8:56] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[9:01] <vilobhmm> aarontc: i am trying to use qemu-img to create image on rbd backend
[9:02] <vilobhmm> but somehow i am not able to create the image
[9:02] <vilobhmm> qemu-img: error connecting
[9:02] <vilobhmm> qemu-img: rbd:rbd/foo: Could not create image: Input/output error
[9:02] <aarontc> vilobhmm: I've never done that, but you probably need to make sure you have /etc/ceph/ceph.conf configured correctly as well as client keys in place
[9:02] <vilobhmm> can you share more details
[9:03] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: Download IceChat at www.icechat.net)
[9:03] <aarontc> vilobhmm: https://ceph.com/docs/master/rbd/libvirt/
[9:03] <vilobhmm> i have copied the ceph.conf from the mon node to the node where qemu is running
[9:03] <vilobhmm> as well, the client.admin keyring on the ceph mon and the client are the same
[9:04] <aarontc> qemu has to be told what keyring to use, or something along those lines
[9:04] <aarontc> I don't know the details offhand
[9:04] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[9:04] * muhanpong (~povian@kang.sarang.net) Quit (Read error: Connection reset by peer)
[9:04] * muhanpong (~povian@kang.sarang.net) has joined #ceph
[9:04] <vilobhmm> ok
[9:04] <vilobhmm> how to tell qemu which keyring to use ?
[9:05] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Remote host closed the connection)
[9:05] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[9:05] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (Read error: Connection reset by peer)
[9:05] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[9:06] <aarontc> vilobhmm: I don't know, never done it. Try the ceph docs :)
[9:06] <vilobhmm> haha okay
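
For reference, a minimal sketch of pointing qemu-img at a specific Ceph client and conf file through the rbd option string, roughly following the Ceph RBD/QEMU docs of that era (the pool/image "rbd/foo" comes from the error pasted above; the client id, paths and size are assumptions):

    # the keyring is normally located via ceph.conf or the default /etc/ceph paths
    qemu-img create -f raw "rbd:rbd/foo:id=admin:conf=/etc/ceph/ceph.conf" 10G
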
[9:06] * c74d_ (~c74d@2002:4404:712c:0:76de:2bff:fed4:2766) has joined #ceph
[9:06] * wmat (wmat@wallace.mixdown.ca) Quit (Read error: Connection reset by peer)
[9:06] * wmat (wmat@wallace.mixdown.ca) has joined #ceph
[9:07] * wmat is now known as Guest5303
[9:07] * c74d3 (~c74d3a4eb@2002:4404:712c:0:76de:2bff:fed4:2766) Quit (Remote host closed the connection)
[9:08] * c74d3 (~c74d3a4eb@2002:4404:712c:0:76de:2bff:fed4:2766) has joined #ceph
[9:08] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[9:09] * c74d (~c74d@2002:4404:712c:0:76de:2bff:fed4:2766) Quit (Ping timeout: 480 seconds)
[9:09] * lurbs_ (user@uber.geek.nz) has joined #ceph
[9:09] * lurbs (user@uber.geek.nz) Quit (Read error: Connection reset by peer)
[9:12] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has joined #ceph
[9:12] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[9:12] * mattt (~textual@94.236.7.190) has joined #ceph
[9:12] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[9:13] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has left #ceph
[9:13] * analbeard (~shw@support.memset.com) has joined #ceph
[9:13] * mattt (~textual@94.236.7.190) Quit ()
[9:14] * brambles (lechuck@s0.barwen.ch) Quit (Remote host closed the connection)
[9:14] * brambles (lechuck@s0.barwen.ch) has joined #ceph
[9:14] * glambert (~glambert@37.157.50.80) Quit (Ping timeout: 480 seconds)
[9:16] * Guest5303 (wmat@wallace.mixdown.ca) Quit (Remote host closed the connection)
[9:16] * wmat_ (wmat@wallace.mixdown.ca) has joined #ceph
[9:20] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[9:21] * glambert (~glambert@37.157.50.80) has joined #ceph
[9:21] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:26] * blue (~blue@irc.mmh.dk) has joined #ceph
[9:28] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[9:35] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[9:43] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[9:44] * oms101 (~oms101@charybdis-ext.suse.de) has joined #ceph
[9:45] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[9:45] * ChanServ sets mode +v andreask
[9:46] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[9:51] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[9:59] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[10:00] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[10:07] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[10:10] * ksingh (~Adium@2001:708:10:10:2465:f61:d9dd:a380) has joined #ceph
[10:11] * thb (~me@port-2250.pppoe.wtnet.de) has joined #ceph
[10:12] * thb is now known as Guest5313
[10:12] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:14] * devicenull (sid4013@id-4013.ealing.irccloud.com) Quit (Ping timeout: 480 seconds)
[10:15] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[10:16] * Guest5313 is now known as thb
[10:16] * devicenull (sid4013@id-4013.ealing.irccloud.com) has joined #ceph
[10:23] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[10:27] * allsystemsarego (~allsystem@79.115.62.238) has joined #ceph
[10:27] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:29] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:31] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[10:39] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[10:44] * LeaChim (~LeaChim@host86-162-1-71.range86-162.btcentralplus.com) has joined #ceph
[10:46] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[10:47] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[10:50] * leseb (~leseb@185.21.174.206) has joined #ceph
[10:52] * thomnico (~thomnico@2a01:e35:8b41:120:6093:7fbb:c2db:e90f) has joined #ceph
[10:54] * fouxm (~foucault@ks3363630.kimsufi.com) Quit (Ping timeout: 480 seconds)
[10:54] * fouxm (~foucault@ks3363630.kimsufi.com) has joined #ceph
[10:54] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[11:03] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[11:07] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[11:10] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[11:15] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[11:18] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[11:20] * sverrest (~sverrest@cm-84.208.166.184.getinternet.no) has joined #ceph
[11:23] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[11:25] * reed (~reed@HSI-KBW-46-237-220-33.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[11:26] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[11:30] * Kioob (~kioob@2a01:e34:ec0a:c0f0:21e:8cff:fe07:45b6) Quit (Quit: Leaving.)
[11:32] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[11:34] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[11:41] <jerker> aarontc: interesting. I am thinking of the radosgw backend rather than running something on top of RBD.
[11:41] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[11:44] * christia1 (~christian@mintzer.imp.fu-berlin.de) Quit (Ping timeout: 480 seconds)
[11:45] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[11:46] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:3926:9697:b8b7:61d1) has joined #ceph
[11:50] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[11:50] <classicsnail> rhel6.5 firefly rpms on gitbuilder used the wrong init script... using an ubuntu/debian style one instead of a rh style
[11:58] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[12:00] * gNetLabs (~gnetlabs@188.84.22.23) Quit ()
[12:03] <jerker> is the cache pool stuff in firefly stable for production use? It would be so cool to give the IOPS on the VMs a boost without sacrificing storage volume.
[12:05] * ksingh (~Adium@2001:708:10:10:2465:f61:d9dd:a380) Quit (Ping timeout: 480 seconds)
[12:06] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[12:07] <Anticimex> hmm, is there per-rbd io throttling within ceph, or how do you achieve this (at client side, presumably)?
[12:07] <Anticimex> jerker: i want to know also :) neat feature
[12:09] * ksingh (~Adium@2001:708:10:91:f822:f69d:b9e5:cd50) has joined #ceph
[12:11] * ksingh1 (~Adium@a-v6-0013.vpn.csc.fi) has joined #ceph
[12:12] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[12:13] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[12:14] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[12:15] <Anticimex> sebastian illuminates: http://www.sebastien-han.fr/blog/2013/12/23/openstack-ceph-rbd-and-qos/
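
As a sketch of the client-side throttling that post discusses, libvirt can rate-limit an individual RBD-backed disk from the hypervisor; the domain name "vm1", device "vda" and the limits below are made-up examples:

    # cap a single guest disk at 200 IOPS and ~50 MB/s, applied to the running guest
    virsh blkdeviotune vm1 vda --total-iops-sec 200 --total-bytes-sec 52428800 --live
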
[12:16] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[12:17] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Quit: jlogan)
[12:17] * ksingh (~Adium@2001:708:10:91:f822:f69d:b9e5:cd50) Quit (Ping timeout: 480 seconds)
[12:22] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[12:28] * ckranz (~ckranz@193.240.116.146) has joined #ceph
[12:30] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[12:30] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:36] * julian (~julianwa@125.70.132.28) has joined #ceph
[12:38] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[12:39] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[12:40] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[12:40] * ChanServ sets mode +v andreask
[12:46] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[12:46] * dmsimard (~Adium@70.38.0.246) has joined #ceph
[12:48] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[12:50] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[12:51] * reed (~reed@HSI-KBW-46-237-220-33.hsi.kabel-badenwuerttemberg.de) Quit (Ping timeout: 480 seconds)
[12:55] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[12:58] * i_m (~ivan.miro@gbibp9ph1--blueice2n2.emea.ibm.com) has joined #ceph
[12:58] * i_m (~ivan.miro@gbibp9ph1--blueice2n2.emea.ibm.com) Quit ()
[12:58] * i_m (~ivan.miro@gbibp9ph1--blueice2n2.emea.ibm.com) has joined #ceph
[13:02] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[13:07] * thomnico (~thomnico@2a01:e35:8b41:120:6093:7fbb:c2db:e90f) Quit (Quit: Ex-Chat)
[13:10] * BillK (~BillK-OFT@106-69-32-229.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[13:10] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[13:16] * ksingh (~Adium@2001:708:10:91:cdcc:afe3:5b1e:3beb) has joined #ceph
[13:17] * ksingh1 (~Adium@a-v6-0013.vpn.csc.fi) Quit (Ping timeout: 480 seconds)
[13:18] * ksingh1 (~Adium@a-v6-0009.vpn.csc.fi) has joined #ceph
[13:19] * chrisk (~ckranz@193.240.116.146) has joined #ceph
[13:24] * ghartz (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) has joined #ceph
[13:24] * ksingh (~Adium@2001:708:10:91:cdcc:afe3:5b1e:3beb) Quit (Ping timeout: 480 seconds)
[13:25] * ckranz (~ckranz@193.240.116.146) Quit (Ping timeout: 480 seconds)
[13:27] * stewiem20001 (~stewiem20@195.10.250.233) Quit (Read error: Connection reset by peer)
[13:28] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[13:31] * ksingh1 (~Adium@a-v6-0009.vpn.csc.fi) Quit (Ping timeout: 480 seconds)
[13:39] * thomnico (~thomnico@2a01:e35:8b41:120:6093:7fbb:c2db:e90f) has joined #ceph
[13:41] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[13:42] * TheBittern (~thebitter@195.10.250.233) Quit (Read error: Connection reset by peer)
[13:42] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[13:45] * chrisk is now known as ckranz
[13:45] * yuriw (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[13:53] * diegows_ (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[13:58] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:3926:9697:b8b7:61d1) Quit (Ping timeout: 480 seconds)
[14:12] * julian (~julianwa@125.70.132.28) Quit (Quit: afk)
[14:33] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[14:39] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) Quit (Ping timeout: 480 seconds)
[14:45] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[14:54] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[14:57] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:58] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[14:59] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:59] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:01] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) Quit (Quit: ZNC - http://znc.in)
[15:01] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) has joined #ceph
[15:02] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:08] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[15:10] * BillK (~BillK-OFT@106-69-32-229.dyn.iinet.net.au) has joined #ceph
[15:18] * `jpg (~josephgla@ppp121-44-251-173.lns20.syd7.internode.on.net) has joined #ceph
[15:19] * BillK (~BillK-OFT@106-69-32-229.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:22] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) has joined #ceph
[15:23] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[15:26] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[15:41] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:47] * jtangwk (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[15:49] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:54] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:57] * hughsaunders_ (~hughsaund@2001:4800:780e:510:fdaa:9d7a:ff04:4622) has joined #ceph
[15:57] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[16:01] * hughsaunders (~hughsaund@wherenow.org) Quit (Ping timeout: 480 seconds)
[16:14] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[16:15] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[16:18] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[16:20] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[16:23] * rbuzzell (~rbuzzell@2620:8d:8000:e49:5054:ff:fe04:b198) has joined #ceph
[16:24] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[16:30] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[16:32] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:32] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) has joined #ceph
[16:32] * ChanServ sets mode +o scuttlemonkey
[16:32] <glambert> how do I flush out the clock skew?
[16:35] <Fruit> make sure NTP is running properly, restart some mons, increase the mon clock drift allowed
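
A sketch of those three steps with illustrative values (the mon id "a", the drift value and the restart commands are assumptions; the default mon_clock_drift_allowed is 0.05s):

    # 1. confirm ntpd is actually syncing on each monitor host
    ntpq -p
    # 2. restart the offending monitor (Upstart / sysvinit style of this era)
    restart ceph-mon id=a        # or: /etc/init.d/ceph restart mon.a
    # 3. loosen the allowed drift at runtime, and/or put
    #    "mon clock drift allowed = 0.5" under [mon] in ceph.conf
    ceph tell mon.a injectargs '--mon-clock-drift-allowed 0.5'
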
[16:39] * fdmanana (~fdmanana@bl9-168-27.dsl.telepac.pt) has joined #ceph
[16:41] * KaZeR (~KaZeR@64.201.252.132) has joined #ceph
[16:42] * diegows_ (~diegows@190.216.51.2) has joined #ceph
[16:43] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[16:49] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) has joined #ceph
[16:49] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) Quit ()
[16:58] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[16:59] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[17:00] * bitblt (~don@128-107-239-234.cisco.com) has joined #ceph
[17:02] * yuriw (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) has joined #ceph
[17:05] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[17:06] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:07] * allsystemsarego (~allsystem@79.115.62.238) Quit (Quit: Leaving)
[17:09] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) Quit (Remote host closed the connection)
[17:09] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) has joined #ceph
[17:09] * sprachgenerator (~sprachgen@130.202.135.191) has joined #ceph
[17:11] * thomnico (~thomnico@2a01:e35:8b41:120:6093:7fbb:c2db:e90f) Quit (Ping timeout: 480 seconds)
[17:17] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:21] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[17:21] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[17:23] * erwan_taf (~erwan@83.167.43.235) Quit (Remote host closed the connection)
[17:23] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[17:24] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit (Quit: Leaving.)
[17:30] * JeffK (~Jeff@38.99.52.10) has joined #ceph
[17:30] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[17:31] * leseb (~leseb@185.21.174.206) has joined #ceph
[17:31] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[17:45] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[17:48] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[17:49] * rmoe (~quassel@12.164.168.117) has joined #ceph
[17:55] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[17:59] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[18:01] * JoeGruher (~JoeGruher@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[18:02] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[18:03] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:04] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[18:07] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[18:10] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:10] * shang (~ShangWu@175.41.48.77) Quit (Read error: Operation timed out)
[18:13] * fghaas (~florian@m960536d0.tmodns.net) has joined #ceph
[18:19] * bladejogger (~bladejogg@0001c1f3.user.oftc.net) has joined #ceph
[18:25] * ircolle (~Adium@2601:1:8380:2d9:e11d:1e7a:52e0:7f75) has joined #ceph
[18:25] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[18:27] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[18:27] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[18:34] * ircoftcnet (~larryliu@c-76-103-249-91.hsd1.ca.comcast.net) has joined #ceph
[18:34] * i_m (~ivan.miro@gbibp9ph1--blueice2n2.emea.ibm.com) Quit (Quit: Leaving.)
[18:36] * ircolle1 (~Adium@2601:1:8380:2d9:e11d:1e7a:52e0:7f75) has joined #ceph
[18:36] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:39] * ircolle (~Adium@2601:1:8380:2d9:e11d:1e7a:52e0:7f75) Quit (Ping timeout: 480 seconds)
[18:42] * ircoftcnet (~larryliu@c-76-103-249-91.hsd1.ca.comcast.net) has left #ceph
[18:43] * larryliu (~larryliu@c-76-103-249-91.hsd1.ca.comcast.net) has joined #ceph
[18:44] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[18:45] * fghaas (~florian@m960536d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[18:46] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has joined #ceph
[18:46] <loicd> aarontc: hi ! Will you be with us for the Ceph User Committee meeting ?
[18:47] <loicd> (in ~1h from now)
[18:57] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:58] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:00] <aarontc> loicd: I'll definitely try
[19:01] <loicd> cool :-)
[19:01] <aarontc> jerker: Interesting, I wasn't aware owncloud supported an S3-style backend
[19:03] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[19:03] <aarontc> loicd: we are meeting right here, correct? :)
[19:03] <loicd> correct
[19:04] <aarontc> awesome. I'll be here, just maybe somewhat distracted depending on what happens at the office :)
[19:04] <loicd> irc meetings are good for that ;-)
[19:06] <aarontc> True, true
[19:07] * ircolle (~Adium@2601:1:8380:2d9:e11d:1e7a:52e0:7f75) has joined #ceph
[19:08] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[19:10] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[19:11] * ircolle1 (~Adium@2601:1:8380:2d9:e11d:1e7a:52e0:7f75) Quit (Ping timeout: 480 seconds)
[19:13] * Cube (~Cube@12.248.40.138) has joined #ceph
[19:13] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[19:15] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[19:16] * ircolle (~Adium@2601:1:8380:2d9:e11d:1e7a:52e0:7f75) Quit (Ping timeout: 480 seconds)
[19:18] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[19:20] <JoeGruher> i just deleted a bunch of pools but Ceph is not freeing the capacity... any ideas why and how to fix it? http://pastebin.com/aLhq1xHq
[19:23] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:24] <bladejogger> can civilians watch this committee meeting?
[19:24] <aarontc> bladejogger: yep
[19:24] <bladejogger> will civilians understand anything being spoken?
[19:24] <loicd> bladejogger: ceph user committtee is *all* about civilians !
[19:25] <aarontc> I was just going to say if I can understand it, I'm sure you can!
[19:25] <bladejogger> can't hurt to stick around :)
[19:26] <bladejogger> can I just ask, if ceph grows and assuming the market share for storage remains static, whose lunch does ceph eat??
[19:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[19:26] <aarontc> complete speculation, but I'd guess mostly swift and the big vendors selling petabyte-in-a-rack-enclosure systems
[19:27] <bladejogger> thanks
[19:28] <aarontc> bladejogger: but, I also think ceph solves a slightly different problem
[19:28] <aarontc> so a direct comparison isn't that fair
[19:28] <JoeGruher> plus the market is growing, it isn't entirely zero sum
[19:29] <bladejogger> right, I was just asking if it were zero sum. put differently, who are ceph's direct competitors?
[19:29] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[19:29] <bladejogger> I just want to know what products they would displace if I, as a cto, decided to convert
[19:30] <aarontc> I'm curious, too. I don't know who else makes a system I can deploy myself and add whitebox disks/nodes to on demand
[19:30] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[19:30] <aarontc> (except swift)
[19:30] <nhm> bladejogger: Ceph can provide object, block, and distributed filesystem storage, so we compete in all of those areas.
[19:31] <nhm> bladejogger: right now object and block are what Inktank supports for production deployments.
[19:31] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[19:31] <bladejogger> and considering it's open source, is the strategy for inktank to be a sort of future red hat after adoptions increase?
[19:33] <bladejogger> lol too many questions. I just lurk. :P
[19:33] <aarontc> bladejogger: yes, I believe that is inktank's plan. They also (currently) offer commercial add-ons for enterprise customers as well
[19:34] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[19:34] <nhm> bladejogger: I guess I'd just think of Inktank as where many of the Ceph developers happen to be working and kind of the premier organization for providing production Ceph support, but Ceph itself is a community project.
[19:34] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:35] <bladejogger> thanks
[19:35] <nhm> It's why we have the CDS and try to keep a lot of the meetings open
[19:38] * Hakann (~kuresyten@88.234.56.103) has joined #ceph
[19:38] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[19:38] * Hakann (~kuresyten@88.234.56.103) Quit (autokilled: Do not spam other people. Mail support@oftc.net if you feel this is in error. (2014-04-03 17:38:35))
[19:39] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[19:42] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[19:42] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[19:46] * b0e (~aledermue@rgnb-5d8798cf.pool.mediaWays.net) has joined #ceph
[19:48] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:51] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[19:54] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[19:54] * Svedrin (svedrin@ketos.funzt-halt.net) Quit (Ping timeout: 480 seconds)
[19:54] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[19:54] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[19:56] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[19:58] <loicd> The Ceph User Committee monthly meeting (first edition) is about to begin, in 2 minutes :-) The agenda is:
[19:58] <loicd> * Meetups https://wiki.ceph.com/Community/Meetups
[19:58] <loicd> * Goodies https://ceph.myshopify.com/collections/all
[19:58] <loicd> * Documentation of the new Firefly feature (tiering, erasure code) http://ceph.com/docs/master/dev/
[19:58] <loicd> * Careers http://ceph.com/community/careers/
[20:00] <loicd> and we have kraken with us for entertainment ;-)
[20:00] <loicd> !norris CephUserCommittee
[20:00] <kraken> CephUserCommittee is the only person on the planet that can kick you in the back of the face.
[20:00] <loicd> here we go
[20:00] <loicd> This is the first meeting of the Ceph User Committee http://ceph.com/community/the-ceph-user-committee-is-born/ . All are welcome and the proposed agenda announced on the ceph mailing list ( http://www.spinics.net/lists/ceph-users/msg08743.html ) is flexible.
[20:01] <loicd> does someone want to add to the agenda ?
[20:01] <loicd> (I'll timeout questions after 1minute if there is no answer)
[20:02] <loicd> First topic : * Documentation of the new Firefly feature (tiering, erasure code) http://ceph.com/docs/master/dev/
[20:02] <loicd> The Ceph User Committee could be a source of inspiration for developers
[20:02] <loicd> For erasure code here is what happens :
[20:02] * beardo_ (~sma310@beardo.cc.lehigh.edu) Quit (Remote host closed the connection)
[20:03] <loicd> (I'm writing based on my experience as a user and developer)
[20:03] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[20:03] <loicd> I'm not sure where people land when asking themselves : "let's try erasure code"
[20:03] <loicd> Google second link for https://www.google.com/search?q=erasure+code+ceph
[20:03] <loicd> is https://ceph.com/docs/master/dev/erasure-coded-pool/
[20:03] <nhm> loicd: One of the questions that has come up in the past for both Erasure coding and Tiering is ease of use.
[20:03] <janos> i don't imagine it's too beneficial for smaller users
[20:04] <loicd> nhm: I think it's easy to use but ... I'm biased
[20:04] <loicd> janos: how do you mean ?
[20:04] <nhm> loicd: developers are always biased. :) None of us are good test subjects. We need fresh users that have never tried it.
[20:04] <janos> it seems in general that it's an increase in CPU usage, without much benefit unless you are really needing space saving
[20:05] <loicd> nhm: right !
[20:05] <Vacum_> We will definitely try it as soon as FireFly is available
[20:05] <janos> i'm not saying it's a bad feature or anything. just asking about use-case
[20:05] <loicd> janos: it's an interesting perspective
[20:05] <janos> it's a cool parity idea, no doubt
[20:06] <loicd> janos: if you have 3 machines (with 1 osd on each), then erasure code saves you space the same way RAID5 would (only with different machines instead of a single machine with 3 disks).
[20:06] <Vacum_> janos: as soon as you have a lot of seldom or almost never read data, it's worth the cpu/io <-> space tradeoff
[20:06] <janos> Vacum_, good point
[20:06] <nhm> janos: space saving is definitely the big plus. Arguably it may at some point be faster for the same availability for large object writes, but will almost always be slower for reads and small object writes.
[20:07] <loicd> If CPU / performances is an issue, erasure code is indeed not a good choice
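
To make the RAID5 analogy above concrete, a minimal sketch of an erasure-coded pool with k=2 data chunks and m=1 coding chunk spread across hosts, assuming Firefly-era syntax (the profile name, pool name and PG counts are illustrative):

    ceph osd erasure-code-profile set raid5like k=2 m=1 ruleset-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure raid5like
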
[20:07] <janos> is the load entirely on the OSD hosts? or do Mon's get involved much?
[20:07] <Vacum_> janos: together with the Tiered Storage and its rules, you automatically benefit from EC for "cold" objects
[20:07] <janos> beyond finding things
[20:08] <loicd> janos: the load is on the OSDs
[20:08] <janos> cool
[20:08] <loicd> Vacum_: right
[20:08] <janos> Vacum_, yeah i was thinking that after your first comment
[20:08] <janos> this little conversation definitely increased my interest level
[20:08] <loicd> as a user, I don't see why I would not use erasure code as a second tier, because it reduces space usage without impacting performance
[20:09] <Vacum_> regarding EC: are there any plans for "glued objects". like adding a bunch of small objects together into one large blob, then EC that blob?
[20:09] * loicd notes for the record : create a ticket to clarify this use case in https://ceph.com/docs/master/dev/erasure-coded-pool/
[20:09] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[20:10] <loicd> Vacum_: not that I know. Erasure code is not fit for small objects as it stands. But the solution to deal with it is yet to be defined, I think.
[20:10] <kraken> http://i.imgur.com/6E6n1.gif
[20:10] <Vacum_> loicd: regarding the documentation and the "10 DCs" example. It does not show the tradeoff of this solution: to read one object, you have to read from 6 DCs!
[20:10] <loicd> Vacum_: right
[20:10] * loicd notes for the record: "10 DCs" example. It does not show the tradeoff of this solution: to read one object, you have to read from 6 DCs! https://ceph.com/docs/master/dev/erasure-coded-pool/
[20:11] <Vacum_> loicd: honestly, a bit lazy right now :) are there examples for Tiered Storage based on "coldness" too?
[20:11] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[20:11] <loicd> :-)
[20:12] <Vacum_> a combined example might be great. EC with TS
[20:12] <loicd> I think https://ceph.com/docs/master/dev/cache-pool/ is what you're looking for
[20:12] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[20:12] <loicd> Vacum_: ^
[20:12] <loicd> it's not about EC but if you think of the second tier as EC, it's the same really
[20:12] <Vacum_> loicd: cache pools actually duplicate the data, right?
[20:12] <loicd> yes
[20:12] <Vacum_> oh
[20:13] <Vacum_> When I read about Tiered Storage on Ceph and possible rules, I imagined it would move data from one pool to the other
[20:13] <loicd> it does
[20:13] <loicd> data is duplicated and moved when the cache pool is full
[20:13] <loicd> or when it is dirty
[20:13] <loicd> Users got confused by the syntax change between 0.78 (which is the current version) and what is in master (which is what they get when compiling from sources)
[20:14] <Vacum_> mh
[20:14] <loicd> Vacum_: your line of question suggests the documentation should clarify this ;-)
[20:14] <Vacum_> :)
[20:15] * loicd notes for the record : clarify the relationships between tiering and erasure code because at the moment it looks like tiering is exclusively for caching
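
A combined sketch of the kind Vacum asked for, an erasure-coded base pool with a replicated cache tier in front, using Firefly-era commands (pool names, PG counts and the hit_set choice are illustrative, and as noted in the discussion the exact syntax was still changing around 0.78):

    ceph osd pool create coldstore 128 128 erasure
    ceph osd pool create hotcache 128 128
    ceph osd tier add coldstore hotcache
    ceph osd tier cache-mode hotcache writeback
    ceph osd tier set-overlay coldstore hotcache
    ceph osd pool set hotcache hit_set_type bloom
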
[20:15] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Quit: Ex-Chat)
[20:15] <aarontc> (loicd: worth updating topic during CUCMM?)
[20:15] * emkt (~guri@15.165.202.84.customer.cdi.no) has joined #ceph
[20:15] <loicd> aarontc: yes, it would be worth having a bot to archive also ;-)
[20:15] <loicd> The unit tests show working examples https://github.com/ceph/ceph/blob/master/src/test/erasure-code/test-erasure-code.sh but users don't find them most of the time
[20:16] * Vacum_ is now known as Vacum
[20:16] <loicd> Unless there is more about tiering / erasure code, I propose we move to the next topic
[20:16] <aarontc> +1
[20:16] <Vacum> +1
[20:17] <loicd> fake /topic Tracker http://tracker.ceph.com/
[20:17] <loicd> People seem to have problems registering (have to try again)
[20:17] <loicd> How harmful is it ? Some people don't report problems because of that.
[20:18] * diegows_ (~diegows@190.216.51.2) Quit (Ping timeout: 480 seconds)
[20:18] <aarontc> Would it be possible to allow anonymous bug reports? Would that be desirable if possible?
[20:18] <Vacum> loicd: yep, when I call http://tracker.ceph.com/account/register I get an Internal error
[20:18] <loicd> +1
[20:18] <Vacum> loicd: I created a ticket for this already on the tracker, after you created the account for me :)
[20:18] <janos> has there been any significant move toward production-supported CephFS?
[20:18] <loicd> :-)
[20:19] * loicd adds the topic to the agenda
[20:19] <loicd> I don't think there is more we can do regarding the tracker, except raise it for people to work on. I'm not sure who's tasked with this though.
[20:20] <Vacum> hasn't been touched yet, my ticket
[20:20] <loicd> houkouonchi-work: do you know who's in charge of bug fixing the tracker ?
[20:20] <aarontc> loicd: patrick is my guess
[20:20] <loicd> Vacum: could you paste the ticket URL ?
[20:20] <Vacum> http://tracker.ceph.com/issues/7609
[20:20] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Quit: Ex-Chat)
[20:20] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[20:20] * ChanServ sets mode +v andreask
[20:20] <loicd> scuttlemonkey: are you cursed with this ? ;-)
[20:21] * loicd notes for the record : +3 on http://tracker.ceph.com/issues/7609 and figure out who needs help with this
[20:21] <emkt> yeah also would like to know the status of a production ready cephfs :-)
[20:21] <loicd> ok, let's move to this topic then :-)
[20:21] * aarontc would also like to know about production CephFS
[20:21] <loicd> fake /topic has there been any significant move toward production-supported CephFS?
[20:21] * Fruit aol
[20:21] <scuttlemonkey> wha?
[20:22] <janos> run away!
[20:22] <loicd> scuttlemonkey: :-)
[20:22] <scuttlemonkey> haha
[20:22] <loicd> my understanding, from the last CDS, is that CephFS is not production ready and no promise has been made
[20:22] <scuttlemonkey> no specific promise beyond "this year"
[20:22] <loicd> when I look at the work done, I'm truly impressed
[20:23] <janos> is there a solid list of show-stoppers to make it prod-ready?
[20:23] <loicd> but I have *no clue* how much is left to be done
[20:23] <scuttlemonkey> I think the rough estimate that was given (napkin sketch) was sometime in Q3
[20:23] <scuttlemonkey> but that's predicated on a lot of other things happening
[20:24] <scuttlemonkey> from a stability standpoint we're actually looking pretty good (at last observation)
[20:24] <Fruit> an fsck tool has yet to be developed I think?
[20:24] <scuttlemonkey> ^
[20:24] <loicd> out of curiosity, what use case do you have that needs CephFS ?
[20:24] <scuttlemonkey> loicd: there are surprising number of them
[20:24] <scuttlemonkey> several folks want to get busy using the hdfs->cephfs shim
[20:24] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[20:24] <loicd> are they listed somewhere ?
[20:24] <scuttlemonkey> hmmm
[20:24] <scuttlemonkey> don't think anyone has aggregated all the use cases
[20:24] <gregsfortytwo> fyi a real fsck is unlikely to make it before production, but there are a lot of manual repair tools we think are blocking it
[20:24] <aarontc> I want it to store filese!
[20:25] <gregsfortytwo> in addition to way more testing
[20:25] <aarontc> err, files!
[20:25] * loicd notes for the record : open a wiki page for people to list their CephFS use case
[20:25] <Vacum> aarontc: only store? :)
[20:25] <aarontc> Vacum: touche. Also I'd like to retrieve them, and list them, and modify them.
[20:25] <scuttlemonkey> aarontc: awww, and here I thought you'd just add an 's' ....fileses precious!
[20:25] <emkt> i am looking to use cephfs for hadoop and files for user as well as archiving
[20:25] <janos> haha
[20:25] <Fruit> web content for existing non-ceph-aware applications. that's thousands just there
[20:25] <loicd> ahah
[20:26] <aarontc> scuttlemonkey: I did consider it :)
[20:26] <loicd> Fruit: so the benefit would be to have a self-repairable posix compliant distributed FS in this case, right ? So you don't have to worry about backups ? Or so you don't have to worry about scale out issues ?
[20:26] <aarontc> CephFS is (for my uses) the best API to utilize Ceph - all my existing applications support filesystems
[20:26] <janos> for me it's largely scale out issues
[20:27] <aarontc> for me it's capacity... RAID arrays can only get so large
[20:27] <Fruit> loicd: the benefit would be a filesystem without a SPOF
[20:27] <Vacum> then rbd could already be sufficient?
[20:27] <scuttlemonkey> as far as use cases go I think the most popular are: hadoop, reexporting as cifs/nfs, backing existing tools that use FS, distributing images to be local for hypervisor nodes in openstack, SAN/NAS stuff....probably a few others I'm forgetting
[20:27] <janos> actually both you mentioned. backups + scale out. availability
[20:28] <aarontc> (and precious fileses)
[20:28] * loicd notes for the record seed the CephFS use case list with the content of the conversation
[20:28] <scuttlemonkey> always those :)
[20:28] <loicd> scuttlemonkey: hadoop, you means HDFS compatible access ?
[20:28] <scuttlemonkey> right
[20:28] <scuttlemonkey> hdfs replacement
[20:29] <loicd> Using custom rados classes to offload processing to the OSD is not very popular I assume.
[20:29] <aarontc> loicd: I would be interested in that, actually
[20:30] <loicd> http://noahdesu.github.io/2013/02/21/writing-cls-lua-handlers.html etc.
[20:30] <scuttlemonkey> loicd: yeah, I think it's largely too early for the RADOS classes
[20:30] <scuttlemonkey> people haven't quite gotten to the point where they can realize the hotness :)
[20:30] <aarontc> (if it makes sense - reprocessing images in batches, or assembling frames into videos)
[20:30] <loicd> right
[20:30] <Vacum> eh, that all only makes sense if one object is stored in one piece on one osd, right?
[20:30] <Vacum> as soon as it gets chunked: no dice
[20:31] <Vacum> ie image/video processing
[20:31] <aarontc> (Are files in CephFS stored as one file per object?)
[20:31] <nhm> scuttlemonkey: don't forgot all the HPC folks that want CephFS. ;)
[20:31] <loicd> Vacum: true. You get control over that though.
[20:31] <Vacum> (are they with radosgw?)
[20:31] <nhm> supercomputer scratch/project/home storage
[20:31] <aarontc> Vacum: they are not with radosgw, IIRC
[20:31] <scuttlemonkey> nhm: true
[20:32] <loicd> nhm: why do they specifically ?
[20:32] <scuttlemonkey> although I don't understand the workloads there...so saying "HPC applications" isn't helpful unless I can explain it
[20:32] <aarontc> scuttlemonkey: I have a buddy who could get more details for us if that's something worth pursuing
[20:32] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[20:32] <nhm> loicd: primarily as a potential alternative if for some reason they don't want to use Lustre.
[20:33] <loicd> ok
[20:33] * loicd notes that we're past half of the meeting, 27 minutes left
[20:33] <Vacum> IMO the currently really interesting pricepoints $/GB all tend to already have an extreme HDD / CPU proportion
[20:33] <aarontc> loicd: is it beer time yet?
[20:33] <scuttlemonkey> aarontc: might be interesting...even more so if he's already playing with Ceph and can give us examples of where it would be cool
[20:33] <nhm> scuttlemonkey: If you want I can go through them with you. I used to help write our storage RFPs at MSI.
[20:33] <Vacum> so adding more CPU load to the OSD machines could be a problem.
[20:33] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[20:34] * loicd confess having a glass of wine already ;-)
[20:34] <aarontc> scuttlemonkey: I'll try to get some details for you
[20:34] <scuttlemonkey> nhm: probably not a bad idea to start getting my head around cephFS stuff
[20:34] <nhm> aarontc: Where does your buddy work btw? I used to be at the Minnesota Supercomputing Institute. Maybe I know him. :)
[20:34] <loicd> nhm: I don't know whwat RFPs mean, could you expand ?
[20:34] <scuttlemonkey> gave a talk last night to about 30 people here in Ann Arbor
[20:35] <scuttlemonkey> the most interest I had was from students working on filesystem stuff
[20:35] <aarontc> nhm: he's a postgrad student at a university in the UK somewhere... details escape me at the moment
[20:35] <nhm> loicd: Request For Proposal, ie a request for vendors to bid on a system with defined requirements
[20:35] <loicd> thanks ;-)
[20:36] <loicd> anything else on the CephFS topic ?
[20:36] <loicd> (1 minute timeout on this question)
[20:36] <janos> not from me
[20:36] <aarontc> just that it's awesome and I can't wait to use it :)
[20:36] <loicd> aarontc: haha
[20:36] <Serbitar> i really want cephfs for hpc and for general university storage
[20:36] <emkt> not sure if it is with cephfs or in general.. asynchronous replication
[20:36] <kraken> ಠ_ಠ
[20:36] <loicd> Serbitar: could you explain your use case ?
[20:37] <Serbitar> really as an alternative to lustre, which, while popular, is a vastly inferior design
[20:37] <loicd> ok :-)
[20:37] <Serbitar> we do a lot of different hpc jobs as it is both a learning and research resource
[20:37] <emkt> Serbitar: agree
[20:37] <nhm> Serbitar: What kind of applications are you guys running?
[20:37] <loicd> what kind of hpc job ? I'm curious.
[20:38] <loicd> :-)
[20:38] <Serbitar> unfortunately i only maintain the system, my colleague is more familiar with the particular jobs
[20:38] <Serbitar> we do quite a lot of castep
[20:38] <loicd> castep ?
[20:38] <Serbitar> http://www.castep.org/
[20:38] * loicd looking & learning
[20:39] <Serbitar> though that is probably not really storage dependent
[20:39] <nhm> Serbitar: ah, not too familiar with that one. No idea what its IO looks like.
[20:39] <loicd> or not learning ... this is too complicated for me ;-)
[20:39] <Serbitar> its probably all ram based
[20:39] <Serbitar> but there was another job that someone started using that made our storage cry
[20:39] <Serbitar> cant remember what it was
[20:39] * loicd notes for the record add http://www.castep.org/ to the use case list
[20:40] <Serbitar> one of the worst is gaussian
[20:40] <nhm> Serbitar: I was just going to say gaussian!
[20:40] <Serbitar> but it wasnt that, this particular task the user was trying to be conservative with his quota
[20:40] <Serbitar> so he had the job write its data down to disk, then he gzipped it
[20:41] <nhm> Serbitar: probably just lots of small random reads/writes.
[20:41] <Vacum> that would be something for the OSD classes: gzip on demand
[20:41] <nhm> Serbitar: especially if he was doing direct IO.
[20:41] <Vacum> ie identifying if it's worth it, if yes, store gzipped
[20:41] <loicd> Vacum: cls_gzip :-)
[20:41] <Serbitar> Vacum: very difficult if you are doing lots of random io
[20:41] <aarontc> loicd: I'd like to see more documentation with regard to decoding the log messages from Ceph daemons, and "better" explanations of all the configuration parameters
[20:42] <nhm> Vacum: I think so far we just rely on the underlying OSD filesystem to handle any compression.
[20:42] <Serbitar> possibly you could use cache tiering to say "this slow data can be compressed now"
[20:42] <Serbitar> btrfs does zlib and lzo compression
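For reference, btrfs compression is enabled per mount; a minimal sketch of what that might look like on an OSD data partition (the device and mount point below are illustrative, not taken from the discussion):

    # enable lzo compression for new writes on an already-mounted OSD filesystem
    mount -o remount,compress=lzo /var/lib/ceph/osd/ceph-0

    # or persistently via /etc/fstab
    /dev/sdb1  /var/lib/ceph/osd/ceph-0  btrfs  noatime,compress=lzo  0 0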
[20:42] * loicd notes for the record more documentation with regard to decoding the log messages from Ceph daemons
[20:42] <Vacum> nhm: fair enough!
[20:42] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[20:42] <Serbitar> but my other use case, would be to replace our netapp
[20:42] * loicd notes "better" explanations of all the configuration parameters (config_opts.h)
[20:42] <Vacum> loicd: ah, regarding logging. the mon node frequently FLOODs its log with always the same message
[20:43] <loicd> Vacum: which one ?
[20:43] <Vacum> loicd: can that be made so it will back-off after like 20 in the same second and then only count, like syslog?
[20:43] <nhm> Vacum: afaik you can disable all logging if you want.
[20:43] <Vacum> nhm: I do not want to disable it. but yesterday we had the same message coming in like 1000 times per second
[20:43] * loicd notes : make it so the logger will back off after like 20 in the same second and then only count, like syslog?
[20:43] <nhm> Vacum: that's a good idea probably
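To make the wishlist item concrete, a minimal sketch of a syslog-style suppressor (plain Python, not Ceph code; the 20-per-second burst threshold just follows Vacum's example):

    import sys
    import time

    class RepeatSuppressor(object):
        """Print at most `burst` identical messages per `window` seconds,
        then emit a single 'last message repeated N times' summary."""
        def __init__(self, out=sys.stderr, burst=20, window=1.0):
            self.out, self.burst, self.window = out, burst, window
            self.last, self.count, self.start = None, 0, 0.0

        def log(self, msg):
            now = time.time()
            if msg == self.last and now - self.start < self.window:
                # same message inside the window: print up to `burst`, then swallow
                self.count += 1
                if self.count <= self.burst:
                    self.out.write(msg + "\n")
                return
            if self.count > self.burst:
                # summarize whatever was swallowed before moving on
                self.out.write("last message repeated %d times\n"
                               % (self.count - self.burst))
            self.last, self.count, self.start = msg, 1, now
            self.out.write(msg + "\n")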
[20:43] <emkt> btrfs random io is not on par yet although it supports compression
[20:43] <Vacum> loicd: I'm at home right now ./
[20:44] <Vacum> loicd: so no access to the logs
[20:44] <loicd> Vacum: I get what you're saying though ;-)
[20:44] <loicd> I guess that means we're out of the CephFS topic
[20:44] <janos> yeah
[20:44] <emkt> asynchronous replication :-)
[20:44] <loicd> fake /topic misc wishlist
[20:45] <jerker> for me now speed is more important than compression. ceph is already cheaper than the alternatives, measured in hardware. So speed and stability. :) SSD cache is very good. Is it stable for production?
[20:45] <loicd> emkt: like what radosgw does ?
[20:45] <Vacum> loicd: "7f432467e700 1 mon.csdeveubs-u01mon01@0(leader).paxos(paxos acti
[20:45] <Vacum> ve c 697530..698142) is_readable now=2014-04-02 17:24:28.278147 lease_expire=0.0
[20:45] <nhm> emkt: indeed, that would be nice. :)
[20:45] <Vacum> 00000 has v0 lc 698142"
[20:45] <Vacum> that one :)
[20:45] <emkt> yeah for cephfs
[20:45] <Vacum> +1 for asynchronous replication on _rados_ level. perhaps based on crushmap rules?
[20:45] <nhm> jerker: firefly will be the first release with the tiering layer, but you can use SSDs locally on the OSD for Ceph journals and/or bcache/flashcache
[20:45] <loicd> jerker: firefly will have tiering and provide SSD cache. emperor (the current version) does not.
[20:46] <emkt> as in hdfs you have async replication whereas ceph has to wait for both replicas to be acked
[20:46] <emkt> both or more based on settings*
[20:46] <jerker> nhm: yes i am for journals, have not tried bcache/flashcache yet. Am using 8 GB flash hybrid drives though.
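For anyone following along, the firefly-style cache tiering mentioned above is set up roughly like this; the pool names and PG count are illustrative, and the cache pool still needs its own CRUSH rule restricting it to SSD-backed OSDs:

    ceph osd pool create cache-pool 128
    ceph osd tier add rbd cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay rbd cache-pool
    ceph osd pool set cache-pool hit_set_type bloom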
[20:46] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[20:46] * loicd notes : asynchronous replication on _rados_ level
[20:46] <Fruit> random wishlist item: bandwidth reservations (probably really hard)
[20:46] <Fruit> bandwidth/iops
[20:46] <jerker> loicd: i am looking forward to it
[20:46] <loicd> Vacum: that's unlikely to happen soon though, it's complicated
[20:46] <Vacum> loicd: I imagine :)
[20:47] <Vacum> wishlist item: new release of rados-java :D
[20:47] <kraken> AbstractHibernateSchedulerExtractionCommand
[20:47] <loicd> Fruit: could you expand on bandwidth reservations (probably really hard) ?
[20:47] <jerker> loicd: can one not do something ugly with normal iptables at the VM host?
[20:48] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Remote host closed the connection)
[20:48] <jerker> loicd: for bandwidth limitation
[20:48] <Serbitar> do we know of anyone using cephfs as a backstore for samba?
[20:48] <aarontc> Serbitar: I was doing that until my MDS exploded
[20:48] <Fruit> loicd: it would be nice to guarantee that pools have a certain amount of iops/throughput available even if other pools are hammering the storage system
[20:48] <jerker> Serbitar: i have tried, then my cephfs crashed. and then i reinstalled cluster.
[20:48] <loicd> jerker: I suspect not. But there are various levels of throttling you can tweak, depending on what you're after.
[20:48] <Serbitar> these two comments make me a sad panda
[20:49] <nhm> we've had some folks interested in trying Samba on top of RBD
[20:49] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Quit: Leaving)
[20:49] * loicd notes : guarantee that pools have a certain amount of iops/throughput available even if other pools are hammering the storage system
[20:49] <nhm> But the samba/cephfs work is definitely more interesting.
[20:49] <jerker> nhm: samba/netatalk on top of RBD is stable, a bit slow though, need more IOPS
[20:49] <aarontc> loicd: +1 on that, I would like to see better insight into what is causing the ceph load, and ways to throttle certain clients/tasks/etc
[20:50] <janos> i'd like solid samba. i export to windows machines as shares
[20:50] <nhm> jerker: was that with RBD cache?
[20:50] <jerker> nhm: RBD cache in KVM
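For reference, the RBD cache being discussed is client-side and is typically switched on in ceph.conf on the hypervisor; a minimal sketch (the usual settings, not copied from jerker's setup, and the cache size is illustrative):

    [client]
        rbd cache = true
        rbd cache writethrough until flush = true   # safety net until the guest issues a flush
        rbd cache size = 67108864                   # 64 MB

On the libvirt side the rbd disk usually also gets cache='writeback' in its <driver> element so qemu lets the librbd cache do its job.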
[20:50] <loicd> janos: solid as in fast ? or do you find samba on top of RBD + fs fragile for some reason ?
[20:51] <emkt> loic: asynchronous replication for cephfs too.. as i am interested in using it with hadoop
[20:51] <janos> it feels clunky to me. it works, but would be nice to be more direct
[20:51] <janos> i do that combo right now
[20:51] <mjevans> So in other news... ceph 0.72 with debian's bleeding edge 3.14-rc7 kernel fails with btrfs corruption even when a 'very small' ceph cluster has only 3 guest VMs running the phoronix-test-suite disk test on it.
[20:51] <nhm> jerker: Ok. I'd be curious what the IO patterns for samba look like.
[20:51] <jerker> nhm: I get 100% IO-utilization when running Time Machine from a couple of Mac clients to netatalk/Ext4/SL6/KVM/RBD
[20:51] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[20:51] <loicd> emkt: I'm not sure if that makes sense to me ? Assuming async replication is available at the rados level, what would it mean for cephfs to have async replication ?
[20:51] <jerker> nhm: but not so much usable bandwidth as I would like
[20:52] <nhm> jerker: was it primarily IOPS or also slow throughput?
[20:52] <loicd> mjevans: is there a ticket for that ?
[20:52] <mjevans> loicd: I have no idea, but I'm already too behind schedule to even look up where to look up such a ticket... so XFS it is.
[20:52] <emkt> how does the hadoop-cephfs plugin communicate with ceph now? is it using cephfs or talking to rados directly? .. as i am playing around with this plugin these days
[20:53] * loicd notes there's only 7 minutes left, will pause in 2 minutes for conclusions
[20:53] <nhm> mjevans: probably for the best right now. RBD with BTRFS backed OSDs will fragment extremely quickly with small writes.
[20:53] <mjevans> loicd: also, supposedly, 3.15 has more btrfs fixes
[20:53] <jerker> nhm: I have not measured very much, but when reading 100 Mbit/s from the clients over netatalk I get about 100% IO-utilization from the virtual machine.. But the ceph cluster can handle a bit more, gigabit with two nodes, four osd. Not a large cluster :-)
[20:53] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[20:53] <loicd> mjevans: ok :-)
[20:53] <nhm> mjevans: and the btrfs defrag tools don't work well with snapshots. Josef said they'll probably have it fixed some time this summer.
[20:53] <Vacum> loicd: wishlist :) : "hierarchical near" backfilling, based on crush location. ie 4 replicas, 2 in each rack. instead of backfilling from the primary: backfill from an OSD in the same rack
[20:53] <loicd> Vacum: +1 !
[20:53] <emkt> Vacum +1
[20:53] * loicd notes : "hierarchical near" backfilling, based on crush location. ie 4 replicas, 2 in each rack. instead backfill from primary: backfill from OSD in the same rack
[20:54] <tracphil> Is anyone using Ceph with Cloudstack and XenServer?
[20:54] <loicd> wido: is
[20:54] <loicd> tracphil: ^
[20:54] <janos> Vacum, i like that idea
[20:54] <nhm> Vacum: that ties in to the read from near replica requests we've gotten periodically.
[20:54] <loicd> a few seconds left for misc / whishlist before we move to the conclusion
[20:55] <Serbitar> similar to Vacum's idea: is it possible to monitor hot blocks and keep copies on more hosts for performance?
[20:55] <Vacum> nhm: near replica reads are available with firefly
[20:55] <nhm> Vacum: ha, shows how up to date I am. ;)
[20:55] <loicd> fake /topic first Ceph User Committee meeting conclusion
[20:56] <loicd> that did not go as I imagined it would *at all*
[20:56] <loicd> it was much better ;-)
[20:56] <janos> haha
[20:56] <janos> i liked it!
[20:56] <emkt> :-)
[20:56] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[20:56] <janos> thank you for managing it
[20:56] <Vacum> loicd: a bit on the "feature" side, but fine :)
[20:56] <loicd> I'll skip the t-shirt / meetup thing which is not interesting in this context
[20:56] <emkt> really useful and thanks loicd for arranging it
[20:57] <Vacum> loicd: any plans on repeating this? frequency?
[20:57] <loicd> let's surf on what triggers discussion. I'll work on writing a summary based on the log and post it tomorrow on the ceph user list.
[20:57] <Serbitar> when is the next one
[20:57] <loicd> we'll do it monthly
[20:57] <Vacum> sounds great!
[20:57] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[20:57] <loicd> so... that would be may 3rd ?
[20:57] * jerker was not really aware i was in the middle of the meeting :)
[20:57] <Vacum> thats a saturday
[20:57] <loicd> saturday
[20:57] <loicd> may 2nd better ?
[20:57] * fghaas (~florian@63.239.94.10) has joined #ceph
[20:57] <Vacum> IMO yes
[20:58] <loicd> jerker: ahaha
[20:58] <loicd> ok, let's say may 2nd then
[20:58] * vilobh (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:58] <loicd> anything we should do differently next time ?
[20:59] <loicd> ok then, I guess we're adjourned. Thanks a thousand time everyone !
[20:59] <Vacum> thank you!
[20:59] <aarontc> loicd: I think having a topic would help people coming and going be aware that there is a meeting going on and they are welcome to participate :)
[20:59] <janos> thank you loicd
[20:59] <emkt> let it be ad hoc, random and chaotic like this.. so that it will be creative like this one :)
[20:59] <Fruit> loicd: thanks!
[20:59] <loicd> cool
[20:59] <jerker> Thanks!
[20:59] <Vacum> loicd: perhaps announce the next one in the topic a week before?
[20:59] <loicd> Vacum: +1
[21:00] * loicd notes announce the next one in the topic a week before?
[21:00] <Vacum> Well, that way I won't forget it myself :)
[21:00] <aarontc> +1
[21:00] <loicd> :-D
[21:00] <loicd> bbl
[21:00] <aarontc> cya loicd
[21:01] <vilobh> I am facing a problem when trying to create an image using qemu-img on Ceph storage
[21:01] <vilobh> "qemu-img create -f raw rbd:rbd/foo 1G" (Qemu->librbd->librados).
[21:01] <vilobh> I am using QEMU version 1.7.1 ; librbd : librbd1-0.67.7-0.el6.x86_64 ; librados : librados2-0.67.7-0.el6.x86_64.
[21:01] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Remote host closed the connection)
[21:01] <vilobh> Whereas when I run rbd create rbd/foo --size 1024 on the MON server I can create a raw device on the OSD.
[21:01] <vilobh> I copied the keyring.admin file from the MON server /etc/ceph/admin.keyring to the node where QEMU is running /etc/ceph/admin.keyring. Also I have the same ceph.conf file present on MON server as well as node where QEMU is running. I also tried creating a new user client say "client.qemu" on MON server and copying the key to the QEMU node, that also didn't work.
[21:01] <vilobh> Am I missing something here ? Any suggestions are highly welcome.
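One thing worth double-checking in a setup like this: qemu-img needs to know which cephx user and conf file to use, and the keyring has to be somewhere librados will find it. A minimal sketch of being explicit about both (user name and paths are illustrative; a keyring stored at a non-default path such as /etc/ceph/admin.keyring usually needs a "keyring =" entry in ceph.conf):

    qemu-img create -f raw "rbd:rbd/foo:id=admin:conf=/etc/ceph/ceph.conf" 1G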
[21:02] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[21:02] * emkt (~guri@15.165.202.84.customer.cdi.no) has left #ceph
[21:03] <mjevans> nhm: I'm starting to believe btrfs will be fixed sometime when tux3 finally becomes 'stable'
[21:05] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[21:05] <janos> is that about the time our sun dies?
[21:06] <vilobh> anyone there ^^ ? Am i missing something ?
[21:06] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[21:07] <nhm> mjevans: It'll be interesting to see if the pace changes at all now that Chris and Josef are at fb.
[21:13] <vilobh> nhm, mjevans : you there ?
[21:14] <saturnine> Am I likely to see many performance gains on RBD by tuning the stripe settings?
[21:14] <saturnine> Getting ~50MB/s reads from 4 SSD based OSDs on two nodes.
[21:16] <houkouonchi-work> loicd: you can assign that to me. It looks like its a bug with redmine and has to do with browser/server character encoding
[21:16] <nhm> vilobh: hello!
[21:17] <nhm> saturnine: sequential reads?
[21:17] <vilobh> nhm : hi ,can you please have a look at my query which i wrote up? Thanks!
[21:18] <nhm> vilobh: hrm, I'm not sure. What is the error message?
[21:19] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[21:19] <saturnine> nhm: yup, just dd if=test of=/dev/null bs=1M count=1000 iflag=direct
[21:19] <vilobh> nhm: qemu-img create -f rbd rbd:data/foo 1G
[21:19] <vilobh> Formatting 'rbd:data/foo9', fmt=rbd size=1073741824 cluster_size=0
[21:19] <vilobh> qemu-img: error connecting
[21:19] <vilobh> qemu-img: rbd:data/foo: Could not create image: Input/output error
[21:20] <saturnine> Getting faster writes than reads.
[21:20] <nhm> saturnine: try increasing readahead on the OSDs and maybe the RBD image itself and see if that helps
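Readahead is a per-block-device sysfs knob; a minimal sketch, with device names and the 4096 KB value purely illustrative:

    # on each OSD node, for each data disk
    echo 4096 > /sys/block/sdb/queue/read_ahead_kb
    # inside the guest, for the virtio/rbd-backed disk
    echo 4096 > /sys/block/vda/queue/read_ahead_kb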
[21:20] <saturnine> 1Gig technically limits it to 125MB/s, but I figure I should be doing closer to 100MB/s
[21:20] <saturnine> 50 is way low
[21:22] <nhm> saturnine: also, that's only a single op at a time since you are using direct, which means that you aren't hitting all of the SSDs concurrently
[21:23] <nhm> saturnine: You'll almost certainly see better aggregate performance if you launch a couple of concurrent dd processes
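Something like the following would exercise several 4 MB rados objects at once instead of one at a time (file name and sizes follow the earlier dd test; the offsets are spread so the reads don't overlap):

    for i in 0 1 2 3; do
        dd if=test of=/dev/null bs=1M count=1000 skip=$((i * 1000)) iflag=direct &
    done
    wait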
[21:24] * valeech (~valeech@50.242.62.166) has joined #ceph
[21:24] <saturnine> nhm: I've heard tuning read_ahead_kb in the VM itself helps, already tried that one
[21:24] <saturnine> Haven't tried tuning the OSDs themselves with that yet.
[21:25] <saturnine> nhm: True, but even to one OSD I should be seeing better than 50MB/s over a one gig link to an SSD.
[21:25] * lofejndif (~lsqavnbok@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[21:25] * KaZeR (~KaZeR@64.201.252.132) Quit (Remote host closed the connection)
[21:25] <nhm> vilobh: yeah, not sure what's wrong. I'd check the logs and maybe ask on the mailing list and see if josh or someone responds
[21:26] * KaZeR (~KaZeR@64.201.252.132) has joined #ceph
[21:26] <vilobh> i can't see the request even reaching osd's
[21:26] * fghaas (~florian@63.239.94.10) Quit (Quit: Leaving.)
[21:26] <saturnine> With VM block devices I would hope for reads > 100MB/s to compete with most SANs :D
[21:26] <vilobh> nhm : which logs should i look for any suggestions ?
[21:27] * The_Bishop_ (~bishop@2001:470:50b6:0:e9ff:1085:40ee:5837) Quit (Ping timeout: 480 seconds)
[21:27] <nhm> vilobh: there might be something in the client logs on the node you are trying to execute the command, not sure.
[21:27] <nhm> vilobh: in /var/log/ceph
[21:27] <vilobh> ok
[21:28] <nhm> vilobh: dumb question: do you have the ceph libs and ceph.conf in /etc/ceph on that node?
[21:28] <nhm> (ceph libs in general, not in /etc/ceph)
[21:28] <vilobh> i have ceph.conf at /etc/ceph
[21:28] <vilobh> ceph libs i don't have under /etc/ceph
[21:29] <nhm> yeah, sorry, not in /etc/ceph, just installed
[21:29] <vilobh> which exact ceph libs are you talking about can you name them ?
[21:29] <vilobh> i have librbd
[21:29] <vilobh> librados
[21:30] <nhm> vilobh: ceph health and such works from that node?
[21:30] <vilobh> ceph health is running on my mon server
[21:30] <nhm> vilobh: I'm wondering if the node you are trying to do the qemu-img on can communicate with things properly
[21:30] <vilobh> i am running qemu on a separate node where i have /usr/lib64/librados.so -> librados.so.2.0.0 and /usr/lib64/librbd.so -> librbd.so.1.0.0 present
[21:31] <nhm> yes, that's the node I was wondering if the health check worked on
[21:32] <vilobh> i did a traceroute; the node where qemu is running is able to communicate with the MON server where the health check worked
[21:32] <vilobh> i have also setup the keyring files
[21:32] <nhm> can you run the ceph or rados command on the qemu-img machine?
[21:33] <nhm> like rados lspools or ceph health or something, just to see if it's talking properly.
[21:33] <vilobh> nope
[21:33] <kraken> http://i.imgur.com/ErtgS.gif
[21:33] <vilobh> i can't
[21:34] <nhm> vilobh: not installed or doesn't work?
[21:34] <vilobh> the command ceph health doesn't work, i mean it hangs
[21:34] <vilobh> ceph health
[21:34] <vilobh> ....
[21:34] <vilobh> its still running
[21:34] <nhm> vilobh: ah ok. got anything blocking any ports?
[21:35] <vilobh> Error connecting to cluster: Error
[21:35] <nhm> iptables?
[21:35] <nhm> you could try telneting to the mon server on the mon port you are using
[21:35] <nhm> from that host
[21:36] <vilobh> how do i know the mon port number? i can't see it in /etc/ceph/ceph.conf
[21:37] <lurbs_> Very probably it's 6789.
[21:37] * lurbs_ is now known as lurbs
[21:37] <vilobh> ok cool
[21:37] <lurbs> 'sudo netstat -nltp | grep ceph-mon[n]' on the monitor host should tell you.
[21:38] <lurbs> Bleh.
[21:38] <lurbs> 'sudo netstat -nltp | grep ceph-mo[n]' on the monitor host should tell you.
[21:38] * lurbs <- pre-coffee.
[21:38] <vilobh> telnet 68.142.237.36 6789
[21:38] <vilobh> Trying 68.142.237.36...
[21:38] <vilobh> its just trying
[21:38] <vilobh> i can ping the MON server from the qemu node though
[21:39] <nhm> vilobh: sounds like either the port is being blocked or the mon isn't listening on the right port/interface
[21:39] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[21:40] <vilobh> sudo netstat -nltp | grep ceph-mo[n]
[21:40] <vilobh> tcp 0 0 68.142.237.36:6789 0.0.0.0:* LISTEN 14897/ceph-mon
[21:40] <vilobh> looks like CEPH-MON is listening on 6789
[21:41] <nhm> vilobh: can you telnet to it from your OSD nodes?
[21:41] <nhm> on that port?
[21:41] <vilobh> telnet MON from OSD nodes will try
[21:43] <vilobh> i think i can, because when i run commands on the MON server like "rbd create rbd/foo --size 1024" I can create a raw device on the OSD; it works
[21:43] <nhm> vilobh: ok, ceph health would be a quick check too.
[21:44] <vilobh> ceph health on MON server return OK
[21:44] <vilobh> ceph health
[21:44] <vilobh> HEALTH_OK
[21:44] <nhm> vilobh: in any event, it seems like there's something blocking access on the qemu-img machine.
[21:44] <nhm> Maybe try disabling iptables, double checking network configuration, settings in ceph.conf, etc.
[21:45] * valeech (~valeech@50.242.62.166) Quit (Quit: valeech)
[21:45] <vilobh> blocking access from qemu-img node to MON/OSD's right
[21:45] <vilobh> sure
[21:49] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:55] * mcms (~mcms@46.224.209.194) has joined #ceph
[21:56] * larryliu (~larryliu@c-76-103-249-91.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[22:00] <saturnine> nhm: Yeah, read_ahead_kb tweaks on OSDs had no effect.
[22:00] <saturnine> Looks like it's actually hard capping at 39MB/s for whatever reason
[22:04] <saturnine> Running Ceph bench I'm getting 40MB/s with one read thread.
[22:05] <saturnine> Getting 80MB/s with 4 threads (4 OSDs across 2 nodes), increasing it more doesn't help.
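For anyone reproducing this, a typical rados bench invocation for that kind of comparison (pool name illustrative); the write pass keeps its objects via --no-cleanup so the sequential-read pass has something to read, and -t controls the number of concurrent ops:

    rados bench -p rbd 60 write -t 16 --no-cleanup
    rados bench -p rbd 60 seq -t 16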
[22:05] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit (Read error: Permission denied)
[22:05] * themgt (~themgt@24-181-212-170.dhcp.hckr.nc.charter.com) has joined #ceph
[22:05] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[22:06] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[22:06] * ChanServ sets mode +v andreask
[22:07] <saturnine> Maybe I just need more OSDs. Seems like I should be getting more than 40MB/s with a single thread though. Hmmm.
[22:11] <Fruit> default rbd block size is 4M
[22:11] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[22:11] <Fruit> so you won't be getting a lot of parallelism if you only have one thread
[22:14] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[22:15] <nhm> saturnine: I'm more concerned with only 80MB/s from 4
[22:17] <nhm> saturnine: see any difference with 4MB IOs instead of 1MB IOs?
[22:17] <nhm> probably won't, just curious
[22:17] * joerocklin (~joe@cpe-75-186-9-154.cinci.res.rr.com) has joined #ceph
[22:18] <saturnine> nhm: Let me check.
[22:18] * joerocklin (~joe@cpe-75-186-9-154.cinci.res.rr.com) Quit ()
[22:18] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[22:19] * b0e (~aledermue@rgnb-5d8798cf.pool.mediaWays.net) Quit (Quit: Leaving.)
[22:19] * joerocklin (~joe@cpe-75-186-9-154.cinci.res.rr.com) has joined #ceph
[22:20] <saturnine> nhm: Oh, on rados bench I was using the default 4M size
[22:20] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[22:20] * leseb (~leseb@185.21.174.206) has joined #ceph
[22:22] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[22:23] * theactualwarrenusui (~Warren@2607:f298:a:607:41b4:ea21:c110:1f70) has joined #ceph
[22:25] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has left #ceph
[22:27] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[22:27] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit (Read error: Permission denied)
[22:27] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[22:28] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[22:28] * mcms (~mcms@46.224.209.194) Quit (Ping timeout: 480 seconds)
[22:29] <saturnine> This is fun. I'm actually getting better performance from my HDDs than from the SSD-only pool
[22:30] <joef> how many HDDs
[22:30] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[22:30] <saturnine> 3 nodes, 8HDDs per node, SSD journals
[22:30] * tracphil (~tracphil@130.14.71.217) Quit (Quit: leaving)
[22:30] <saturnine> SSD pool is 2 nodes, 2 SSDs per node, journal on same disk
[22:30] * theonceandfuturewarrenusui (~Warren@2607:f298:a:607:e823:c14:13b4:5152) Quit (Ping timeout: 480 seconds)
[22:31] <nhm> saturnine: what kind of SSDs btw?
[22:31] * vilobh (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobh)
[22:32] <saturnine> Intel 530s, 180GB
[22:33] <saturnine> In theory I should be able to saturate the link with just 1 I/O thread.
[22:33] * The_Bishop_ (~bishop@f050147209.adsl.alicedsl.de) has joined #ceph
[22:34] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[22:41] * The_Bishop_ (~bishop@f050147209.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[22:43] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[22:43] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[22:47] * The_Bishop_ (~bishop@f050147209.adsl.alicedsl.de) has joined #ceph
[23:00] * odi (~quassel@2a00:12c0:1015:136::9) has joined #ceph
[23:04] * godog (~filo@0001309c.user.oftc.net) has left #ceph
[23:08] * JoeGruher (~JoeGruher@jfdmzpr04-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[23:12] * The_Bishop_ (~bishop@f050147209.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[23:20] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:25] * giorgis (~oftc-webi@46-93-243.adsl.cyta.gr) has joined #ceph
[23:25] <giorgis> hello people!!! Can someone help me to list the contents of my buckets with the python script??
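Assuming the buckets are behind the radosgw S3 API, a minimal boto sketch for listing buckets and their contents; the endpoint, port and credentials below are placeholders for the gateway's actual values:

    import boto
    import boto.s3.connection

    # connect to the rados gateway rather than Amazon S3
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='radosgw.example.com',
        port=80,
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    for bucket in conn.get_all_buckets():
        print bucket.name
        for key in bucket.list():
            print "  %s\t%d\t%s" % (key.name, key.size, key.last_modified)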
[23:26] * The_Bishop (~bishop@f050147209.adsl.alicedsl.de) has joined #ceph
[23:27] * odi (~quassel@2a00:12c0:1015:136::9) Quit (Remote host closed the connection)
[23:28] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit (Ping timeout: 480 seconds)
[23:33] * c74d_ (~c74d@2002:4404:712c:0:76de:2bff:fed4:2766) Quit (Remote host closed the connection)
[23:34] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[23:34] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[23:35] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[23:43] * nhm_ (~nhm@184-97-144-70.mpls.qwest.net) has joined #ceph
[23:43] * mcms (~mcms@46.224.152.64) has joined #ceph
[23:43] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[23:45] * nhm (~nhm@174-20-103-90.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[23:45] * houkouonchi-work (~linux@12.248.40.138) Quit (Read error: Operation timed out)
[23:45] * Cube2 (~Cube@12.248.40.138) has joined #ceph
[23:47] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[23:47] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[23:50] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[23:51] * Cube1 (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[23:55] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[23:59] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.