#ceph IRC Log

IRC Log for 2014-03-06

Timestamps are in GMT/BST.

[0:00] * piezo (~kvirc@107-197-220-222.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[0:00] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[0:01] * xarses (~andreww@nat-dip5.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:02] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[0:02] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[0:03] * BillK (~BillK-OFT@58-7-73-145.dyn.iinet.net.au) has joined #ceph
[0:06] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:09] * al (d@niel.cx) Quit (Ping timeout: 480 seconds)
[0:10] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[0:12] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:12] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[0:13] * kaizh (~kaizh@128-107-239-234.cisco.com) Quit (Remote host closed the connection)
[0:16] * rturk-away is now known as rturk
[0:18] * kaizh (~kaizh@128-107-239-235.cisco.com) has joined #ceph
[0:20] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) Quit (Quit: Leaving)
[0:22] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[0:22] * vata (~vata@2607:fad8:4:6:a1d0:1440:7619:29f3) has joined #ceph
[0:27] * c74d (~c74d@2002:4404:712c:0:60e9:dd8f:9eee:74ec) has joined #ceph
[0:36] <scuttlemonkey> just under 30m until day2 of CDS starts in #ceph-summit
[0:43] * carif (~mcarifio@pool-173-76-155-34.bstnma.fios.verizon.net) has joined #ceph
[0:47] * Machske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[0:48] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[0:49] <Machske> hi guys, I've got an issue with the radosgw component. The first user I create can create buckets and store data in them, and obviously can authenticate with the generated access and secret key. But for any additional user, I'm always getting an access denied, just on making the connection to rados and, for example, trying to list buckets
[0:49] <Machske> Cannot find any clues; radosgw-admin user info gives the same data back for both users, apart from the keys of course
[0:50] <Machske> so I'm a little puzzled as to why the first user works and the second user cannot even log in, though created in the exact same manner
[0:51] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[0:56] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[0:59] * kaizh (~kaizh@128-107-239-235.cisco.com) Quit (Remote host closed the connection)
[1:02] * yanzheng (~zhyan@134.134.139.74) Quit (Remote host closed the connection)
[1:02] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[1:05] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:06] * fdmanana (~fdmanana@bl13-134-213.dsl.telepac.pt) Quit (Quit: Leaving)
[1:10] * xarses (~andreww@nat-dip5.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[1:10] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[1:15] * xarses (~andreww@nat-dip5.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:16] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[1:16] * yanzheng (~zhyan@134.134.139.76) has joined #ceph
[1:16] * philips (~philips@ec2-23-22-175-220.compute-1.amazonaws.com) Quit (Quit: http://ifup.org)
[1:19] * philips (~philips@ec2-23-22-175-220.compute-1.amazonaws.com) has joined #ceph
[1:21] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[1:31] * sprachgenerator (~sprachgen@130.202.135.181) Quit (Quit: sprachgenerator)
[1:43] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[1:44] * ivotron_ (~ivotron@eduroam-238-49.ucsc.edu) has joined #ceph
[1:45] * sarob (~sarob@2001:4998:effd:600:2144:3dab:d3c3:48ed) Quit (Remote host closed the connection)
[1:45] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:45] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[1:48] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[1:48] * ivotron_ (~ivotron@eduroam-238-49.ucsc.edu) Quit (Read error: Operation timed out)
[1:49] * sarob_ (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:49] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[1:52] * ivotron (~ivotron@dhcp-59-219.cse.ucsc.edu) Quit (Ping timeout: 480 seconds)
[1:56] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[1:57] * ircolle (~Adium@2601:1:8380:2d9:4d17:65d8:6f86:3d40) Quit (Quit: Leaving.)
[1:57] * ircolle (~Adium@2601:1:8380:2d9:4d17:65d8:6f86:3d40) has joined #ceph
[1:58] * ircolle (~Adium@2601:1:8380:2d9:4d17:65d8:6f86:3d40) Quit ()
[1:58] * ircolle (~Adium@2601:1:8380:2d9:4d17:65d8:6f86:3d40) has joined #ceph
[1:59] * ircolle (~Adium@2601:1:8380:2d9:4d17:65d8:6f86:3d40) Quit ()
[2:00] * sarob (~sarob@2001:4998:effd:600:59b1:7ba7:63e3:4d87) has joined #ceph
[2:01] * sarob_ (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Read error: Operation timed out)
[2:01] * vata (~vata@2607:fad8:4:6:a1d0:1440:7619:29f3) Quit (Quit: Leaving.)
[2:02] * Cube (~Cube@38.122.20.226) Quit (Quit: Leaving.)
[2:02] * xarses (~andreww@nat-dip5.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[2:03] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[2:05] * xianxia (~chatzilla@119.39.124.239) has joined #ceph
[2:08] * sarob (~sarob@2001:4998:effd:600:59b1:7ba7:63e3:4d87) Quit (Ping timeout: 480 seconds)
[2:08] * alram (~alram@38.122.20.226) Quit (Read error: Operation timed out)
[2:08] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[2:14] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[2:15] <sherry> would I be able to change the metadata pool, or does it always choose pool_ID=1 as the metadata pool?!
[2:15] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[2:16] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[2:16] * yguang11 (~yguang11@2406:2000:ef96:e:39ab:2c42:7eb1:6ee6) has joined #ceph
[2:16] <jcsp> sherry: you can reset your filesystem to use different pools using the "ceph mds newfs" command
[2:20] <sherry> thanks jcsp
[2:22] * yanzheng (~zhyan@134.134.139.76) Quit (Remote host closed the connection)
[2:23] * keeperandy (~textual@c-71-200-84-53.hsd1.md.comcast.net) has joined #ceph
[2:28] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Read error: Connection reset by peer)
[2:28] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[2:30] * rmoe_ (~quassel@12.164.168.117) Quit (Read error: Operation timed out)
[2:31] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[2:32] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Remote host closed the connection)
[2:33] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[2:34] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:35] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[2:38] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[2:38] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:40] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Read error: Operation timed out)
[2:40] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[2:43] * haomaiwang (~haomaiwan@117.79.232.198) has joined #ceph
[2:43] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[2:43] * haomaiwang (~haomaiwan@117.79.232.198) Quit (Remote host closed the connection)
[2:44] * haomaiwang (~haomaiwan@49.4.189.45) has joined #ceph
[2:45] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[2:45] * haomaiwang (~haomaiwan@49.4.189.45) Quit (Read error: Connection reset by peer)
[2:45] * haomaiwang (~haomaiwan@117.79.232.198) has joined #ceph
[2:46] * haomaiwang (~haomaiwan@117.79.232.198) Quit (Remote host closed the connection)
[2:46] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[2:48] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[2:50] <sherry> jcsp: if I use more than 1 data pool, how can I use that command?
[2:51] <jcsp> you do newfs with one, and then add the others with mds add_data_pool
[2:51] * Pedras (~Adium@216.207.42.132) Quit (Ping timeout: 480 seconds)
[2:51] * talonisx (~talonisx@pool-108-18-97-131.washdc.fios.verizon.net) has joined #ceph
[2:51] <sherry> thanks :)
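
A minimal sketch of what jcsp describes above, with placeholder pool names and IDs (none of these values come from this channel):

    # create the pools the filesystem should use
    ceph osd pool create fsmeta 128
    ceph osd pool create fsdata 128
    ceph osd dump | grep pool              # note the numeric IDs of the new pools
    # point the filesystem at them (argument order: metadata pool ID, then data pool ID);
    # this resets the filesystem, so only do it on an empty/expendable CephFS
    ceph mds newfs <metadata-pool-id> <data-pool-id> --yes-i-really-mean-it
    # further data pools can then be attached
    ceph mds add_data_pool <extra-data-pool-id>
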
[2:51] <talonisx> hi everyone
[2:55] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[2:59] * xarses (~andreww@12.164.168.117) has joined #ceph
[2:59] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:01] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[3:05] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[3:07] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[3:08] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[3:08] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[3:11] * erkules (~erkules@port-92-193-86-125.dynamic.qsc.de) has joined #ceph
[3:18] * Cube (~Cube@66-87-64-144.pools.spcsdns.net) has joined #ceph
[3:18] * yanzheng (~zhyan@134.134.139.74) has joined #ceph
[3:18] * erkules_ (~erkules@port-92-193-104-121.dynamic.qsc.de) Quit (Ping timeout: 480 seconds)
[3:24] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) Quit (Ping timeout: 480 seconds)
[3:25] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[3:28] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[3:29] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:32] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[3:33] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[3:35] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[3:35] * cm (~chatzilla@119.39.124.239) has joined #ceph
[3:38] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[3:43] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[3:44] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[3:46] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[3:46] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[3:46] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:50] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[3:51] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[3:55] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[3:55] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[3:56] * kaizh (~kaizh@128-107-239-234.cisco.com) has joined #ceph
[3:56] * glzhao (~glzhao@220.181.11.232) has joined #ceph
[3:57] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[3:57] * sarob (~sarob@2601:9:7080:13a:e029:5935:2d32:4ef0) has joined #ceph
[3:57] * kaizh (~kaizh@128-107-239-234.cisco.com) Quit (Read error: Connection reset by peer)
[3:58] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[3:58] * tsnider (~tsnider@198.95.226.236) has joined #ceph
[3:58] * tsnider_ (~oftc-webi@198.95.226.236) has joined #ceph
[3:58] * bandrus (~Adium@adsl-75-5-250-121.dsl.scrm01.sbcglobal.net) Quit (Quit: Leaving.)
[4:03] * lianghaoshen (~slhhust@119.39.124.239) has joined #ceph
[4:04] <tsnider> ok newbie question here. I have kernel mounted RBD devices on 'real' hardware not VMS. I'm using vdbench to get some rough performance data on them. Writes to the devices seem ok - I see disk activity, thruput and I/O rates are sane. However reads are a different situation. vdbench reports unreasonably great thruput numbers and I see no disk activity -- as if all the data is being cached. What
[4:04] <tsnider> / where can I start looking to see why the read requests aren't hitting the RBD devices (and the underlying storage)? thx
[4:04] <JoeGruher> RBDs are thin provisioned, try writing in the whole length of the RBD using FIO or DD and then performing reads
[4:05] <JoeGruher> I don't know for sure but I suspect it doesn't bother to hit the disks if you try to read an area in the RBD it knows has never been written
[4:05] <JoeGruher> it probably just sends you zeros or some junk out of memory
[4:05] <dmick> what JoeGruher said
[4:05] * sarob (~sarob@2601:9:7080:13a:e029:5935:2d32:4ef0) Quit (Ping timeout: 481 seconds)
[4:07] <tsnider> JoeGruher -- yeah that came to mind. I'll see if that makes a difference. I'm running writes and reads alternately using vdbench- I thought that might take care of it but obviously not.
[4:07] <dmick> tsnider you can do "rbd info" to get the base part of the object names that make up the rbd image and then "rados ls | grep" to find those objects; you'll notice that, after create, there are no objects created
[4:07] <dmick> after a set of random writes, there will be a sparse set of objects created, probably
[4:07] <tsnider> dmick: thx -- I'll look at that also
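
A rough illustration of both suggestions; the pool/image names and the /dev/rbd0 mapping are assumptions, not taken from the discussion:

    # find the object name prefix for the image, then count its backing objects
    rbd info rbd/testimg                   # note the block_name_prefix field
    rados -p rbd ls | grep <block_name_prefix> | wc -l
    # fully populate the thin-provisioned image before running read benchmarks
    dd if=/dev/zero of=/dev/rbd0 bs=4M oflag=direct
    # re-running the object count afterwards should show the image fully backed,
    # and reads will then actually hit the OSDs
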
[4:08] * xianxia_ (~chatzilla@119.39.124.239) has joined #ceph
[4:12] * xianxia (~chatzilla@119.39.124.239) Quit (Ping timeout: 480 seconds)
[4:12] * xianxia_ is now known as xianxia
[4:28] * markbby (~Adium@168.94.245.2) has joined #ceph
[4:32] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[4:35] * haomaiwa_ (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[4:35] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Read error: Connection reset by peer)
[4:37] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[4:37] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[4:39] * carif (~mcarifio@pool-173-76-155-34.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:44] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[4:44] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[4:45] * ircolle (~Adium@2601:1:8380:2d9:742c:f28b:de79:2add) has joined #ceph
[4:52] * tsnider_ (~oftc-webi@198.95.226.236) Quit (Remote host closed the connection)
[4:58] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[4:59] * tsnider (~tsnider@198.95.226.236) Quit (Ping timeout: 480 seconds)
[4:59] * tsnider (tsnider@ip68-102-128-87.ks.ok.cox.net) has joined #ceph
[5:00] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[5:02] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[5:03] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[5:06] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:14] * haomaiwang (~haomaiwan@117.79.232.201) has joined #ceph
[5:15] * haomaiwang (~haomaiwan@117.79.232.201) Quit (Remote host closed the connection)
[5:15] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[5:17] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[5:18] * haomaiwa_ (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[5:18] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[5:18] * tsnider (tsnider@ip68-102-128-87.ks.ok.cox.net) Quit (Ping timeout: 480 seconds)
[5:23] * Vacum_ (~vovo@88.130.211.172) has joined #ceph
[5:26] * BillK (~BillK-OFT@58-7-73-145.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[5:27] * BillK (~BillK-OFT@124-148-67-206.dyn.iinet.net.au) has joined #ceph
[5:29] * keeperandy (~textual@c-71-200-84-53.hsd1.md.comcast.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[5:30] * Vacum (~vovo@i59F7A76A.versanet.de) Quit (Ping timeout: 480 seconds)
[5:31] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) Quit (Quit: leaving)
[5:45] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[5:45] * sarob (~sarob@2601:9:7080:13a:81d4:865e:96ca:9bcd) has joined #ceph
[5:53] * sarob (~sarob@2601:9:7080:13a:81d4:865e:96ca:9bcd) Quit (Ping timeout: 480 seconds)
[5:53] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:54] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[5:55] * xmltok_ (~xmltok@cpe-76-90-130-65.socal.res.rr.com) Quit (Remote host closed the connection)
[5:55] * xmltok (~xmltok@cpe-76-90-130-65.socal.res.rr.com) has joined #ceph
[6:00] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[6:00] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[6:01] * xianxia (~chatzilla@119.39.124.239) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 24.0/20130910160258])
[6:01] * ivotron (~ivotron@50-0-125-100.dsl.dynamic.sonic.net) has joined #ceph
[6:01] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[6:08] * ivotron_ (~ivotron@50-0-125-109.dsl.dynamic.sonic.net) has joined #ceph
[6:10] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[6:11] * cm (~chatzilla@119.39.124.239) Quit (Ping timeout: 480 seconds)
[6:13] * lianghaoshen (~slhhust@119.39.124.239) Quit (Ping timeout: 480 seconds)
[6:15] <winston-d> /win 3
[6:15] * ivotron (~ivotron@50-0-125-100.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[6:16] * The_Bishop (~bishop@2001:470:50b6:0:f997:65f0:1a92:e01d) Quit (Ping timeout: 480 seconds)
[6:27] * ivotron_ (~ivotron@50-0-125-109.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[6:30] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:30] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[6:30] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[6:31] * cm (~chatzilla@222.240.177.42) has joined #ceph
[6:31] * cm (~chatzilla@222.240.177.42) Quit ()
[6:31] * Cube (~Cube@66-87-64-144.pools.spcsdns.net) Quit (Quit: Leaving.)
[6:35] * ivotron (~ivotron@50-0-124-88.dsl.dynamic.fusionbroadband.com) has joined #ceph
[6:35] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[6:38] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:42] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:42] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[6:43] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[6:44] * Fetch (fetch@gimel.cepheid.org) Quit (Read error: Operation timed out)
[6:46] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[6:46] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[6:48] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:49] * talonisx (~talonisx@pool-108-18-97-131.washdc.fios.verizon.net) has left #ceph
[6:50] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[6:52] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit ()
[6:53] * geraintjones (~geraint@222-152-77-45.jetstream.xtra.co.nz) has joined #ceph
[6:53] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[6:53] <geraintjones> Yo - anyone able to give me a hand with a pretty unbalanced cluster ?
[6:53] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[6:55] <geraintjones> http://pastebin.com/DxZ4NQu3
[6:55] <geraintjones> all the ~2.8 weighted OSDs are 3tb so 2.8tb formatted
[6:56] <scuttlemonkey> geraintjones: many folks are involved in wrapping up our online developer summit at the moment
[6:56] <scuttlemonkey> and will probably retire to bed and/or work once we are finished
[6:56] <geraintjones> the 8.2 OSDs are 9TB RAIDs so 8.2TB formatted.
[6:56] <geraintjones> @scuttlemonkey yeah - i will just post and see if anyone looks at it later
[6:56] <cephalobot> geraintjones: Error: "scuttlemonkey" is not a valid command.
[6:57] <geraintjones> just driving me crazy
[6:57] <scuttlemonkey> cool, just didn't want you to get radio silence
[6:57] <geraintjones> i reweight an OSD down, another OSD fills
[6:57] <geraintjones> its like trying to get the bubble out of wall paper
[6:57] <geraintjones> :)
[6:58] <mikedawson> geraintjones: do you use pools with 3 replicas?
[6:58] <geraintjones> nope - x2 I believe
[6:58] <geraintjones> is there a way to verify ?
[6:59] <geraintjones> I have inherited this rather than built from new
[6:59] <mikedawson> geraintjones: 'ceph osd dump | grep pool'
[6:59] <geraintjones> pool 3 'volumes' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 800 pgp_num 800 last_change 1602 owner 0
[6:59] <geraintjones> pool 4 'images' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 800 pgp_num 800 last_change 2232 owner 0
[6:59] <geraintjones> pool 5 'vms' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 800 pgp_num 800 last_change 10333 owner 0
[6:59] <geraintjones> pool 9 'vms-staging' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 14137 owner 0
[6:59] <geraintjones> so nope - x2's
[7:02] * The_Bishop (~bishop@2001:470:50b6:0:f997:65f0:1a92:e01d) has joined #ceph
[7:02] * rturk is now known as rturk-away
[7:03] * ivotron (~ivotron@50-0-124-88.dsl.dynamic.fusionbroadband.com) Quit (Remote host closed the connection)
[7:04] * tiger (~textual@58.213.102.114) has joined #ceph
[7:05] <mikedawson> geraintjones: you could a) add more storage, or b) try 'ceph osd reweight-by-utilization [threshold]' http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
[7:06] <geraintjones> a) is in hand but ~ 10 days away
[7:06] <geraintjones> will look at b now :)
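
For reference, a minimal sketch of option (b) from the linked docs; the threshold and OSD id below are illustrative, not values suggested in the channel:

    # reweight OSDs whose utilization exceeds 120% of the cluster average
    ceph osd reweight-by-utilization 120
    # an individual over-full OSD can also be nudged down by hand, e.g.
    ceph osd reweight 12 0.9
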
[7:08] * ivotron_ (~ivotron@50-0-124-88.dsl.dynamic.fusionbroadband.com) has joined #ceph
[7:10] * ivotron_ (~ivotron@50-0-124-88.dsl.dynamic.fusionbroadband.com) Quit (Read error: Connection reset by peer)
[7:10] * ivotron__ (~ivotron@50-0-124-88.dsl.dynamic.fusionbroadband.com) has joined #ceph
[7:12] * ivotron (~ivotron@2601:9:2700:178:18b2:d9a5:4cc2:61d5) has joined #ceph
[7:12] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[7:13] * ivotron (~ivotron@2601:9:2700:178:18b2:d9a5:4cc2:61d5) Quit (Remote host closed the connection)
[7:14] * ivotron (~ivotron@2601:9:2700:178:18b2:d9a5:4cc2:61d5) has joined #ceph
[7:17] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[7:19] * ivotron__ (~ivotron@50-0-124-88.dsl.dynamic.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[7:20] <geraintjones> thanks mikedawson its rebalancing now
[7:20] <geraintjones> hopefully will buy me the 10days until I can get to the DC with the new storage nodes
[7:21] <mikedawson> geraintjones: good luck, you may see quite a bit of churn as PGs are remapped
[7:21] <geraintjones> yeah - getting used to that
[7:22] <geraintjones> I think the fact its so heavily filling 6 out of 27 osd's may indicate a bug in the algo used to balance things
[7:22] <geraintjones> I dunno.
[7:27] <mikedawson> geraintjones: there have been some placement improvements in newer releases. I think maybe -> http://lists.ceph.newdream.net/pipermail/ceph-commit-ceph.newdream.net/2013-February/018502.html
[7:27] <geraintjones> looks tasty
[7:28] <geraintjones> we will be up to ~200TB soon and its getting hard to swallow "we are 70% full, we need more room"
[7:28] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 27.0.1/20140212131424])
[7:29] * ircolle (~Adium@2601:1:8380:2d9:742c:f28b:de79:2add) Quit (Quit: Leaving.)
[7:31] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[7:31] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[7:31] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit ()
[7:32] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[7:32] <yguang11> you can use 'sudo ceph pg dump | grep "osdstat" -A 10000 | grep -vP "osdstat|sum" | awk -F" " '{print $1, $2, $3, $4, ($2/$4)*100}' | sort -k 5 -n'
[7:32] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit ()
[7:33] <yguang11> to check the osd usage (sorted).
[7:34] * zyluo (~zyluo@fmdmzpr02-ext.fm.intel.com) has joined #ceph
[7:34] * zyluo (~zyluo@fmdmzpr02-ext.fm.intel.com) has left #ceph
[7:35] * haomaiwa_ (~haomaiwan@118.186.151.59) has joined #ceph
[7:39] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:40] * xmltok_ (~xmltok@cpe-76-90-130-65.socal.res.rr.com) has joined #ceph
[7:40] * xmltok (~xmltok@cpe-76-90-130-65.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[7:41] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[7:42] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[7:43] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[7:43] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit ()
[7:48] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) has joined #ceph
[7:48] <zviratko> hi there
[7:49] <zviratko> is anybody here familiar with EMC ScaleIO and how it compares to ceph? information is a little scarce
[7:49] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:52] * mattt (~textual@94.236.7.190) has joined #ceph
[7:53] * tiger (~textual@58.213.102.114) Quit (Quit: Textual IRC Client: www.textualapp.com)
[7:54] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:59] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[8:00] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[8:00] * mattt_ (~textual@94.236.7.190) has joined #ceph
[8:00] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[8:00] * mattt_ is now known as mattt
[8:04] * haomaiwa_ (~haomaiwan@118.186.151.59) Quit (Remote host closed the connection)
[8:04] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[8:06] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[8:24] * erice__ (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[8:24] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[8:24] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[8:25] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[8:29] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[8:31] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:33] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[8:34] * hijacker (~hijacker@213.91.163.5) Quit (Remote host closed the connection)
[8:36] * ivotron_ (~ivotron@50-0-124-42.dsl.dynamic.fusionbroadband.com) has joined #ceph
[8:39] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:44] * ivotron__ (~ivotron@2601:9:2700:178:d0c1:de81:696:fca2) has joined #ceph
[8:44] * ivotron (~ivotron@2601:9:2700:178:18b2:d9a5:4cc2:61d5) Quit (Ping timeout: 480 seconds)
[8:45] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[8:45] * hijacker (~hijacker@bgva.sonic.taxback.ess.ie) has joined #ceph
[8:47] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Quit: Leaving.)
[8:49] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) has joined #ceph
[8:50] * ivotron_ (~ivotron@50-0-124-42.dsl.dynamic.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[8:57] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[8:57] * erice__ (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[8:57] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[9:07] * imriz (~imriz@82.81.163.130) has joined #ceph
[9:07] <imriz> Hi. I am building a ceph cluster, with 3 radosgw servers. I want to load balance them with haproxy. My question is if I need to configure session stickiness, or do the servers somehow share session information between them?
[9:10] * fred` (fred@earthli.ng) Quit (Ping timeout: 480 seconds)
[9:14] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Read error: Connection reset by peer)
[9:16] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:18] * erice (~erice@c-50-183-189-30.hsd1.co.comcast.net) has joined #ceph
[9:18] * erice_ (~erice@c-50-183-189-30.hsd1.co.comcast.net) has joined #ceph
[9:22] * rendar (~s@host252-182-dynamic.3-79-r.retail.telecomitalia.it) has joined #ceph
[9:24] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[9:25] * erice_ (~erice@c-50-183-189-30.hsd1.co.comcast.net) Quit (Read error: Operation timed out)
[9:25] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:26] * erice_ (~erice@c-50-183-189-30.hsd1.co.comcast.net) has joined #ceph
[9:26] * erice__ (~erice@c-50-183-189-30.hsd1.co.comcast.net) has joined #ceph
[9:28] * thb (~me@port-12566.pppoe.wtnet.de) has joined #ceph
[9:28] * erice (~erice@c-50-183-189-30.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[9:31] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:31] * hjjg (~hg@p3EE31CCB.dip0.t-ipconnect.de) has joined #ceph
[9:32] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:33] * erice__ (~erice@c-50-183-189-30.hsd1.co.comcast.net) Quit (Read error: Operation timed out)
[9:33] * JML_ (~oftc-webi@193.252.138.241) Quit (Quit: Page closed)
[9:35] * erice_ (~erice@c-50-183-189-30.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[9:39] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[9:40] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:41] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[9:41] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[9:41] * alexxy[home] (~alexxy@79.173.81.171) has joined #ceph
[9:42] * sarob (~sarob@2601:9:7080:13a:795f:ae41:8364:6a20) has joined #ceph
[9:44] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[9:44] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[9:50] * sarob (~sarob@2601:9:7080:13a:795f:ae41:8364:6a20) Quit (Ping timeout: 480 seconds)
[9:59] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[10:03] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[10:03] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[10:09] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:13] <jerker> I am running RBD (kernel module) fine on a machine. Today when I re
[10:13] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[10:13] * allsystemsarego (~allsystem@188.25.129.255) has joined #ceph
[10:15] * yanzheng (~zhyan@134.134.139.74) Quit (Remote host closed the connection)
[10:16] * ivotron__ (~ivotron@2601:9:2700:178:d0c1:de81:696:fca2) Quit (Remote host closed the connection)
[10:19] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[10:21] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) Quit (Read error: Connection reset by peer)
[10:22] <jerker> I was running Cephfs and RBD fine on a machine. But today when I rebooted it, Cephfs stopped mounting. (RBD still works fine). I get the "mount error 5 = Input/output error". As far as I know I mount the same way as yesterday. Is there any particular log I should look into?
[10:23] * TMM (~hp@c97185.upc-c.chello.nl) Quit (Remote host closed the connection)
[10:23] * jtangwk (~Adium@gateway.tchpc.tcd.ie) Quit (Quit: Leaving.)
[10:24] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[10:25] * Cube (~Cube@66-87-64-62.pools.spcsdns.net) has joined #ceph
[10:25] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) has joined #ceph
[10:28] * abique (~abique@time2market1.epfl.ch) has left #ceph
[10:33] * sarob (~sarob@2601:9:7080:13a:ce6:62f8:6b6d:ec9) has joined #ceph
[10:35] * schmee (~quassel@41.78.129.253) Quit (Remote host closed the connection)
[10:36] * xianxia (~chatzilla@202.197.9.8) has joined #ceph
[10:36] * jtangwk (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[10:36] <xianxia> hi, who knows how to view the history of this IRC channel?
[10:39] * leochilll (~leochill@nyc-333.nycbit.com) Quit (Ping timeout: 480 seconds)
[10:39] * piezo (~kvirc@107-197-220-222.lightspeed.irvnca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[10:41] * sarob (~sarob@2601:9:7080:13a:ce6:62f8:6b6d:ec9) Quit (Ping timeout: 480 seconds)
[10:41] * Cube (~Cube@66-87-64-62.pools.spcsdns.net) Quit (Quit: Leaving.)
[10:45] * Cube (~Cube@66-87-64-62.pools.spcsdns.net) has joined #ceph
[10:47] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[10:51] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[10:59] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:03] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[11:04] <joao> jerker, most likely you'll want to crank up 'debug mds = 10' and 'debug ms = 1' on the mds, and look at the mds's log
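
One way to apply that suggestion, assuming a stock installation with default log paths (the exact restart command depends on the init system):

    # on the MDS host, add to ceph.conf:
    #   [mds]
    #       debug mds = 10
    #       debug ms = 1
    # then restart the MDS and watch its log
    service ceph restart mds
    tail -f /var/log/ceph/ceph-mds.*.log
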
[11:05] * isodude (~isodude@kungsbacka.oderland.com) Quit (Quit: Leaving)
[11:05] * isodude (~isodude@kungsbacka.oderland.com) has joined #ceph
[11:06] * xmltok (~xmltok@cpe-76-90-130-65.socal.res.rr.com) has joined #ceph
[11:06] * xmltok_ (~xmltok@cpe-76-90-130-65.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[11:06] * lofejndif (~lsqavnbok@bolobolo2.torservers.net) has joined #ceph
[11:09] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Quit: Leaving.)
[11:11] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[11:11] * Cube (~Cube@66-87-64-62.pools.spcsdns.net) Quit (Quit: Leaving.)
[11:12] * sleinen (~Adium@2001:620:0:26:50d7:2c4a:45c1:fd2b) has joined #ceph
[11:15] * capri (~capri@212.218.127.222) has joined #ceph
[11:18] * leochill (~leochill@nyc-333.nycbit.com) has joined #ceph
[11:21] * b0e1 (~aledermue@juniper1.netways.de) has joined #ceph
[11:21] * xianxia (~chatzilla@202.197.9.8) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 24.0/20130910160258])
[11:25] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[11:26] * al (d@niel.cx) has joined #ceph
[11:27] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[11:28] * lofejndif (~lsqavnbok@8JQAAGSTQ.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[11:33] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[11:34] * dvanders (~dvanders@dvanders-air.cern.ch) has joined #ceph
[11:37] * dvanders (~dvanders@dvanders-air.cern.ch) has left #ceph
[11:37] * dvanders (~dvanders@dvanders-air.cern.ch) has joined #ceph
[11:40] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) has joined #ceph
[11:42] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:44] * sarob (~sarob@2601:9:7080:13a:352c:c709:d043:4beb) has joined #ceph
[11:44] * schmee (~quassel@phobos.isoho.st) Quit (Quit: No Ping reply in 180 seconds.)
[11:45] * yipwp_ (~yipwp@nusnet-194-201.dynip.nus.edu.sg) has joined #ceph
[11:46] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[11:47] * al (d@niel.cx) Quit (Remote host closed the connection)
[11:48] * yipwp_ is now known as yipwp
[11:52] <yipwp> hi joao, wondering if you are around?
[11:52] * sarob (~sarob@2601:9:7080:13a:352c:c709:d043:4beb) Quit (Ping timeout: 480 seconds)
[11:52] <joao> yipwp, I am
[11:52] <joao> what's up?
[11:52] <yipwp> i've posted an issue about mon to the mailing list, someone said you might be able to shed some light on it.
[11:53] <yipwp> my mons are crashing on startup because of an error, i'm wondering if i can 'bypass' it to make the mon start again
[11:53] <joao> what's the email subject?
[11:53] <yipwp> Help! All ceph mons crashed.
[11:54] <joao> which list?
[11:54] <yipwp> ceph-users
[11:54] <joao> ah, thunderbird was still fetching stuff
[11:54] <joao> kay, looking
[11:54] <yipwp> wow, thanks. :)
[11:55] <joao> huh, I've seen this before
[11:55] <joao> can't figure out when or why
[11:55] <yipwp> i suspect it has to do with me doing 'rados cppool <old> <new>'
[11:56] <yipwp> and then renaming the new pool to the old name. however, snapshots aren't copied and when the image is deleted, the mon crashed trying to delete a non-existent snap
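
For context, the sequence yipwp describes would look roughly like this (pool names and pg count are placeholders):

    ceph osd pool create volumes.new 800    # destination pool has to exist first
    rados cppool volumes volumes.new        # copies objects, but snapshots are not carried over
    ceph osd pool rename volumes volumes.old
    ceph osd pool rename volumes.new volumes
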
[11:57] <joao> http://tracker.ceph.com/issues/7210
[11:57] * al (quassel@niel.cx) has joined #ceph
[11:57] <joao> looks like this one fell through the cracks
[11:57] <yipwp> yeah, i found that bug
[11:58] <joao> hmm, so they crash on startup?
[11:58] <yipwp> yes, it just tries to do that op and can't start
[11:59] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[11:59] <joao> well, without a hotfix I doubt I can fix it for you; however, it looks like you should only be bit by this if you are issuing that command
[12:00] <joao> this is not updating from the on-disk state
[12:00] <joao> it's rather handling a message
[12:00] <joao> stop sending that message and the mon won't crash
[12:00] <joao> this is not a fix by any means, just a temporary workaround to get your mons going
[12:01] <yipwp> how do I stop sending that message?
[12:01] <joao> there's something trying to remove that snapshot, any idea what it might be?
[12:02] <yipwp> oh. might be openstack. let me see
[12:02] <joao> usually, clients will keep retrying a message until they get an a-okay from the monitors
[12:02] <yipwp> i see
[12:02] <joao> if the monitors crash in the meantime, the client will retry once it reconnects to the monitor
[12:03] <yipwp> got it
[12:03] <yipwp> i understand how it works now.
[12:03] <joao> I'll try to reproduce it in the lab
[12:03] <joao> and get a fix going
[12:04] <joao> yipwp, mind updating 7210?
[12:04] <yipwp> ok sure :)
[12:05] * garphy`aw is now known as garphy
[12:06] * Cube (~Cube@66-87-64-62.pools.spcsdns.net) has joined #ceph
[12:06] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[12:07] * haomaiwa_ (~haomaiwan@49.4.189.43) has joined #ceph
[12:07] * al (quassel@niel.cx) Quit (Remote host closed the connection)
[12:07] <yipwp> thanks lots for your help!
[12:08] <joao> yipwp, let me know if that helps
[12:08] <yipwp> joao, you are awesome. mons started again.
[12:08] <joao> cool
[12:08] <joao> I'll get a fix going for that
[12:08] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[12:11] * al (quassel@niel.cx) has joined #ceph
[12:11] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[12:12] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[12:13] <hybrid512> Hi everyone
[12:14] <hybrid512> I'm new to Ceph and I would like to do something but I'm not sure how to do that ...
[12:14] * Cube (~Cube@66-87-64-62.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[12:14] <hybrid512> here is what I want :
[12:14] <hybrid512> I have 3 nodes with 2 hard drives each : one SSD and one mechanical SATA
[12:15] <hybrid512> I'd like to create 2 pools of storage: one "fast" pool which would store data on the SSDs and one "slow" pool which would use the mechanical disks; I don't want to mix SSDs and mechanical disks in the same storage pool
[12:16] <hybrid512> I created one OSD per disk on each node; now, how can I aggregate them the way I want?
[12:17] * diegows (~diegows@190.190.5.238) has joined #ceph
[12:17] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[12:18] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[12:21] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[12:22] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[12:24] <yipwp> hybrid512: have you read http://ceph.com/docs/master/rados/operations/crush-map/ ?
[12:27] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[12:27] <hybrid512> well, I'm on it and also http://ceph.com/docs/master/rados/operations/placement-groups/ which seems to correspond quite well but there are many concepts that I don't fully understand right now
[12:28] <hybrid512> I'm used to GlusterFS or Sheepdog which are a lot simpler than Ceph but Ceph seems to provide all the features I need in one place ... but it's a lot harder to comprehend ;)
[12:29] <hybrid512> I must leave now, back in 1h
[12:30] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[12:31] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[12:32] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:34] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:35] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[12:36] * al (quassel@niel.cx) Quit (Remote host closed the connection)
[12:36] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[12:36] * al (quassel@niel.cx) has joined #ceph
[12:36] * al (quassel@niel.cx) Quit (Remote host closed the connection)
[12:39] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[12:41] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[12:41] * yipwp (~yipwp@nusnet-194-201.dynip.nus.edu.sg) Quit (Remote host closed the connection)
[12:42] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:46] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[12:49] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[12:49] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[12:50] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[12:51] * yguang11 (~yguang11@2406:2000:ef96:e:39ab:2c42:7eb1:6ee6) Quit (Ping timeout: 480 seconds)
[12:52] * some0ne (~plynch@108-69-236-14.lightspeed.iplsin.sbcglobal.net) has joined #ceph
[12:52] <mattch> hybrid512: It might be worth having a read of http://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/ - which is similar to what you're trying to achieve.
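
A rough CLI sketch of the approach in that post, putting the SSD OSDs under their own CRUSH root; all names, IDs and weights here are illustrative assumptions:

    ceph osd crush add-bucket ssd root
    ceph osd crush add-bucket node1-ssd host
    ceph osd crush move node1-ssd root=ssd
    ceph osd crush set osd.3 1.0 root=ssd host=node1-ssd    # repeat for each SSD-backed OSD
    ceph osd crush rule create-simple ssd-rule ssd host
    ceph osd pool create fast 128 128
    ceph osd pool set fast crush_ruleset <ssd-rule-id>      # ruleset number from "ceph osd crush rule dump"
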
[12:52] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[12:53] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[12:53] <some0ne> Hi all. I have a problem that I have been spending way too much time on. It involves a pg that is 'incomplete' and causing stuck I/O.
[12:54] <some0ne> I have read a bunch about it and figure that it is mostly a lost cause, but there is some data there, just not everything.
[12:54] <some0ne> I would like to just kick it back into complete even if I have to pad whatever with all zeros.
[12:54] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[12:58] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[12:58] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[12:59] * i_m (~ivan.miro@deibp9eh1--blueice2n1.emea.ibm.com) has joined #ceph
[13:00] * Cube (~Cube@66-87-64-62.pools.spcsdns.net) has joined #ceph
[13:01] <some0ne> anyone here?
[13:03] * glzhao (~glzhao@220.181.11.232) Quit (Quit: leaving)
[13:03] * Cube (~Cube@66-87-64-62.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[13:04] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:07] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Remote host closed the connection)
[13:08] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[13:08] * bogus (~bogus@hetznar.boguspackets.net) Quit (Quit: lucampos )
[13:08] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:11] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Ping timeout: 480 seconds)
[13:13] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:13] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[13:17] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[13:18] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:18] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[13:22] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:26] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[13:26] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[13:27] <some0ne> hello
[13:27] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:27] <some0ne> I guess this channel is dead or something
[13:28] <cowbar> lots of PST people in here.. and its 4:30am
[13:28] <some0ne> good point
[13:29] <some0ne> i've been up all night i suppose.
[13:29] <cowbar> but I don't have any idea on your problem.
[13:30] <mo-> bonus: its lunch time in EU
[13:31] <some0ne> do you know how to get the index of all of the /var/lib/ceph/ individual files from a pg? or at least what they are all supposed to be?
[13:31] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:32] <some0ne> or a way to accurately see what files the ceph-osd process is trying to access at any given point in time?
[13:35] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Operation timed out)
[13:35] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[13:35] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) has joined #ceph
[13:36] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:40] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[13:41] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:42] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[13:42] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[13:42] * mattt (~textual@94.236.7.190) Quit (Read error: Operation timed out)
[13:45] <sputnik13> noob question
[13:45] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[13:45] <sputnik13> is there a practical limit for how many spindles a ceph server should have?
[13:45] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:46] <jcsp1> if each spindle is one OSD (usually true), then it mainly depends on the amount of RAM in the server
[13:46] <mo-> iirc the its like 512MB of memory and 1GHz on 1 core per OSD
[13:46] <sputnik13> I have a storage cluster with 17 servers each with dual quad core xeon e5 and about 24GB of ram, I'd like to add more spindles rather than add more servers
[13:46] <mo-> -the
[13:46] * mattt (~textual@94.236.7.190) has joined #ceph
[13:46] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[13:46] <jcsp1> the guideline is between 1-2GB per OSD. Less will *seem* to work, but memory usage increases dramatically during some recovery scenarios.
[13:47] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[13:47] <sputnik13> ic, does that change depending on the size of the osd?
[13:48] <mo-> https://ceph.com/docs/master/start/hardware-recommendations/ there we go
[13:48] <jcsp1> it's not linear with the osd size, no
[13:49] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[13:49] <sputnik13> ok, I get the RAM per OSD instance, is there a scaling guideline for cpu?
[13:49] <mo-> further down on the link I gave
[13:49] <mo-> actually nvm, it doesn't say anything about OSDs
[13:50] <mo-> I do remember it mentioning 1GHz on 1 core per OSD, but I can't find it now
[13:50] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[13:50] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:50] <sputnik13> mo-: right, there's an example configuration but I'm talking about a scenario well past that example
[13:51] <sputnik13> we have R710xd's as storage servers with 12x 3TB disks each
[13:51] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[13:52] <mo-> also the link goes into detail about the ram during recovery
[13:52] <mo-> "however, during recovery they need significantly more RAM (e.g., ~1GB per 1TB of storage per daemon)"
[13:52] <sputnik13> when we get to a point where we need more storage I'd prefer to add more spindles by using a 4U disk chassis rather than add additional servers... adding ram isn't an issue; we have lots of free slots
[13:52] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[13:52] <mo-> you may want to reserve some disk controller slots for journal SSDs
[13:53] <sputnik13> CPU I think will be the limiting factor... the 4U chassis can hold up to 72 drives
[13:53] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:53] <sputnik13> not that we'll fill that any time soon
[13:53] <mo-> hm actually, hmm
[13:54] <sputnik13> I'm guessing I can also reduce some of the impact by using an HBA to RAID some of the drives to reduce the # of OSDs
[13:54] * al (quassel@niel.cx) has joined #ceph
[13:54] <sputnik13> is that correct?
[13:54] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[13:54] <mo-> can do that, but is not recommended, since ceph does what a RAID would
[13:55] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:55] <mo-> there is a practical setup from some university in france
[13:55] <mo-> the article says they run 12 3TB disks on 64GB in a R720xd
[13:55] <sputnik13> I get that would negate some of the advantages of having a 1:1 mapping of OSD to spindle but if it means I don't have to upgrade all of my storage servers, it might be worth it
[13:56] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[13:56] <sputnik13> well, I guess the best thing is to set up a test system and run some tests if and when we get to the point of needing additional drives
[13:56] <mo-> however apparently they went somewhat overboard since they grouped the 12 disks into 4 RAID5s
[13:56] <sputnik13> :\
[13:56] <sputnik13> mo-: ic
[13:57] <mo-> err. 3 RAID5s, my bad
[13:57] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[13:57] * Vacum_ is now known as Vacum
[13:57] <sputnik13> mo-: so in that example they could have gone with specs to fit 3x OSDs rather than 12
[13:57] <mo-> http://dachary.org/?p=2087 maybe youll find something interesting in that writeup
[13:57] <sputnik13> mo-: I'll take a look, thank you for the pointers
[13:58] <sputnik13> one more noob question :)
[13:58] <sputnik13> is there a schedule for when a "stable" release is put out?
[13:58] <mo-> assuming this 1GB/1TB during recovery would make 16 3TB disks a possibility, assuming the cpu could handle that
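
Applying the figures quoted above: 16 OSDs x 3 TB x ~1 GB per TB works out to roughly 48 GB of RAM for recovery headroom, versus 16-32 GB by the steady-state 1-2 GB-per-OSD guideline.
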
[13:58] <sputnik13> like how ubuntu has an LTS every 2 years, and openstack has no LTS since everything is deprecated 12 months after release
[13:59] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[13:59] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[13:59] <mo-> http://wiki.ceph.com/Planning/Roadmap/Schedule
[14:00] <mo-> according to previous talks in this channel, D F and H would be inktank-supported releases
[14:00] <mo-> not sure about the last bit tho
[14:00] <sputnik13> sounds like every other letter
[14:00] <mo-> yep
[14:01] <mo-> just not sure if its the ones I gave or the others
[14:01] <sputnik13> which would make firefly a supported release if true
[14:01] <sputnik13> with erasure coding and all
[14:01] <mo-> lemme see, this channel has an archived backlog
[14:01] <sputnik13> :)
[14:02] <Vacum> I also have the information that Inktank Enterprise Support covers Dumpling and (once available) Firefly. But not Emperor
[14:02] <mo-> found the irc log part: "since emperor was a dev/community release the next enterprise release would be firefly"
[14:03] <mo-> and its every other release prior and probably after
[14:03] <mo-> also "(ie. firefly launch -> 1 mo of testing -> Inktank Ceph Enterprise "Firefly" release)"
[14:04] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:05] <sputnik13> is anyone using ceph enterprise?
[14:06] <sputnik13> I mean is anyone *here* using ceph enterprise
[14:07] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[14:08] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:12] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[14:12] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[14:13] <some0ne> anyone know how to force incomplete pgs to go active?
[14:13] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:17] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[14:17] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[14:18] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:22] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[14:22] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:26] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[14:27] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:28] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[14:28] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Read error: Connection reset by peer)
[14:32] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:33] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:35] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[14:36] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[14:36] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:38] * The_Bishop (~bishop@2001:470:50b6:0:f997:65f0:1a92:e01d) Quit (Ping timeout: 481 seconds)
[14:40] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[14:41] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:44] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:44] * The_Bishop (~bishop@2001:470:50b6:0:d837:b2d2:bc1d:8329) has joined #ceph
[14:44] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[14:45] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Remote host closed the connection)
[14:45] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:49] <jtangwk> leseb: the radosgw role is looking good
[14:49] <jtangwk> i would have though sticking with the stock versions of apache would have been a better choice
[14:49] <jtangwk> thought
[14:50] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[14:50] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:52] * hjjg_ (~hg@p3EE32208.dip0.t-ipconnect.de) has joined #ceph
[14:53] <leseb> jtangwk: well I haven't tried yet but this might solve my package issues
[14:53] * hjjg (~hg@p3EE31CCB.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[14:55] <leseb> jtangwk: did you try the branch
[14:55] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[14:56] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:57] <leseb> jtangwk: ?
[14:57] * mattt_ (~textual@94.236.7.190) has joined #ceph
[14:58] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[14:59] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:00] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[15:00] * mattt_ is now known as mattt
[15:00] <jtangwk> leseb: not yet, i just spent a small bit of time reviewing it
[15:00] <jtangwk> im going to try later on this evening when im home
[15:01] <jtangwk> i dont have my laptop which is more powerful than my desktop to run the vms
[15:01] <jtangwk> also, i was wondering if you'd accept patches to make the vagrant/test system a bit smaller, it seems excessive to have 3 mons and 3 osd's etc..
[15:01] <jtangwk> it might be better to test with 1 mon and 2 osd nodes so it's a little more lightweight
[15:02] <jtangwk> or colocate all the services on three vm's or something like that
[15:02] <leseb> jtangwk: yes sure, I believe 3 VMs are ok (mons/osds/mds), that's what I do on my laptop
[15:02] * fdmanana (~fdmanana@bl9-171-73.dsl.telepac.pt) has joined #ceph
[15:03] <leseb> jtangwk: but at some point having more VMs allows us to handle more use cases and to cover more of them too
[15:03] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[15:03] <jtangwk> yea thats true, i could just bootup a bunch of vm's in opennebula to do that ;)
[15:03] <leseb> jtangwk: héhé :)
[15:03] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[15:04] <jtangwk> also, are you testing on debian?
[15:04] <jtangwk> or ubuntu or both?
[15:04] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:04] <leseb> jtangwk: mainly on ubuntu but for new features I always try on both
[15:05] <leseb> jtangwk: I tested the initial commit on both though
[15:05] <leseb> jtangwk: I still haven't tried the rgw on debian yet
[15:05] <jtangwk> ok, was just curious, for us the ceph experience has been better on ubuntu
[15:05] <jtangwk> ceph with debian/rhel/centos hasn't been great for us
[15:06] <jtangwk> well at least that was the case a year ago
[15:06] <jtangwk> leseb: in relation to the cephx keys, i see that you're pulling them down to the ansible 'master'
[15:07] <jtangwk> then pushing it back out again, is there any reason why you dont just copy the keys to the osds, mds's directly from the mon[0] ?
[15:07] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:07] <jtangwk> i guess doing that would mean that you need to sort out ssh keys and agent forwarding
[15:08] * markbby (~Adium@168.94.245.3) has joined #ceph
[15:09] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[15:09] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:10] <leseb> jtangwk: I just put them on the ansible server because I consider it to be a safe location and because they need to be present somehow
[15:11] <leseb> jtangwk: let's say you have 3 mon VMs and 3 osd VMs, you need to get the bootstrap keys onto your osds, and in the meantime I don't want to put the admin key everywhere
[15:11] <jtangwk> you could probably constrain that copy to only mon[1]...[n]
[15:12] * sroy (~sroy@207.96.182.162) has joined #ceph
[15:12] <jtangwk> i should probably submit my haproxy stuff to that repo once the radosgw stuff is done
[15:12] <leseb> jtangwk: yes please do :)
[15:12] <jtangwk> that would be a nice complete example
[15:13] <jtangwk> we use haproxy in front of radosgw here, and its quite nice
[15:13] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[15:13] <leseb> jtangwk: I don't copy them directly because they might not exist
[15:13] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:13] <leseb> jtangwk: well this is what I do already with "when: ansible_hostname == hostvars[groups['mons'][0]]['ansible_hostname'] and cephx" right?
[15:13] <jtangwk> leseb: you could probably setup a notification handler to do the copy once the keys are generated
[15:14] <leseb> jtangwk: yes we could do this as well :)
[15:14] <jtangwk> leseb: yeap
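A minimal sketch of the fetch-then-push pattern being discussed, assuming illustrative group names (mons, osds) and the stock bootstrap-osd keyring path; this is not the actual ceph-ansible task list:

    - hosts: mons
      tasks:
        - name: fetch the bootstrap-osd keyring from the first mon
          fetch:
            src: /var/lib/ceph/bootstrap-osd/ceph.keyring
            dest: fetched/ceph.bootstrap-osd.keyring
            flat: yes
          when: inventory_hostname == groups['mons'][0]

    - hosts: osds
      tasks:
        - name: push the keyring back out to the OSD nodes
          copy:
            src: fetched/ceph.bootstrap-osd.keyring
            dest: /var/lib/ceph/bootstrap-osd/ceph.keyring
            mode: "0600"

Copying straight from mon[0] to the other nodes would avoid the round trip through the control host, but, as noted above, it would need SSH keys or agent forwarding between the cluster nodes themselves.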
[15:15] <jtangwk> i think you're probably going to recreate a bunch of functionality that exists already in ceph-deploy once you are done with these scripts
[15:15] <leseb> jtangwk: well the way ceph-deploy does things is certainly the right one, so yes :)
[15:16] <leseb> jtangwk: but ceph-deploy only does ceph, whereas ansible does everything :)
[15:17] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Remote host closed the connection)
[15:18] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:18] <alfredodeza> actually, one of the initial intentions of ceph-deploy was to allow a user to see what it does so that then they could translate that into a configuration manager
[15:19] <alfredodeza> so all in all, having paved the way for best practices with ceph-deploy it is completely OK to have the same behavior 'translated' into a configuration management tool like ansible
[15:19] <leseb> alfredodeza: :D
[15:21] * xmltok (~xmltok@cpe-76-90-130-65.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:22] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[15:23] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:25] <leseb> alfredodeza: btw is this one ok for you? https://github.com/ceph/ceph-ansible/pull/4
[15:25] <alfredodeza> leseb: merged
[15:26] <leseb> alfredodeza: thanks :)
[15:27] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:27] * tsnider (~tsnider@216.240.30.25) has joined #ceph
[15:29] * some0ne (~plynch@108-69-236-14.lightspeed.iplsin.sbcglobal.net) Quit (Quit: leaving)
[15:31] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[15:31] * diegows (~diegows@200.68.116.185) has joined #ceph
[15:31] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[15:32] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:34] <jtangwk> alfredodeza: yea i kinda got that impression
[15:34] <jtangwk> we were wrapping ceph-deploy in ansible at some point
[15:35] <alfredodeza> now that is not a good idea
[15:35] <jtangwk> it's not a bad tool for deploying ceph ;)
[15:35] <leseb> jtangwk: some puppet modules do this as well
[15:35] <jtangwk> yea we're lazy and dont have time
[15:35] <jtangwk> i kinda need to review my ceph_facts module for ansible as well
[15:35] <jtangwk> my original idea was to write an ansible module to deploy ceph, but i never got around to it
[15:36] <leseb> jtangwk: did you build custom facts?
[15:36] <leseb> jtangwk: I had the same idea for ceph-disk
[15:36] * sarob (~sarob@2601:9:7080:13a:c537:d52d:53b6:5879) has joined #ceph
[15:36] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:37] <jtangwk> leseb: it wrapped the json output from various ceph commands
[15:37] <jtangwk> it was pretty useful for gathering statistics and reacting to changes to the system
[15:37] <jtangwk> i think the ceph commands have changed since i last used my facts module
[15:37] <jtangwk> i kinda feel that the ceph-api looks more promising and useful
[15:37] <leseb> jtangwk: oh this is why you were mentioning the ceph-rest-api also?
[15:37] <leseb> jtangwk: :)
[15:38] <jtangwk> yea
[15:38] <jtangwk> i'd like to be able to create/destroy entities in ceph via the api
[15:39] <jtangwk> but first i'd need the api running, this is something that's on our roadmap to look at in 6-12 months time for our project needs
[15:39] <jtangwk> well 'our roadmap' is an exaggeration, it's more like 'stuff i want to look at' to make our system more robust
[15:40] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[15:40] <jtangwk> leseb: https://github.com/jcftang/ansible-ceph_facts
[15:40] <leseb> jtangwk: FYI the radosgw playbook works fine after removing the 'optimized' packages :D
[15:40] <jtangwk> its old, and probably doesnt work properly any more
[15:40] <leseb> jtangwk: ok gonna have a look at it
[15:41] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:42] <jtangwk> heh "ceph osd tree --format=json"
[15:43] <jtangwk> thats a nice one to be using
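A rough sketch of consuming that JSON straight from a playbook, without a custom facts module; the task and variable names are illustrative, and it assumes an Ansible recent enough to ship the from_json filter:

    - hosts: mons
      tasks:
        - name: grab the OSD tree as JSON
          command: ceph osd tree --format=json
          register: osd_tree_raw
          changed_when: false

        - name: expose the parsed tree as a fact
          set_fact:
            osd_tree: "{{ osd_tree_raw.stdout | from_json }}"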
[15:44] <leseb> jtangwk: I'm gonna push a first version without the optimized packages I guess, what do you think?
[15:44] <jtangwk> leseb: go for it! i feel that using the stock apache versions is probably better than trying to use the optimized ones
[15:44] * sarob (~sarob@2601:9:7080:13a:c537:d52d:53b6:5879) Quit (Ping timeout: 480 seconds)
[15:44] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[15:45] <jtangwk> i wouldnt feel comfortable with packages being force-installed on my system as a user
[15:45] <leseb> jtangwk: well the optimized one completely breaks
[15:45] <jtangwk> im not sure what other users feel
[15:46] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:46] * jtangwk reads the ceph-ansible thread
[15:49] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[15:50] <jtangwk> nice, there are others interested
[15:50] <leseb> jtangwk: yup
[15:50] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:51] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:51] <leseb> jtangwk: I think the next step (after rgw and distro awareness) is to provide some test
[15:51] <svg> jtangwk: on what list is that ceph-ansible thread?
[15:52] <leseb> svg: ceph-dev
[15:52] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[15:52] <svg> thx
[15:52] <leseb> svg: http://www.spinics.net/lists/ceph-devel/msg18227.html
[15:52] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:52] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Remote host closed the connection)
[15:52] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[15:52] <jtangwk> leseb: on some of our internal playbooks, i just use the uri module to try and do a get request on the service
[15:53] <jtangwk> so after an install/upgrade it checks, if it fails, it doesnt continue on
[15:53] <jtangwk> i of course run upgrades with a serial: of 1 or some small number
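Roughly what that pattern looks like as a play; the group name, package name and URL below are placeholders, not anything from the repo being discussed:

    - hosts: rgws
      serial: 1
      tasks:
        - name: upgrade the gateway package, one node at a time
          apt:
            name: radosgw
            state: latest

        - name: stop the rollout if the gateway no longer answers
          uri:
            url: "http://{{ ansible_fqdn }}/"
            status_code: 200

With serial: 1, a failed check on one host stops the play before the next host is touched.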
[15:53] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:53] <leseb> jtangwk: ok but I meant other playbooks to check that all the services are up and running and functional as expected
[15:54] <jtangwk> leseb: that was the reason i started the ceph_facts module
[15:54] <leseb> jtangwk: oh I see
[15:54] <jtangwk> as i wanted to see the state of the cluster from ansible
[15:54] <svg> w00t, official Ceph ansible-roles and all
[15:54] <svg> wonderful
[15:54] <jtangwk> amongst other things
[15:54] <svg> I'm only on the user list, so I missed that
[15:55] <jtangwk> leseb: actually, i should submit a PR to get that module put into the ceph-ansible repo
[15:55] <leseb> svg: interesting things are also happening on the dev one :)
[15:55] <leseb> jtangwk: yes probably :)
[15:55] <jtangwk> once its in, you could just use it to get data from the mons, then react accordingly
[15:55] <leseb> jtangwk: we need a way to test the setup
[15:56] <jtangwk> i should get my co-worker involved in this
[15:56] <svg> I already struggle to understand everything on -user :)
[15:57] <leseb> svg: :)
[15:57] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[15:59] <svg> leseb: at first glance, I'd think the stuff you put in group_vars would better fit within the roles, as defaults/
[15:59] <svg> The key unit for distributing ansible components is the role; stuff in other places, like inventory, group_vars etc., is merely implementation examples
[15:59] <svg> (FWIW)
[16:00] <leseb> svg: well usually hosts stick with roles, where hosts are connected to group_vars
[16:01] <leseb> svg: but yes, vars could also live in a vars section within the playbooks
[16:01] <svg> no, I'm talking about setting default vars in roles/$role/defaults/main.yml
[16:02] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[16:02] <svg> that way everyone can overrule them from their own inventory
[16:02] <svg> defaults are nice for eg use with vagrant
[16:03] <svg> but when you integrate this in a real infrastructure, you probably want to use the roles, overrule certain vars from inventory, etc
[16:03] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[16:03] <svg> (I'm mostly talking about the default vars you set in group_vars/all)
[16:04] <svg> But just a first impression/idea, as I said, FWIW
[16:04] <svg> thanks for it
[16:05] <leseb> svg: you're welcome, what you're saying is interesting, could you please write a feature request or enhancement on the github?
[16:05] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[16:05] <leseb> svg: so we can keep track of this https://github.com/ceph/ceph-ansible/issues?state=open
[16:07] * linuxkidd (~linuxkidd@2001:420:2100:2258:39d3:de25:be2d:1e03) has joined #ceph
[16:07] <svg> sure, I'll first take some time to play with them though, to make it better founded :)
[16:07] <leseb> svg: ok no problem thanks for testing it :)
[16:08] <svg> btw, are you aware of ansible galaxy? that's where roles get exchanged, which also shows that roles are the key part that should be easily distributed, more so than playbooks and inventory things
[16:09] <leseb> svg: yes I am
[16:09] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[16:09] <leseb> svg: i'm gonna have a look
[16:10] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[16:10] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[16:11] <svg> https://galaxy.ansible.com/
[16:12] <leseb> svg: just subscribed :), actually I had a quick look for a ceph playbook and since nothing was there I decided to make it
[16:12] <leseb> svg: but very brief look :p
[16:12] <svg> now you have to tell me leseb what's the deal with the chicken?
[16:13] <svg> (on your blog)
[16:14] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:15] <leseb> svg: haha oh ok, well it's more about the cow than the chicken, but I was looking for a picture for this blog (always the funniest part for me btw :p). Then I thought about cowsay (since ansible can work with it) and then thought about the cow and chicken tv show
[16:15] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[16:16] <tsnider> Can some|any|every one tell me the status of erasure code changes for Ceph? Are they available and somewhat stable, available but buggy, buggy and still in development? If they're still buggy / in development what's the latest target date? Thx
[16:16] <leseb> svg: but on second thought this picture might not be interpreted correctly ^^
[16:16] <leseb> tsnider: this is planned for Firefly
[16:16] <leseb> tsnider: so should be available on stable
[16:17] <svg> cow's are nice
[16:17] <svg> there should be more moo'ing in life
[16:17] <leseb> haha
[16:18] <tsnider> leseb: thx -- I'll look at Firefly
[16:18] <leseb> tsnider: should be released soon :)
[16:18] <jtangwk> leseb: svg: yea role defaults would be nice
[16:18] <zviratko> sorry for repeating my question, but does anybody here have experience with EMC ScaleIO and how it compares to ceph?
[16:18] <jtangwk> i override stuff in my inventory as well
[16:18] <leseb> zviratko: I don't sorry
[16:18] <jtangwk> i guess im a typical ansible user :P
[16:20] * sarob (~sarob@2601:9:7080:13a:80e9:192c:4841:4e03) has joined #ceph
[16:21] <svg> leseb: jtangwk I might have a stab at it and show you what I mean by means of a PR
[16:23] <leseb> svg: yes sure, after looking at the galaxy this seems the way to go
[16:23] <leseb> svg: so please go ahead with a PR :)
[16:25] <svg> that will change me from writing stuff to manage Java apps :)
[16:25] <jtangwk> leseb: seems that old ceph_facts module still works
[16:25] <jtangwk> at least it didnt give me an error
[16:25] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[16:25] <jtangwk> PR is submitted
[16:25] <leseb> jtangwk: cool
[16:25] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[16:28] * sarob (~sarob@2601:9:7080:13a:80e9:192c:4841:4e03) Quit (Ping timeout: 480 seconds)
[16:29] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[16:32] <leseb> svg: stupid question, when you put variables in vars/main.yml (inside your role), does it mean that you have to go through the file to change a var, or is there a simple way to override it with a higher-level file in the hierarchy?
[16:32] <jtangwk> leseb: you can set hostvars in the inventory
[16:33] <jtangwk> and override the roles
[16:33] <jtangwk> the best thing to do is create a roles/foo/defaults/main.yml to store defaults
[16:33] <svg> if you want to have default that can be overridden, you need to put them in defaults/main
[16:33] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[16:33] <jtangwk> that will ensure it will have the lowest priority
[16:33] <svg> vars in vars/main cannot be overridden
[16:33] <jtangwk> then either override them in group_vars/all or your hosts file
[16:34] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[16:34] <svg> I'd avoid using the hosts files for vars though
[16:34] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) has joined #ceph
[16:34] <jtangwk> svg: yea well, it depends on what you are storing in the hosts file
[16:35] <jtangwk> i just store passwords myself in the hosts file
[16:35] <jtangwk> as i know that only exists in one place and i never commit the hosts file to my playbook repos
[16:35] <svg> makes sense
[16:36] <jtangwk> the whole variable hierarchy in ansible is a bit of a mess tbh
[16:36] <jtangwk> but its still nice to use
[16:36] <svg> there are more things that are a bit of a mess, but yes :)
[16:37] <svg> lots of things that were developed piece by piece
[16:37] * ajazdzewski (~quassel@lpz-66.sprd.net) Quit (Remote host closed the connection)
[16:37] <svg> the key is to avoid using too many places for your vars
[16:37] <svg> use role defaults, avoid role vars
[16:38] <jtangwk> yea, agreed
[16:38] <leseb> svg: jtangwk I agree, the good thing about group_vars/all or group_vars/osds is that you don't need to go through all the roles dirs and then edit the vars file
[16:39] <leseb> but using vars/main.yml is probably cleaner
[16:39] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[16:39] <leseb> I was just thinking of something like the "params.pp" puppet does
[16:40] <leseb> but this doesn't seem to be possible with ansible
[16:42] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Operation timed out)
[16:44] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[16:44] * sprachgenerator (~sprachgen@173.150.128.251) has joined #ceph
[16:53] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[16:57] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[16:58] <jtangwk> leseb: group_vars/all is probably the closest thing i guess
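The layering being suggested, in sketch form; the variable names and values are examples only, not the repo's actual ones:

    # roles/ceph-common/defaults/main.yml -- lowest precedence, shipped with the role
    journal_size: 1024
    cephx: true

    # group_vars/all -- site-wide overrides kept with your inventory
    journal_size: 10240

    # inventory -- a per-host override wins over both of the above
    # osd3.example.com journal_size=20480

Anything not overridden falls back to the role default, which is what makes a role shareable (e.g. via Galaxy) without users having to edit files inside it.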
[17:03] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:06] * markbby (~Adium@168.94.245.3) has joined #ceph
[17:06] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[17:07] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[17:08] <svg> leseb: any idea why vagrant puts files like disk-0-0.vdi in the repo? the generated disks are the vmdk's that live in the virtualbox folder
[17:11] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[17:11] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[17:15] <leseb> jtangwk: ok thanks
[17:15] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Remote host closed the connection)
[17:15] <leseb> svg: no but I've found this really annoying as well
[17:16] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:16] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[17:18] * diegows (~diegows@200.68.116.185) Quit (Remote host closed the connection)
[17:18] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:19] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[17:20] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[17:21] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[17:25] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[17:25] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[17:25] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz???)
[17:26] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[17:27] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Remote host closed the connection)
[17:27] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[17:28] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:28] * leochill (~leochill@nyc-333.nycbit.com) Quit (Ping timeout: 480 seconds)
[17:28] * leochill (~leochill@nyc-333.nycbit.com) has joined #ceph
[17:29] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:30] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[17:30] * fred` (fred@2001:4dd0:ff00:8ea1:2010:abec:24d:2500) has joined #ceph
[17:31] * ircolle (~Adium@2601:1:8380:2d9:b5e6:4d17:d2e1:4159) has joined #ceph
[17:33] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) has joined #ceph
[17:33] * ivotron (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[17:35] * markbby (~Adium@168.94.245.3) has joined #ceph
[17:37] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[17:39] * imriz (~imriz@82.81.163.130) Quit (Read error: Operation timed out)
[17:40] <JoeGruher> are there conditions where the underlying objects in an RBD could change their placement group mappings? rebalancing or recovering from failure or if the number of PGs in the pool was increased? or are the object to PG mappings totally static?
[17:44] <via> as i understand it they are totally static wrt the crush map
[17:44] <JoeGruher> hmm
[17:45] <via> they can be temporarily remapped, but the home nodes will stay the same
[17:45] <JoeGruher> what about the PG to OSD mappings then, those can change?
[17:46] <JoeGruher> an object will always have the same PG, but the PG could end up pointing to different OSDs after (for example) an HDD failure?
[17:47] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:50] * bandrus (~Adium@adsl-75-5-250-121.dsl.scrm01.sbcglobal.net) has joined #ceph
[17:50] <jcsp> JoeGruher: that's right. Aside from the case you mentioned above where the number of PGs is increased, and the PGs are "split" with some objects in old PGs and some in new.
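A simplified way to write down the two mappings being discussed (not the exact implementation, which uses a stable mod on the hash and CRUSH internally):

    \mathrm{pg} = \mathrm{hash}(\mathrm{object\_name}) \bmod \mathrm{pg\_num} \quad \text{(within the object's pool)}
    \qquad
    \mathrm{osds} = \mathrm{CRUSH}(\mathrm{pg},\ \text{cluster map},\ \text{rule})

The first mapping only changes when pg_num changes (the splitting case above); the second depends on the CRUSH map and OSD states, so it is what moves when disks fail, get reweighted, or the map is edited.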
[17:53] * joef (~Adium@2620:79:0:131:c8b0:3332:538e:4f79) has joined #ceph
[17:53] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[18:01] * b0e1 (~aledermue@juniper1.netways.de) Quit (Remote host closed the connection)
[18:03] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[18:03] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[18:03] <JoeGruher> thx!
[18:03] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[18:04] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[18:06] * i_m (~ivan.miro@deibp9eh1--blueice2n1.emea.ibm.com) Quit (Quit: Leaving.)
[18:06] * sprachgenerator (~sprachgen@173.150.128.251) Quit (Quit: sprachgenerator)
[18:15] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[18:20] * sarob (~sarob@2601:9:7080:13a:314c:c151:f728:ebfd) has joined #ceph
[18:23] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:23] * allsystemsarego (~allsystem@188.25.129.255) Quit (Ping timeout: 480 seconds)
[18:26] * kaizh (~kaizh@128-107-239-234.cisco.com) has joined #ceph
[18:27] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[18:27] * allsystemsarego (~allsystem@188.25.129.255) has joined #ceph
[18:28] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:28] * sarob (~sarob@2601:9:7080:13a:314c:c151:f728:ebfd) Quit (Ping timeout: 480 seconds)
[18:32] * hjjg_ (~hg@p3EE32208.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[18:32] * orion195 (~oftc-webi@213.244.168.133) has joined #ceph
[18:33] <orion195> hi guys, what repository do I have to use on a fc20?
[18:33] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:34] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[18:36] * thb (~me@port-12566.pppoe.wtnet.de) has joined #ceph
[18:36] * thb is now known as Guest2418
[18:37] * ivotron_ (~ivotron@50-0-68-214.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[18:38] * ivotron (~ivotron@2601:9:2700:178:4f4:dc45:5d7c:74af) has joined #ceph
[18:40] * Guest2418 is now known as thb
[18:42] * garphy is now known as garphy`aw
[18:46] * nwat (~textual@eduroam-252-224.ucsc.edu) has joined #ceph
[18:47] * capri (~capri@212.218.127.222) has joined #ceph
[18:52] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) Quit (Quit: Leaving.)
[18:53] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) has joined #ceph
[18:58] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[18:59] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has left #ceph
[19:00] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[19:01] * reed (~reed@50-0-92-79.dsl.dynamic.sonic.net) has joined #ceph
[19:02] * sleinen (~Adium@2001:620:0:26:50d7:2c4a:45c1:fd2b) Quit (Quit: Leaving.)
[19:02] * sleinen (~Adium@130.59.94.188) has joined #ceph
[19:04] * ivotron_ (~ivotron@50-0-125-166.dsl.dynamic.sonic.net) has joined #ceph
[19:05] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:06] * ivotron__ (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) has joined #ceph
[19:09] * fridudad (~oftc-webi@p5DD4FD8D.dip0.t-ipconnect.de) has joined #ceph
[19:10] * sleinen (~Adium@130.59.94.188) Quit (Ping timeout: 480 seconds)
[19:11] * ivotron (~ivotron@2601:9:2700:178:4f4:dc45:5d7c:74af) Quit (Ping timeout: 480 seconds)
[19:13] * ivotron_ (~ivotron@50-0-125-166.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[19:16] * scalability-junk (uid6422@id-6422.ealing.irccloud.com) Quit ()
[19:16] * ircolle (~Adium@2601:1:8380:2d9:b5e6:4d17:d2e1:4159) Quit (Quit: Leaving.)
[19:19] * dmsimard (~Adium@ap03.wireless.co.mtl.iweb.com) has joined #ceph
[19:20] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[19:27] * Cube (~Cube@66-87-64-62.pools.spcsdns.net) has joined #ceph
[19:29] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Quit: Leaving.)
[19:30] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[19:30] <fghaas> anyone around here to help me interpret this wtf moment that my cluster is giving me? https://gist.github.com/fghaas/9396338
[19:31] <fghaas> those are three ceph -s calls in rapid succession. nothing else is touching the cluster. what would cause the permission error in the middle?
[19:32] <gregsfortytwo> do you have 3 mons configured?
[19:32] <gregsfortytwo> almost looks like maybe they screwed up and created two quorums
[19:33] <fghaas> well yeah the one mon (the one running on this same machine) does indeed look like it's acting up, yes
[19:33] * vote-for-choice (~quantumst@ool-457ca46c.dyn.optonline.net) has joined #ceph
[19:33] <vote-for-choice> Vote agaist systemd, Please Second this proposal: https://lists.debian.org/debian-vote/2014/03/msg00000.html
[19:34] <vote-for-choice> Vote for init system choice.
[19:34] <fghaas> gregsfortytwo: that mon is evidently flooding my logs with "2014-03-06 19:32:33.940897 7fb23fcab700 1 mon.ubuntu-ceph2@0(leader).paxos(paxos active c 1..11) is_readable now=2014-03-06 19:32:33.940906 lease_expire=0.000000 has v0 lc 11"
[19:35] <gregsfortytwo> that line doesn't seem like it should be a problem, but I don't remember
[19:35] <gregsfortytwo> what version is it?
[19:35] <gregsfortytwo> joao might be able to help you out, he went through some issues with this recently on the ceph-deploy tests
[19:35] <fghaas> dumpling
[19:36] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[19:36] <vote-for-choice> please vote for that proposal
[19:36] <fghaas> oh lord, now the anti-lennart crowd is taking to irc bots? groan.
[19:36] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:37] * `10 (~10@69.169.91.14) has joined #ceph
[19:37] <fghaas> but how on earth would a mon respond, then say "there's a quorum but I'm not part of it"?
[19:37] <fghaas> that seems nonsensical
[19:39] <fghaas> this is rather apparently related, but still don't know what to make of it: "2014-03-06 19:37:42.174244 7f95b3208700 0 cephx server client.admin: unexpected key: req.key=fa3c960ad61b77bc expected_key=fbce97cccae7f3f2"
[19:43] * `10_ (~10@juke.fm) Quit (Ping timeout: 480 seconds)
[19:47] * `10` (~10@69.169.91.14) has joined #ceph
[19:49] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) Quit (Ping timeout: 480 seconds)
[19:50] * asadpanda (~asadpanda@67.231.236.80) has joined #ceph
[19:52] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[19:53] * JC (~JC@2601:9:5980:39b:2d6a:dea0:5dd1:ea40) has joined #ceph
[19:53] * JC (~JC@2601:9:5980:39b:2d6a:dea0:5dd1:ea40) Quit ()
[19:53] * JCL1 (~JCL@2601:9:5980:39b:f93b:20a1:62d8:8759) Quit (Quit: Leaving.)
[19:54] * `10 (~10@69.169.91.14) Quit (Ping timeout: 480 seconds)
[19:54] * JCL (~JCL@2601:9:5980:39b:392d:8934:ebb1:9fdd) has joined #ceph
[20:05] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[20:07] * vote-for-choice (~quantumst@ool-457ca46c.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[20:15] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[20:15] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:bd76:3c51:bf1f:617f) has joined #ceph
[20:25] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz???)
[20:27] * sarob (~sarob@ip-64-134-228-129.public.wayport.net) has joined #ceph
[20:32] * ivotron__ (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[20:38] * ivotron_ (~ivotron@2601:9:2700:178:a961:1dea:3b74:7c97) has joined #ceph
[20:39] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Quit: sync && halt)
[20:41] * vata (~vata@2607:fad8:4:6:33:43e5:3624:6097) has joined #ceph
[20:42] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[20:44] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:48] <bens> i created a bad key, and when I run it again, i get a "doesn't match" error.
[20:48] <bens> i don't quite understand cephx, so i am flying blind here.
[20:48] <bens> Error EINVAL: key for client.glance exists but cap mon does not match
[20:48] <bens> created it with ceph auth get-or-create client.glance
[20:48] <bens> now i want to add the parameter to it or delete that key
[20:48] <bens> ceph auth has no delete options
[20:51] <bens> how do i add caps? of course they don't match
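For reference, the caps on an existing key can be changed in place with "ceph auth caps", or the key can be dropped with "ceph auth del" and recreated; a sketch in the same Ansible-task form as the examples above, with the glance caps shown purely as an illustration:

    - name: update the caps on the existing key
      command: ceph auth caps client.glance mon 'allow r' osd 'allow rwx pool=images'

    - name: or drop the key entirely and recreate it with the right caps
      command: ceph auth del client.glance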
[20:51] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:51] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[20:51] * JCL (~JCL@2601:9:5980:39b:392d:8934:ebb1:9fdd) Quit (Quit: Leaving.)
[20:51] * illya (~illya_hav@111-88-133-95.pool.ukrtel.net) has joined #ceph
[20:51] <illya> hello
[20:52] <bens> hi
[20:52] <illya> I had 9 osd's
[20:52] <illya> and I removed manually 5
[20:52] <bens> did you accidentally the whole cluster?
[20:52] <illya> following http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
[20:52] <illya> no all ok
[20:52] <illya> very simple question
[20:52] <illya> http://pastebin.com/acQctWqH
[20:53] <illya> any way to remove old nodes from the tree ?
[20:53] <bens> i only know how to manually remove them from the crush map
[20:54] <bens> i know there is a way to do it without exporting and reimporting it, but I'm not sure
[20:54] <illya> ok
[20:54] <illya> I would like to avoid this if possible
[20:55] <illya> or leave as it is for now
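For what it's worth, stale entries can usually be dropped from the tree without exporting and recompiling the CRUSH map, using "ceph osd crush remove"; again in sketch form, with the osd id and host name purely illustrative:

    - name: drop a leftover osd entry from the crush map
      command: ceph osd crush remove osd.5

    - name: drop the now-empty host bucket as well
      command: ceph osd crush remove ceph-node3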
[20:55] * diegows (~diegows@200.68.116.185) has joined #ceph
[20:57] * illya (~illya_hav@111-88-133-95.pool.ukrtel.net) has left #ceph
[21:04] <bens> I solved my problem with ceph-authtool.. I don't see how to get the caps though
[21:04] <bens> ceph auth list.
[21:04] <bens> ignore me.
[21:05] * dennis (dennis@tilaa.krul.nu) has joined #ceph
[21:15] * sleinen (~Adium@2001:620:0:26:cd23:54f5:32fd:a0f6) has joined #ceph
[21:26] * allsystemsarego (~allsystem@188.25.129.255) Quit (Quit: Leaving)
[21:32] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[21:41] * diegows (~diegows@200.68.116.185) Quit (Ping timeout: 480 seconds)
[21:42] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[21:48] * kaizh (~kaizh@128-107-239-234.cisco.com) Quit (Remote host closed the connection)
[21:53] * jharley (~jharley@76-10-151-146.dsl.teksavvy.com) has joined #ceph
[21:57] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[21:59] * fridudad (~oftc-webi@p5DD4FD8D.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[21:59] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:00] * ircolle (~Adium@2601:1:8380:2d9:1893:b260:8be1:49e8) has joined #ceph
[22:07] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz???)
[22:07] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[22:09] * sputnik13 (~sputnik13@client64-80.sdsc.edu) has joined #ceph
[22:12] * sputnik13 (~sputnik13@client64-80.sdsc.edu) Quit ()
[22:13] * sputnik13 (~sputnik13@client64-80.sdsc.edu) has joined #ceph
[22:14] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) Quit (Remote host closed the connection)
[22:15] * laviandra (~lavi@eth-seco21th2-46-193-64-32.wb.wifirst.net) has joined #ceph
[22:15] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:bd76:3c51:bf1f:617f) Quit (Remote host closed the connection)
[22:18] * kaizh (~kaizh@128-107-239-233.cisco.com) has joined #ceph
[22:19] * brambles (lechuck@s0.barwen.ch) Quit (Ping timeout: 480 seconds)
[22:21] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[22:21] * brambles (lechuck@s0.barwen.ch) has joined #ceph
[22:23] * sleinen (~Adium@2001:620:0:26:cd23:54f5:32fd:a0f6) Quit (Ping timeout: 480 seconds)
[22:27] * taras (~taras@vps.glek.net) has joined #ceph
[22:27] <taras> are there non-hw companies that do managed ceph and bare metal openstack, we are looking at some combo of the 2 at mozilla and it looks like too much to handle internally
[22:28] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[22:29] <dennis> taras: well, what about inktank?
[22:29] <taras> i was hoping for a combo deal
[22:29] <taras> to have less vendors, inktank is on our list as an option
[22:30] <taras> maybe inktank has some partners for this
[22:31] <dennis> yea you should ask them
[22:35] <dennis> how many vm's do you need to deploy? isn't openstack+rbd a bit of overkill for a "private cloud"?
[22:36] <taras> we have ~3K machines
[22:36] <taras> plus up to 1500vms
[22:36] <taras> the machines are not cloud-managed atm
[22:36] <taras> mostly interested in ceph for object storage
[22:36] <taras> rbd is not interesting for us
[22:37] <geraintjones> Hi guys, is there a formula to work out how many MONs you should have?
[22:37] <dennis> ah in that case, why go the openstack route, just build a ceph cluster
[22:37] <dennis> and let your vm stuff be as is
[22:38] <darkfader> dennis: most openstack deployments are <100 core, people seem to really like overkill
[22:38] <dennis> darkfader: i really don't understand that, at all :)
[22:38] <geraintjones> haha i wouldn't call openstack + ceph/rbd overkill
[22:38] <dennis> darkfader: technically it's interesting and challenging and all that, but from a business perspective it doesn't make sense imo
[22:39] <geraintjones> it makes a lot of sense
[22:39] <dennis> openstack is hardly a finished product you can just install and use
[22:39] <geraintjones> AWS and the likes are incredibly expensive
[22:39] <darkfader> dennis: i agree
[22:39] <geraintjones> you can easily replace a $30k a month spend with $100k of hardware
[22:39] <taras> heh
[22:40] <geraintjones> that is the business case and works really well
[22:40] <taras> we are replacing 120K a month of aws spend
[22:40] <geraintjones> yea
[22:40] <taras> with 30k aws spend
[22:40] <taras> probly lower
[22:40] <taras> it's not that expensive compared to overhead of managing own hw
[22:40] <taras> if done right
[22:43] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[22:43] <geraintjones> we did 30k and moved to openstack
[22:43] <geraintjones> we had 96 cores initally
[22:43] <geraintjones> we are up to 300 now
[22:43] <geraintjones> once you start to see all the benefits, you expand pretty quickly
[22:43] <geraintjones> plus we have 10gb facing Rackspace
[22:43] <geraintjones> so we can burst there if we really have to
[22:44] * jharley (~jharley@76-10-151-146.dsl.teksavvy.com) Quit (Quit: jharley)
[22:45] <dennis> geraintjones: do you use other stuff besides compute and object store?
[22:45] <lurbs> geraintjones: I believe that there's no real value in going over 5 mons, so either 3 or 5.
[22:46] <geraintjones> we use nova, cinder, glance, quantum (now neutron)
[22:46] <geraintjones> we also use rbd directly inside the KVM VMs
[22:46] <geraintjones> koding.com
[22:47] <geraintjones> we build 16gb KVM VMs and then run ~20 LXC based VMs inside them
[22:47] <geraintjones> those are using RBD+AuFS
[22:48] <geraintjones> lurbs - but no harm?
[22:48] <gregsfortytwo> wait, geraintjones, are you actually bursting?
[22:48] <gregsfortytwo> I've never actually met somebody who does bursting for real
[22:48] <geraintjones> we have 8 - because our puppet script decided to setup all our initial compute nodes as MONs :)
[22:49] <geraintjones> gregsfortytwo: yup
[22:49] <dennis> how does that work?
[22:50] <gregsfortytwo> is it bursting for compute and you don't have much data that's required in either location?
[22:50] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) has left #ceph
[22:51] <lurbs> 8 mons seems dodgy to me. You want an odd number, because quorum is a majority vote.
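The arithmetic behind the odd-number advice, for what it's worth:

    \text{quorum size} = \left\lfloor \tfrac{n}{2} \right\rfloor + 1
    \qquad
    \text{tolerated mon failures} = n - \left(\left\lfloor \tfrac{n}{2} \right\rfloor + 1\right)

So 8 monitors need 5 for quorum and survive only 3 failures, exactly the same as 7 monitors, while adding one more peer to every election; hence the usual recommendation of 3 or 5.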
[22:52] <taras> geraintjones: cool setup
[22:52] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[22:52] <geraintjones> we burst into the nearest place we can - we had a big promo over the last 2 weeks
[22:52] <taras> our next step is to multiplex vms with lxc
[22:52] <geraintjones> so we were bursting into Softlayer San Jose
[22:52] <geraintjones> we have 10gb to Equinix, and peer with them there
[22:53] <geraintjones> then it was just an IPSec tunnel from them to us
[22:53] <geraintjones> and then we used Ceph as if it was on the LAN
[22:53] <geraintjones> no issues :)
[22:53] <gregsfortytwo> oh wow
[22:53] <gregsfortytwo> so your private DC is just fairly close to the public one, then
[22:53] <gregsfortytwo> nice
[22:53] <geraintjones> yea :)
[22:54] <lurbs> What sort of speeds are you able to get over the IPsec? Are you doing it in hardware?
[22:54] <dennis> geraintjones: well to conclude, in your case it seems perfectly reasonable to run openstack, but that's not a typical deployment:)
[22:54] <dennis> off to zzZ
[22:54] <geraintjones> done in software with vyatta - the link was limiting us, not the encryption
[22:55] <geraintjones> dual quad box on each end with vyatta.
[22:55] <geraintjones> softlayer do a box with 10gb
[22:55] <geraintjones> you put that on the same subnet as your other instances
[22:55] <geraintjones> and just change their route to use it :)
[22:57] <taras> it doesn't sound too different from what we would do
[22:57] <geraintjones> we are looking to hire ops people - in case this has whetted your appetite :)
[22:57] <taras> geraintjones: how many people manage the openstack portion of this?
[22:57] <geraintjones> umm
[22:57] <geraintjones> me
[22:57] <geraintjones> hence the above :)
[22:57] <taras> so you have 300cores how many machines?
[22:58] <geraintjones> 22
[22:58] <taras> ok
[22:58] <geraintjones> 128gb ram in each
[22:58] <taras> we want to bare metal 3000 machines
[22:58] <taras> with openstack
[22:58] <taras> it's kind of a mess
[22:58] <geraintjones> how many instances ?
[22:58] <taras> around that many
[22:58] <taras> we run perf testing workloads
[22:58] <taras> so cant use vms
[22:59] <geraintjones> ah okay
[22:59] <taras> might be able to cut down our perf testing to 200 machines
[22:59] <taras> and convert rest into vms if we are lucky
[22:59] <taras> but that's long term
[22:59] <geraintjones> so Ironic?
[23:00] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[23:02] * sroy (~sroy@207.96.182.162) Quit (Quit: Quitte)
[23:03] * linuxkidd (~linuxkidd@2001:420:2100:2258:39d3:de25:be2d:1e03) Quit (Quit: Leaving)
[23:09] * garphy`aw is now known as garphy
[23:12] * ivotron_ (~ivotron@2601:9:2700:178:a961:1dea:3b74:7c97) Quit (Remote host closed the connection)
[23:12] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[23:17] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:20] * sputnik13 (~sputnik13@client64-80.sdsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[23:21] * mkoderer (uid11949@id-11949.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[23:22] * rendar (~s@host252-182-dynamic.3-79-r.retail.telecomitalia.it) Quit ()
[23:22] * xbox360 (~ghiigo@94.185.85.106) has joined #ceph
[23:22] <xbox360> free pc game http://tinyurl.com/mawsqmg
[23:22] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[23:25] * gi675ii (~ghiigo@31.192.111.210) has joined #ceph
[23:25] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[23:26] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[23:26] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:27] * gi675ii (~ghiigo@31.192.111.210) Quit (autokilled: Do not spam. mail support@oftc.net (2014-03-06 22:26:48))
[23:27] * sputnik13 (~sputnik13@client64-80.sdsc.edu) has joined #ceph
[23:28] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[23:30] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[23:30] * xbox360 (~ghiigo@94.185.85.106) Quit (Ping timeout: 480 seconds)
[23:37] * sputnik13 (~sputnik13@client64-80.sdsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[23:39] * sputnik13 (~sputnik13@client64-80.sdsc.edu) has joined #ceph
[23:42] * sarob (~sarob@ip-64-134-228-129.public.wayport.net) Quit (Remote host closed the connection)
[23:43] * sarob (~sarob@ip-64-134-228-129.public.wayport.net) has joined #ceph
[23:43] * garphy is now known as garphy`aw
[23:45] * KevinPerks (~Adium@74.122.167.244) has joined #ceph
[23:51] * sarob (~sarob@ip-64-134-228-129.public.wayport.net) Quit (Ping timeout: 480 seconds)
[23:52] * sputnik13 (~sputnik13@client64-80.sdsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[23:58] * tsnider (~tsnider@216.240.30.25) Quit (Ping timeout: 480 seconds)
[23:58] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[23:59] * sputnik13 (~sputnik13@client64-80.sdsc.edu) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.