#ceph IRC Log

IRC Log for 2014-03-20

Timestamps are in GMT/BST.

[0:00] * sarob_ (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[0:00] * sarob (~sarob@2001:4998:effd:600:6d51:f02a:9b80:403d) has joined #ceph
[0:05] * sarob_ (~sarob@2001:4998:effd:600:ceb:3ea2:d3b4:889c) has joined #ceph
[0:08] * sarob (~sarob@2001:4998:effd:600:6d51:f02a:9b80:403d) Quit (Ping timeout: 480 seconds)
[0:08] * yuriw (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) has joined #ceph
[0:09] * yuriw1 (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[0:10] * xmltok_ (~xmltok@216.103.134.250) Quit (Quit: Bye!)
[0:10] * yuriw (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) has left #ceph
[0:11] * Cube (~Cube@66-87-67-206.pools.spcsdns.net) has joined #ceph
[0:14] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[0:16] * dis (~dis@109.110.66.126) Quit (Ping timeout: 480 seconds)
[0:20] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[0:23] * lluis (~oftc-webi@pat.hitachigst.com) Quit (Quit: Page closed)
[0:25] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[0:26] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[0:26] * flaxy (~afx@78.130.174.164) Quit (Quit: WeeChat 0.4.3)
[0:26] * flaxy (~afx@78.130.174.164) has joined #ceph
[0:31] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[0:31] * sarob_ (~sarob@2001:4998:effd:600:ceb:3ea2:d3b4:889c) Quit (Remote host closed the connection)
[0:31] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:36] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[0:36] * rudolfsteiner (~federicon@201-246-85-69.baf.movistar.cl) has joined #ceph
[0:37] <bens> how do I set a default crush_ruleset
[0:38] <bens> it is presently 0 and i have to execute
[0:38] <bens> sudo ceph osd pool set $POOL crush_ruleset 3
[0:38] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[0:38] <bens> or is there a way to set the ruleset at pool creation time
[0:38] * yuriw (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) has joined #ceph
[0:38] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[0:39] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[0:42] * dxd828 (~dxd828@dsl-dynamic-77-44-45-38.interdsl.co.uk) has joined #ceph
[0:43] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[0:44] * bandrus1 (~Adium@75.5.246.77) has joined #ceph
[0:45] <bens> everyone is /away
[0:45] * bandrus (~Adium@adsl-75-5-250-197.dsl.scrm01.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[0:45] <bens> if you guys don't come back soon, I'm going to have to start rapping.
[0:46] <dmick> there is an option, osd_pool_default_crush_replicated_ruleset
[0:46] <bandrus1> dmick: how does that differ from osd_pool_default_crush_rule?
[0:46] <dmick> OPTION(osd_pool_default_crush_rule, OPT_INT, -1) // deprecated for osd_pool_default_crush_replicated_ruleset
[0:47] <dmick> (for EC)
[0:47] <bandrus1> thanks!
[0:47] <dmick> so probably pre-firefly, the other one is right
[0:47] <dmick> (I was looking at master)
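    Put into a ceph.conf sketch, what dmick describes would look roughly like this, assuming a firefly-or-later cluster where crush ruleset 3 already exists (earlier releases use the deprecated osd_pool_default_crush_rule key instead):

        [global]
        # freshly created replicated pools will pick up crush ruleset 3 by default
        osd pool default crush replicated ruleset = 3
        # pre-firefly equivalent (deprecated):
        # osd pool default crush rule = 3
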
[0:49] <houkouonchi-home> ok so on apt-mirror just waiting for the DC to swap the disks
[0:59] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:00] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[1:09] * Pedras (~Adium@216.207.42.132) Quit (Ping timeout: 480 seconds)
[1:11] * dxd828 (~dxd828@dsl-dynamic-77-44-45-38.interdsl.co.uk) Quit (Quit: Textual IRC Client: www.textualapp.com)
[1:16] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:18] <houkouonchi-home> ok apt-mirror back up
[1:19] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[1:19] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit ()
[1:20] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[1:20] * kaizh (~kaizh@128-107-239-233.cisco.com) Quit (Remote host closed the connection)
[1:26] * olc- (~olecam@paola.glou.fr) has joined #ceph
[1:26] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[1:28] * olc (~olecam@paola.glou.fr) Quit (Ping timeout: 480 seconds)
[1:29] * dis (~dis@109.110.66.238) has joined #ceph
[1:29] * keeperandy (~textual@68.55.0.244) has joined #ceph
[1:29] * dis (~dis@109.110.66.238) Quit ()
[1:30] * dis (~dis@109.110.66.238) has joined #ceph
[1:33] * ChrisNBlum1 (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: Leaving.)
[1:35] <ponyofdeath> hi, reading this doc https://ceph.com/docs/master/rbd/libvirt/ it says to set the target dev for rbd device to bus=ide. is this better performing over virtio?
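    For reference, a minimal sketch of the libvirt disk stanza being asked about, with pool, image, monitor host and secret UUID as placeholders; virtio is generally the faster choice over emulated ide, provided the guest has virtio drivers (the ide example in the docs is mainly for guests that don't):

        <disk type='network' device='disk'>
          <driver name='qemu' type='raw'/>
          <source protocol='rbd' name='libvirt-pool/my-image'>
            <host name='ceph-mon.example.com' port='6789'/>
          </source>
          <auth username='libvirt'>
            <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
          </auth>
          <target dev='vda' bus='virtio'/>
        </disk>
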
[1:39] * yanzheng (~zhyan@134.134.139.76) has joined #ceph
[2:05] * yguang11 (~yguang11@2406:2000:ef96:e:9cbc:1724:3285:4ca4) has joined #ceph
[2:10] * JeffK (~Jeff@38.99.52.10) Quit (Quit: JeffK)
[2:14] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[2:16] * bazli999 (bazli@d.clients.kiwiirc.com) has joined #ceph
[2:19] * yuriw (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[2:19] * yuriw (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) has joined #ceph
[2:22] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[2:26] * joshd1 (~jdurgin@2602:306:c5db:310:2cc2:45e:99b:6a1b) Quit (Quit: Leaving.)
[2:33] * rudolfsteiner (~federicon@201-246-85-69.baf.movistar.cl) Quit (Quit: rudolfsteiner)
[2:33] * Boltsky (~textual@office.deviantart.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[2:33] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[2:36] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) has joined #ceph
[2:37] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[2:39] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[2:41] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[2:43] * LeaChim (~LeaChim@host86-159-235-225.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:45] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:49] * Guest3866 (~me@2a02:2028:149:4510:6267:20ff:fec9:4e40) Quit (Ping timeout: 480 seconds)
[3:07] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[3:11] * BillK (~BillK-OFT@106-69-36-6.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[3:12] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:12] * BillK (~BillK-OFT@106-69-94-125.dyn.iinet.net.au) has joined #ceph
[3:17] * keeperandy (~textual@68.55.0.244) Quit (Quit: Textual IRC Client: www.textualapp.com)
[3:19] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) has joined #ceph
[3:21] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[3:24] * BillK (~BillK-OFT@106-69-94-125.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[3:26] * BillK (~BillK-OFT@124-148-226-72.dyn.iinet.net.au) has joined #ceph
[3:28] * bitblt (~don@128-107-239-234.cisco.com) Quit (Quit: Leaving)
[3:31] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[3:32] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:37] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[3:39] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[3:40] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit ()
[3:40] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[3:42] * jeff-YF_ (~jeffyf@67.23.123.228) has joined #ceph
[3:44] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[3:44] * jeff-YF_ is now known as jeff-YF
[3:46] * _Gumby (~gumby@ppp121-45-246-83.lns20.per2.internode.on.net) has joined #ceph
[3:47] <_Gumby> hi, having a few problems with v0.77 and radosgw
[3:47] <_Gumby> radosgw-admin user stats --uid=blah gives "ERROR: can't read user header: (95) Operation not supported"
[3:47] <_Gumby> is there some new caps or something radosgw needs in 0.77?
[3:54] * jeff-YF (~jeffyf@67.23.123.228) Quit (Quit: jeff-YF)
[4:00] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[4:02] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Bye!)
[4:08] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:13] * Siva (~sivat@50-76-52-235-ip-static.hfc.comcastbusiness.net) has joined #ceph
[4:16] * jksM (~jks@3e6b5724.rev.stofanet.dk) Quit (Ping timeout: 480 seconds)
[4:25] * kiwnix (~kiwnix@00011f91.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:32] * Siva_ (~sivat@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[4:37] * Siva (~sivat@50-76-52-235-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[4:37] * Siva_ is now known as Siva
[4:39] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[4:49] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[4:57] * bandrus1 (~Adium@75.5.246.77) Quit (Quit: Leaving.)
[4:57] * bandrus (~Adium@adsl-75-5-246-77.dsl.scrm01.sbcglobal.net) has joined #ceph
[4:58] * Siva_ (~sivat@50-76-52-232-ip-static.hfc.comcastbusiness.net) has joined #ceph
[4:58] * bandrus (~Adium@adsl-75-5-246-77.dsl.scrm01.sbcglobal.net) Quit ()
[4:59] * Siva (~sivat@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[4:59] * Siva_ is now known as Siva
[5:02] * Siva_ (~sivat@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[5:07] * Siva (~sivat@50-76-52-232-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[5:07] * Siva_ is now known as Siva
[5:09] * wrale (~wrale@cpe-107-9-20-3.woh.res.rr.com) Quit (Quit: Leaving...)
[5:09] * oblu (~o@62.109.134.112) has joined #ceph
[5:11] * Vacum_ (~vovo@88.130.203.71) has joined #ceph
[5:14] * yanzheng (~zhyan@134.134.139.76) Quit (Remote host closed the connection)
[5:18] * Vacum (~vovo@i59F7A71E.versanet.de) Quit (Ping timeout: 480 seconds)
[5:19] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[5:20] * haomaiwang (~haomaiwan@117.79.232.185) Quit (Remote host closed the connection)
[5:20] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[5:21] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[5:29] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:31] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:39] * Siva (~sivat@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[5:43] * pingu (~christian@mail.ponies.io) has joined #ceph
[5:43] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[5:44] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit ()
[5:44] <pingu> Just released: http://hackage.haskell.org/package/rados-haskell
[5:44] <pingu> Bindings to librados, for haskell
[5:44] * glzhao (~glzhao@123.125.124.17) Quit (Remote host closed the connection)
[5:44] * glzhao (~glzhao@123.125.124.17) has joined #ceph
[5:46] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[5:48] * bazli999 (bazli@d.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[5:50] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) Quit (Quit: Leaving.)
[5:51] * haomaiwa_ (~haomaiwan@117.79.232.240) has joined #ceph
[5:53] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[5:58] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[5:58] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[6:08] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:12] * joshd (~joshd@2607:f298:a:607:193f:fe7:3490:fbc6) Quit (Ping timeout: 480 seconds)
[6:18] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) Quit (Quit: Leaving.)
[6:18] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[6:19] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) Quit ()
[6:19] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[6:22] * joshd (~joshd@2607:f298:a:607:9912:8c49:6358:f812) has joined #ceph
[6:28] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[6:53] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) Quit (Remote host closed the connection)
[6:55] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[6:57] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[6:59] * haomaiwang (~haomaiwan@106.38.255.122) has joined #ceph
[7:03] * haomaiwa_ (~haomaiwan@117.79.232.240) Quit (Ping timeout: 480 seconds)
[7:04] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[7:06] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) has joined #ceph
[7:06] * shahrzad (~sherry@mike-alien.esc.auckland.ac.nz) Quit (Read error: Connection reset by peer)
[7:21] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[7:28] * gaveen (~gaveen@220.247.234.28) has joined #ceph
[7:38] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Ping timeout: 480 seconds)
[7:39] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[7:50] * jtangwk (~Adium@gateway.tchpc.tcd.ie) Quit (Ping timeout: 480 seconds)
[7:54] * madkiss (~madkiss@212.150.252.55) has joined #ceph
[8:09] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[8:17] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[8:19] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[8:19] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[8:23] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:32] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[8:37] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:37] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[8:37] * ChanServ sets mode +v andreask
[8:48] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[8:52] * _Gumby (~gumby@ppp121-45-246-83.lns20.per2.internode.on.net) Quit (Quit: .)
[9:01] * yguang11 (~yguang11@2406:2000:ef96:e:9cbc:1724:3285:4ca4) Quit ()
[9:04] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: Oops. My brain just hit a bad sector)
[9:05] * rendar (~s@host220-176-dynamic.22-79-r.retail.telecomitalia.it) has joined #ceph
[9:08] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (Ping timeout: 480 seconds)
[9:11] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:13] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[9:18] * sleinen1 (~Adium@2001:620:0:26:9cbe:155:2b93:2175) has joined #ceph
[9:18] * sleinen1 (~Adium@2001:620:0:26:9cbe:155:2b93:2175) Quit (Quit: Leaving.)
[9:23] * dmick (~dmick@2607:f298:a:607:d55a:4245:c90e:d3e4) Quit (Ping timeout: 480 seconds)
[9:24] * garphy`aw is now known as garphy
[9:26] * shahrzad (~sherry@mike-alien.esc.auckland.ac.nz) has joined #ceph
[9:26] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) Quit (Read error: Connection reset by peer)
[9:32] * dmick (~dmick@2607:f298:a:607:a841:1201:8b16:f72e) has joined #ceph
[9:33] * Cube (~Cube@66-87-67-206.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[9:36] * TMM (~hp@c97185.upc-c.chello.nl) Quit (Quit: Ex-Chat)
[9:40] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[9:57] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[9:58] * leseb (~leseb@185.21.172.77) has joined #ceph
[10:02] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[10:09] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:10] * Cube (~Cube@66-87-67-206.pools.spcsdns.net) has joined #ceph
[10:18] * allsystemsarego (~allsystem@5-12-37-194.residential.rdsnet.ro) has joined #ceph
[10:38] * LeaChim (~LeaChim@host86-159-235-225.range86-159.btcentralplus.com) has joined #ceph
[10:38] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[10:39] * jtangwk (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[10:40] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:57] * Cube (~Cube@66-87-67-206.pools.spcsdns.net) Quit (Quit: Leaving.)
[10:58] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[11:09] * analbeard (~shw@host31-53-108-38.range31-53.btcentralplus.com) has joined #ceph
[11:11] * vincenzo (~vincenzo@160.85.122.37) has joined #ceph
[11:12] <vincenzo> I was looking for a defintion of "bucket" within Ceph (object storage)
[11:13] <vincenzo> how it relates to the concept of placement group
[11:13] <vincenzo> if a bucket is contained or a container of a PG
[11:13] <vincenzo> if it is the same thing
[11:13] <vincenzo> or if it is not related at all
[11:16] * thb (~me@2a02:2028:165:a300:6267:20ff:fec9:4e40) has joined #ceph
[11:16] * thb is now known as Guest3909
[11:18] * analbeard (~shw@host31-53-108-38.range31-53.btcentralplus.com) has left #ceph
[11:20] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:20] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:20] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:23] <fghaas> buckets are in radosgw only
[11:23] <fghaas> and they're exactly the same as an s3 bucket or a swift container
[11:24] * isodude_ (~isodude@kungsbacka.oderland.com) Quit (Ping timeout: 480 seconds)
[11:25] <dwm> I note ceph-deploy 1.4 has landed; is there a changelog somewhere?
[11:25] * isodude_ (~isodude@kungsbacka.oderland.com) has joined #ceph
[11:26] * Guest3909 is now known as thb
[11:27] * Cube (~Cube@66-87-67-206.pools.spcsdns.net) has joined #ceph
[11:28] * glambert (~glambert@37.157.50.80) has joined #ceph
[11:30] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[11:33] <vincenzo> so when a bucket gets to the storage cluster, it will be stored inside some placement group?
[11:34] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[11:34] <jcsp> vincenzo: there's no direct mapping between RGW buckets and placement groups, individual RGW objects are mapped to multiple RADOS objects, and each RADOS object is assigned to a placement group.
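    A rough way to see that chain on a live cluster, assuming the default rgw data pool name of that era (.rgw.buckets) — substitute your own pool and an object name taken from the listing:

        # list a few of the RADOS objects backing rgw bucket data
        rados -p .rgw.buckets ls | head
        # show the placement group and OSDs a given RADOS object maps to
        ceph osd map .rgw.buckets <object-name>
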
[11:35] * Cube (~Cube@66-87-67-206.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[11:36] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) has joined #ceph
[11:38] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[11:39] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[11:43] * isodude_ (~isodude@kungsbacka.oderland.com) Quit (Ping timeout: 480 seconds)
[11:48] * ChrisNBlum1 (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[11:48] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Remote host closed the connection)
[11:52] * isodude_ (~isodude@kungsbacka.oderland.com) has joined #ceph
[11:55] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[11:56] * analbeard1 (~shw@host31-53-108-38.range31-53.btcentralplus.com) has joined #ceph
[12:03] * baffle_ (baffle@jump.stenstad.net) Quit (Read error: Operation timed out)
[12:03] * hybrid512 (~walid@195.200.167.70) Quit (Ping timeout: 480 seconds)
[12:03] * allsystemsarego (~allsystem@50c25c2.test.dnsbl.oftc.net) Quit (Read error: Operation timed out)
[12:04] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[12:04] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[12:04] * fdmanana (~fdmanana@bl13-154-103.dsl.telepac.pt) has joined #ceph
[12:05] * baffle (baffle@jump.stenstad.net) has joined #ceph
[12:08] * lofejndif (~lsqavnbok@77.109.138.42) has joined #ceph
[12:16] * allsystemsarego (~allsystem@5-12-37-194.residential.rdsnet.ro) has joined #ceph
[12:21] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[12:21] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit ()
[12:29] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[12:31] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[12:43] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[12:45] * classicsnail (~David@2600:3c01::f03c:91ff:fe96:d3c0) has joined #ceph
[12:49] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[12:49] * ChrisNBlum1 (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Read error: Connection reset by peer)
[12:52] <alfredodeza> dwm: the changelog is in the docs at ceph.com/ceph-deploy/docs
[12:52] <alfredodeza> I am getting the announcement out in a few minutes
[12:57] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[12:57] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[12:59] <glambert> i see 0.78 release notes.. is it out? :)
[12:59] <alfredodeza> what release notes?
[13:01] <alfredodeza> dwm: http://ceph.com/ceph-deploy/docs/changelog.html
[13:04] <classicsnail> hi all
[13:04] <classicsnail> bit of a question I feel dumb for asking, but I have a number of 72 disk chassis, where the disks are on 2 disk hot swap trays
[13:05] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[13:05] <classicsnail> in experiments, ceph works pretty solidly when first built, but if I reboot things, have a crash or otherwise, recovery can be really problematic
[13:05] <classicsnail> am I better halving the osd count to 36 or less, and setting up the hot swap trays as raid 0 (given I'll be taking two disks out to replace one anyway)?
[13:06] <classicsnail> I read in the docs that many disks in the one chassis isn't the best way to go
[13:06] <dwm> alfredodeza: Yup, found it, thanks. :-)
[13:08] <alfredodeza> dwm: the new big thing is ceph-deploy configurations
[13:18] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[13:23] * fdmanana (~fdmanana@bl13-154-103.dsl.telepac.pt) Quit (Quit: Leaving)
[13:25] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: Leaving.)
[13:29] * analbeard1 (~shw@host31-53-108-38.range31-53.btcentralplus.com) Quit (Quit: Leaving.)
[13:33] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Ping timeout: 480 seconds)
[13:35] * kiwnix (~kiwnix@00011f91.user.oftc.net) has joined #ceph
[13:42] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[13:48] * garphy is now known as garphy`aw
[13:49] * shang (~ShangWu@36-228-117-30.dynamic-ip.hinet.net) has joined #ceph
[13:50] <glambert> alfredodeza, http://ceph.com/docs/master/release-notes/#v0-78
[13:57] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[13:57] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[13:57] * markbby (~Adium@168.94.245.3) has joined #ceph
[13:58] * kiwnix (~kiwnix@00011f91.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:58] * kiwnix (~kiwnix@00011f91.user.oftc.net) has joined #ceph
[13:58] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[14:01] * markednmbr1 (~Mark-OVS@109.239.90.187) has joined #ceph
[14:01] <markednmbr1> hi all - I notice there are some ceph training courses in mainland europe and US - is anyone planning on doing any courses in the uk?
[14:03] * boris_ (~boris@router14.mail.bg) has joined #ceph
[14:03] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[14:04] <boris_> i used ceph-monstore-tool to extract the osd map from which i extracted the crush map and edited it
[14:04] <boris_> compiled the crush map and imported it into the osd map
[14:05] <boris_> i can't figure how to import the osd map into the monitor
[14:05] <boris_> ceph-monstore-tool --command setosdmap is not an existing command
[14:06] <boris_> --command help gives no info, --help does not explain neither
[14:07] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[14:07] <boris_> i need help importing the map back to the monitor
[14:11] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:13] * shang (~ShangWu@36-228-117-30.dynamic-ip.hinet.net) Quit (Read error: Connection reset by peer)
[14:15] * fedgoat (~fedgoat@cpe-24-28-22-21.austin.res.rr.com) has joined #ceph
[14:15] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[14:18] <fedgoat> whats the best way to deal with a full OSD? why am i not able to just purge objects or buckets?
[14:18] <fghaas> add more OSDs
[14:19] <fghaas> or if it's just a single one, mark it out
[14:19] <fghaas> but that will of course increase the fill ratio of all other OSDs
[14:19] <fedgoat> yea i gathered that, this is a POC environment so not looking to add more OSD's at the moment
[14:19] <fedgoat> yea i dont want to do that then it will fill up the other osds
[14:22] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[14:24] * gaveen (~gaveen@220.247.234.28) Quit (Remote host closed the connection)
[14:25] <fghaas> your osds are so full that your cluster refuses to perform on "rados -p <pool> rm <object>"?
[14:25] <fedgoat> i have one full and 5 more near full i cant even list the buckets
[14:26] * kiwnix (~kiwnix@00011f91.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:26] <fghaas> yeah that's probably because radosgw is trying to write its access logs
[14:26] * kiwnix (~kiwnix@00011f91.user.oftc.net) has joined #ceph
[14:27] <fghaas> iow, what you think is just a read operation is likely causing some writes, which will duly fail
[14:27] <fghaas> hence why I asked whether you're failing even at the rados level
[14:28] <fedgoat> i havent tried to remove any specific objects with rados...im not even sure how i can get a list of those objects or their names
[14:28] <fedgoat> or can i use a wildcard there
[14:28] * kiwnix (~kiwnix@00011f91.user.oftc.net) Quit ()
[14:28] <fedgoat> radosgw-admin bucket list just hangs
[14:30] <fghaas> if you don't want to touch your cluster with rados and you're unable to even temporarily add another osd, then I guess with just radosgw tools you would be out of luck
[14:30] <fedgoat> i dont mind touching it with rados, im just trying to figure out what i need to do
[14:30] <fedgoat> im attempting to get a list of the objects from the pool now
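    For reference, a sketch of the rados-level commands being discussed, with the default rgw data pool name assumed; note that on a cluster already flagged full, even deletes can block, as seen a little further on:

        rados -p .rgw.buckets ls | head      # list objects in the rgw data pool
        rados -p .rgw.buckets rm <object>    # remove a single object
        ceph health detail                   # reports which osds are full / near full
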
[14:31] <fedgoat> the only thing i can really do is delete some of the paging data from the full osd's directory I think, add another OSD, or maybe try to rebalance with crush mapping
[14:32] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[14:33] * dmsimard (~Adium@ap03.wireless.co.mtl.iweb.com) has joined #ceph
[14:33] <fghaas> never ever ever poke around in osd filestores.
[14:34] <fghaas> that's an utterly terrible idea.
[14:34] <fedgoat> maybe i can remove the pool, and just recreate it
[14:35] <fghaas> that you can do, but good luck for doing that with rgw pools at random
[14:36] <fghaas> of course, if this is just a test environment and you can afford to zap your data, you might as well just reinitialize your whole cluster from scratch
[14:38] <fedgoat> i have another storage node i can throw into the cluster i wanted to avoid doing that, but i guess ill just add a few OSD's then nuke the data
[14:43] <fedgoat> yea i cant even remove an object with rados, FULL, paused hangs
[14:52] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[14:59] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:18] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Remote host closed the connection)
[15:18] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[15:20] * thomnico (~thomnico@2001:920:7000:102:5067:9c8b:71ec:a5) has joined #ceph
[15:20] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) has joined #ceph
[15:21] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) Quit ()
[15:22] * vincenzo (~vincenzo@160.85.122.37) Quit (Remote host closed the connection)
[15:23] * analbeard (~shw@host86-155-192-176.range86-155.btcentralplus.com) has joined #ceph
[15:23] * rmahbub (~rmahbub@128.224.252.2) has joined #ceph
[15:24] * rmahbub (~rmahbub@128.224.252.2) Quit ()
[15:25] * rahat (~rmahbub@128.224.252.2) Quit (Remote host closed the connection)
[15:26] * madkiss (~madkiss@212.150.252.55) Quit (Quit: Leaving.)
[15:26] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[15:27] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) has joined #ceph
[15:29] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Remote host closed the connection)
[15:30] * jtangwk1 (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[15:31] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:31] * thomnico_ (~thomnico@2001:920:7000:101:5067:9c8b:71ec:a5) has joined #ceph
[15:34] * jtangwk (~Adium@gateway.tchpc.tcd.ie) Quit (Ping timeout: 480 seconds)
[15:34] * thomnico (~thomnico@2001:920:7000:102:5067:9c8b:71ec:a5) Quit (Ping timeout: 480 seconds)
[15:38] <boris_> how can i import the osdmap i extracted with ceph-monstore-tool back to the monitor?
[15:38] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[15:39] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:39] * allsystemsarego (~allsystem@50c25c2.test.dnsbl.oftc.net) Quit (Quit: Leaving)
[15:39] <glambert> markednmbr1, I wish they would!
[15:41] <markednmbr1> yes, ceph training course in London please -- I would be there!
[15:42] <markednmbr1> Would be good to do CEPH100 and CEPH110 as listed here http://www.inktank.com/ceph-training/
[15:43] <saturnine> Anyone know what the most likely bottleneck for sequential reads on RBD would be?
[15:44] <saturnine> 2 SSDs on 2 nodes with a 1Gig link to KVM host
[15:46] <mikedawson> saturnine: what block size are you testing and what is the throughput you are seeing?
[15:47] <glambert> markednmbr1, I would prefer Manchester :(
[15:47] <markednmbr1> nahh cmon you can have a trip to london :)
[15:53] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[15:53] <saturnine> mikedawson: I've tried 4k-1M block sizes.
[15:53] <alphe> hello all
[15:53] <mikedawson> saturnine: what throughput can you achieve?
[15:53] <alphe> I have a warning 1 near full OSD how do I force a rebalance ?
[15:53] <saturnine> Basic setup is 2 ceph nodes w/ 1 SSD in each for the RBD pool, XFS, KVM host with VM connected to the cluster over 1G
[15:53] * fdmanana (~fdmanana@bl13-154-103.dsl.telepac.pt) has joined #ceph
[15:54] <mikedawson> alphe: ceph osd reweight-by-utilization <threshold> http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
[15:55] <alphe> mikedawson I knew that command, but won't that fill up the emptier osds faster, so instead of having 1 near full osd I will have 3 near full OSDs ?
[15:57] <saturnine> mikedawson: I'm seeing ~75MB/s for seq writes, ~11MB/s for reads (seq and rand)
[15:57] <mikedawson> alphe: that will force PGs to migrate from fuller osds to less full osds
[15:57] <alphe> ok
[15:57] * dmsimard1 (~Adium@108.163.152.2) has joined #ceph
[15:57] <mikedawson> alphe: and may cause significant load on the cluster during the migration
[15:57] <alphe> as for the threshold, what value should I put ?
[15:57] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:58] <saturnine> I compared it to a vm running on a local LVM volume which was getting ~100MB/s writes & ~197MB/s reads at 4k
[15:59] <mikedawson> alphe: it defaults to 120% of average utilization if you don't specify otherwise
[15:59] * dmsimard (~Adium@ap03.wireless.co.mtl.iweb.com) Quit (Read error: Connection reset by peer)
[15:59] <mikedawson> saturnine: what is your replication factor in this pool?
[15:59] <alphe> yes and I don't understand why there is such a default ... its meaning seems absurd
[16:00] <alphe> mikedawson I know the laconic docs ... don't worry, but a single sentence or two isn't a full enough explanation for the reader to understand the concept
[16:01] <saturnine> mikedawson: min_size = 1, size = 1
[16:01] <saturnine> I've tried with 2 & 3 as well, not much if any difference.
[16:01] <alphe> for instance I need to pass a use ratio (threshold) to that automated reweight, but how do I determine what the use ratio should be ?
[16:01] <mikedawson> saturnine: that's really bad performance then
[16:01] <saturnine> mikedawson: Mind if I send a PM?
[16:02] <mikedawson> saturnine: sure
[16:02] <saturnine> I can give you a more detailed overview, so you'll know if I'm doing anything retarded. :)
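    One hedged way to narrow down whether the ~11MB/s reads are a cluster-side or a client-side limit is to benchmark at the RADOS layer, bypassing the VM and RBD entirely (pool name and runtimes here are arbitrary):

        # write benchmark objects and keep them around for the read pass
        rados bench -p rbd 30 write --no-cleanup
        # sequential reads of those objects, with no VM or RBD layer involved
        rados bench -p rbd 30 seq
        # afterwards, remove the leftover benchmark_data* objects, e.g.:
        # rados -p rbd ls | grep '^benchmark_data' | xargs -n 1 rados -p rbd rm
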
[16:03] * thomnico_ (~thomnico@2001:920:7000:101:5067:9c8b:71ec:a5) Quit (Ping timeout: 480 seconds)
[16:03] <alphe> is there a way to know which osd is almost full and how full it is
[16:04] <alphe> other than manually doing a df -h on each node
[16:04] <alphe> a ceph df like command ?
[16:04] <mikedawson> alphe: you may find something valuable in reading http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/15042 to better understand reweight-by-utilization
[16:04] * garphy`aw is now known as garphy
[16:04] * bandrus (~Adium@adsl-75-5-246-77.dsl.scrm01.sbcglobal.net) has joined #ceph
[16:05] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) has joined #ceph
[16:06] <alphe> mikedawson hum but the threshold is the minimum utilization we want to set right ?
[16:07] <alphe> but utilization here stands for disk space use ?
[16:07] * bitblt (~don@128-107-239-233.cisco.com) has joined #ceph
[16:08] * b0e (~aledermue@juniper1.netways.de) Quit (Remote host closed the connection)
[16:08] <mikedawson> alphe: if you set 110, it will re-weight any osd with 10% more utilization (percent of available space used) than the mean utilization
[16:09] <mikedawson> alphe: so if your mean utilization is 50% and a few osds exceed 55%, they would be reweighted to take less PGs
[16:09] <alphe> ok .. but sorry, I still have a problem grasping what it means...
[16:09] <alphe> lets be crude and concrete ... I have osds using their disks between 70% and 86%, how do I rescale them ?
[16:11] <alphe> ok so I should reweight to the middle value of the min to max use ?
[16:11] <alphe> for instance 78% in my case ?
[16:11] <alphe> so 178 should be my threshold ?
[16:13] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[16:13] <alphe> mikedawson ?
[16:15] <alphe> haaa sorry I understood, so I should first know what my min and max percentages are
[16:16] <alphe> so actually my differential is around 17 %
[16:16] <alphe> which is normal since the differential should be about 20 %
[16:17] <alphe> now if I calculate my ideal weight of 78% this means I have to have a differential of 8%, so the threshold should be 108
[16:18] <mikedawson> alphe: calculate the mean util by averaging the utilization of all osds in your cluster (perhaps 72%), then determine which OSDs you'd like to get less utilization (perhaps anything over 80%), then divide 80%/72%≈1.11, or in this case a threshold of 111
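    Plugging in the numbers mikedawson uses, as a worked example:

        # mean utilization ~72%, reweight anything above ~80% of raw capacity:
        # 100 * 0.80 / 0.72 ~= 111
        ceph osd reweight-by-utilization 111
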
[16:18] <alphe> SUCCESSFUL reweight-by-utilization: average_util: 0.776835, overload_util: 0.854519. overloaded osds: 16 [1.000000 -> 0.905701],
[16:19] <alphe> what does it mean ? I set 110 because it's likely they arrive at 108
[16:20] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) has joined #ceph
[16:20] <mikedawson> alphe: I think you'll see PGs migrate off osd.16 and towards other osds according to your crush rules
[16:21] * garphy is now known as garphy`aw
[16:21] * boris_ (~boris@router14.mail.bg) Quit (Read error: No route to host)
[16:22] * diegows (~diegows@190.49.168.202) has joined #ceph
[16:23] <alphe> mikedawson the crush rules are the default ones ... there is no particular tuning in that area
[16:23] <alphe> ok 26 pgs revamped
[16:23] <alphe> this is not a big big thing
[16:24] <alphe> mikedawson thank you for your help I understand now what is that threshold concept
[16:24] <mikedawson> alphe: glad to help!
[16:25] <alphe> mikedawson another problem I have is to foresee what will happen with my ceph cluster once my rbd image is full
[16:25] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[16:26] <alphe> I have a ceph cluster of 38TB after the initial xfs disk-prepare round
[16:26] <alphe> then I made a rbd image of 18 TB a little less than the half in order to not get the "replication stuck"
[16:27] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[16:27] <alphe> but if my rbd image is near full what will happen to the "replica" data ?
[16:27] <mikedawson> alphe: I'd start planning to add hosts and/or osds before you get too close to full
[16:28] <alphe> yeah but I got a problem with extending a rbd image, xfs_growfs did not work well ...
[16:30] <alphe> for instance I tried to set up a 7TB rbd image with xfs, then grew it to 12 TB with no problem, then I tried to grow it to 17 TB and I got errors: xfs was seeing a 17 TB partition but the os was only seeing 12 TB
[16:30] * JeffK (~Jeff@38.99.52.10) has joined #ceph
[16:31] <alphe> mikedawson if I write 17TB to my 18TB rbd, the replicas will be 17TB too, right ?
[16:31] <alphe> if I free 5 TB now by erasing data from my rbd image
[16:32] <alphe> we agree that the replica amount will remain unchanged right ?
[16:32] <alphe> replica so = 17TB
[16:32] * boris_ (~boris@router14.mail.bg) has joined #ceph
[16:33] <alphe> if I write 4 new TB to my rbd image, will the replicas of the previously freed data be overwritten, or will they accumulate and grow to 21 TB, exceeding the disks' real capacity
[16:33] <alphe> ?
[16:35] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[16:36] <mikedawson> alphe: what is the replica count in your pool that hosts these RBD images?
[16:37] <alphe> 2
[16:37] <alphe> 1 replica for 1 data
[16:37] <alphe> is it so that 1 pg = 1 replica pg ?
[16:39] <mikedawson> alphe: ok. RBDs are thin provisioned. If you have 17TB of an 18TB rbd used in a pool with 2x replication, it will consume 34TB of actual storage.
[16:39] <alphe> pgmap v2719039: 2048 pgs, 1 pools, 14936 GB data, 473 kobjects
[16:39] <alphe> 28949 GB used, 8272 GB / 37222 GB avail
[16:40] <alphe> I use 14TB in my pool and the global use is 28 TB which is the 14TB data + 14TB replica
[16:40] <mikedawson> alphe: yes your PGs will be located like [0,5] where osd.0 is the primary and osd.5 is the replica. If you upped your replication to 3, the PGs would be located like [0,5,12] or something like that.
[16:40] <mikedawson> alphe: right
[16:40] <alphe> ok
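    As an aside, a rough way to check how much of a thin-provisioned image has actually been written, as opposed to its provisioned size (pool and image names are placeholders; rbd diff without --from-snap lists all allocated extents):

        # sum the allocated extents of the image (bytes actually written)
        rbd diff rbd/myimage | awk '{sum += $2} END {print sum " bytes allocated"}'
        # cluster-wide view; the "used" figure already counts every replica
        ceph df
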
[16:42] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[16:43] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) Quit (Quit: Leaving.)
[17:00] * rmoe (~quassel@12.164.168.117) has joined #ceph
[17:03] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[17:06] * bitblt (~don@128-107-239-233.cisco.com) Quit (Quit: Leaving)
[17:06] * bitblt (~don@128-107-239-233.cisco.com) has joined #ceph
[17:08] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[17:10] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[17:17] * garphy`aw is now known as garphy
[17:17] * thomnico (~thomnico@2001:920:7000:101:5067:9c8b:71ec:a5) has joined #ceph
[17:19] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:22] * garphy is now known as garphy`aw
[17:23] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[17:23] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[17:24] * kaizh (~kaizh@128-107-239-233.cisco.com) has joined #ceph
[17:29] * thomnico (~thomnico@2001:920:7000:101:5067:9c8b:71ec:a5) Quit (Quit: Ex-Chat)
[17:31] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:31] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Remote host closed the connection)
[17:37] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) Quit (Read error: Connection reset by peer)
[17:42] * pmatulis (~peter@64.34.151.178) has joined #ceph
[17:43] <pmatulis> no idea that ceph-deploy could not be used to add a monitor before
[17:44] <pmatulis> i'm pretty sure the docs said it could be done with a few simple commands
[17:44] * meeh (~meeh@193.150.121.66) Quit (Quit: Lost terminal)
[17:44] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[17:45] * leseb (~leseb@185.21.172.77) has joined #ceph
[17:47] * alanr (~alanr@ip72-205-7-86.dc.dc.cox.net) has joined #ceph
[17:53] * piezo (~piezo@108-88-37-13.lightspeed.iplsin.sbcglobal.net) Quit (Quit: Konversation terminated!)
[17:56] <bens> hi all
[17:56] <bens> is there a way to set the size and ruleset at the time of pool creation
[17:58] <fghaas> you can change both on the fly at any time, what difference does it make to you to set them right after creation vs *at* creation time?
[17:58] <bens> because i am having a problem with a ruleset i need to test.
[17:59] <bens> and i was asked to delete that rule from the ruleset. it is the default rule, and i want to skip it.
[17:59] <bens> when i create a pool, it takes 20+ minutes to get everything out of degraded/peering.
[18:00] <bens> i am working with my support people, but I think they are baffled, too.
[18:00] <bens> i can't find it in the docs, so I would like very much if someone might know it before i dig into the source code.
[18:01] <fghaas> I think there's a default for pool size that would then apply to all freshly created pools
[18:03] <bens> size is easier than rule.
[18:04] <bens> the rule thing is strange to me
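    For reference, a sketch of the two-step approach fghaas suggests, plus the config defaults that apply to pools created afterwards (pool name, pg counts, size and ruleset id are placeholders):

        ceph osd pool create mypool 128 128
        ceph osd pool set mypool size 3
        ceph osd pool set mypool crush_ruleset 3

        # ceph.conf defaults for pools created after this point:
        # osd pool default size = 3
        # osd pool default crush replicated ruleset = 3
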
[18:13] * gregsfortytwo1 (~Adium@38.122.20.226) has joined #ceph
[18:15] * JoeGruher (~JoeGruher@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[18:17] <JoeGruher> any recent news on when 0.78 might release? still this week?
[18:17] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[18:19] * thomnico (~thomnico@37.162.39.34) has joined #ceph
[18:22] * Cube (~Cube@66-87-67-206.pools.spcsdns.net) has joined #ceph
[18:22] * boris_ (~boris@router14.mail.bg) Quit (Ping timeout: 480 seconds)
[18:25] * diegows (~diegows@190.49.168.202) Quit (Ping timeout: 480 seconds)
[18:26] * diegows (~diegows@190.49.168.202) has joined #ceph
[18:30] * gregsfortytwo1 (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[18:33] * oblu (~o@62.109.134.112) Quit (Read error: Operation timed out)
[18:34] * rendar (~s@host220-176-dynamic.22-79-r.retail.telecomitalia.it) Quit ()
[18:34] * capri_on (~capri@212.218.127.222) has joined #ceph
[18:35] * oblu (~o@62.109.134.112) has joined #ceph
[18:37] * thomnico (~thomnico@37.162.39.34) Quit (Ping timeout: 480 seconds)
[18:40] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) has joined #ceph
[18:40] <mikedawson> JoeGruher: The latest I have seen -> http://www.spinics.net/lists/ceph-users/msg08459.html
[18:41] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[18:41] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[18:43] <JoeGruher> thx midedawson, i did see that, just wondering if it was still on track for this week :)
[18:47] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:48] * markednmbr1 (~Mark-OVS@109.239.90.187) Quit (Quit: Leaving)
[18:49] * Machske (~Bram@d5152D87C.static.telenet.be) Quit ()
[18:52] * glambert (~glambert@37.157.50.80) Quit (Read error: Operation timed out)
[18:52] * analbeard (~shw@host86-155-192-176.range86-155.btcentralplus.com) Quit (Quit: Leaving.)
[18:55] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[18:55] * oblu- (~o@62.109.134.112) has joined #ceph
[18:56] * alanr (~alanr@ip72-205-7-86.dc.dc.cox.net) has left #ceph
[18:57] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[19:05] <JeffK> Hmm, when trying to add the release key I get, "gpg: no valid OpenPGP data found."
[19:08] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[19:12] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[19:12] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[19:13] <JeffK> False alarm, DNS issue.
[19:14] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[19:16] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[19:17] * BillK (~BillK-OFT@124-148-226-72.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[19:19] * BillK (~BillK-OFT@106-69-6-106.dyn.iinet.net.au) has joined #ceph
[19:22] * lofejndif (~lsqavnbok@83TAAH016.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[19:25] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[19:26] * markbby (~Adium@168.94.245.3) has joined #ceph
[19:27] <Anticimex> sage: qemu-rbd -- just as qcow2 supports encryption, i'd like to ... have encryption on qemu-rbd. for openstack / kvm deployments, it provides data-at-rest-encryption + data-in-flight-encryption, for the network.
[19:28] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[19:28] <Anticimex> think i saw some mail on the dev list, but im not sure.
[19:28] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[19:28] <Anticimex> openstack also hasn't expanded nova's crypto key framework to cover qcow2 yet, but it's there for dm-crypt. so the infra is there
[19:29] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[19:30] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[19:33] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[19:34] * Haksoldier (~islamatta@88.234.63.112) has joined #ceph
[19:34] <Haksoldier> EUZUBİLLAHİMİNEŞŞEYTANİRRACİM BISMILLAHIRRAHMANIRRAHIM
[19:34] <Haksoldier> ALLAHU EKBERRRRR! LA İLAHE İLLALLAH MUHAMMEDEN RESULULLAH!
[19:34] <Haksoldier> I did the obligatory prayers five times a day to the nation. And I promised myself that, who (beside me) taking care not to make the five daily prayers comes ahead of time, I'll put it to heaven. Who says prayer does not show attention to me I do not have a word for it.! Prophet Muhammad (s.a.v.)
[19:34] <Haksoldier> hell if you did until the needle tip could not remove your head from prostration Prophet Muhammad pbuh
[19:34] * Haksoldier (~islamatta@88.234.63.112) has left #ceph
[19:35] <Anticimex> (wrong channel)
[19:35] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[19:36] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:36] * oblu- (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[19:37] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[19:39] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[19:41] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:41] * oblu (~o@62.109.134.112) has joined #ceph
[19:42] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:42] * sarob (~sarob@2001:4998:effd:600:24d7:db09:6eff:c469) has joined #ceph
[19:42] * kaizh (~kaizh@128-107-239-233.cisco.com) Quit (Remote host closed the connection)
[19:45] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:46] <dwm> Anticimex: They've wandered by here and pasted the same thing for the past few days..
[19:50] * Machske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[19:51] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[19:57] * kaizh (~kaizh@128-107-239-235.cisco.com) has joined #ceph
[20:00] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[20:00] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[20:00] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[20:06] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[20:07] * boris_ (~boris@78.90.142.146) has joined #ceph
[20:15] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[20:20] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[20:21] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[20:26] * BManojlovic (~steki@198.199.65.141) has joined #ceph
[20:30] * thb (~me@port-19899.pppoe.wtnet.de) has joined #ceph
[20:31] * thb is now known as Guest3952
[20:32] <alphe> mike are you here ?
[20:32] <alphe> mikedawson ?
[20:33] <alphe> after performing a ceph osd reweight-by-utilization 110 and waiting most of the day I have this: 6 active+remapped+backfill_toofull
[20:35] * boris_ (~boris@78.90.142.146) Quit (Quit: leaving)
[20:36] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:36] * boris_ (~boris@78.90.142.146) has joined #ceph
[20:44] * oblu (~o@62.109.134.112) Quit (Read error: Operation timed out)
[20:44] * oblu- (~o@62.109.134.112) has joined #ceph
[20:45] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[20:53] <alphe> 6 active+remapped+backfill_toofull what can be done ?
[20:53] <alphe> I still have one near full osd
[20:54] <alphe> and now on top I have 6 pgs stuck
[20:55] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[20:57] * mdjp (~mdjp@213.229.87.114) Quit (Ping timeout: 480 seconds)
[20:58] * mdjp (~mdjp@213.229.87.114) has joined #ceph
[20:58] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[21:02] <eightyeight> trying to run "ceph-deploy mon create-initial", and getting an "admin-socket: exception getting command descriptions: [Errno 2] No such file or directory" error
[21:02] <eightyeight> how can i move past this?
[21:03] <eightyeight> i'm using the latest ceph-deploy(1) from github.com, and i have 3 monitors setup
[21:06] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[21:06] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[21:09] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[21:14] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[21:14] * garphy`aw is now known as garphy
[21:21] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[21:22] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[21:26] * sarob (~sarob@2001:4998:effd:600:24d7:db09:6eff:c469) Quit (Remote host closed the connection)
[21:26] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:27] * boris_ is now known as bboris
[21:28] * bboris (~boris@78.90.142.146) Quit (Quit: leaving)
[21:28] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:29] * wrale (~wrale@cpe-107-9-20-3.woh.res.rr.com) has joined #ceph
[21:31] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[21:32] * garphy is now known as garphy`aw
[21:33] * garphy`aw is now known as garphy
[21:34] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[21:34] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[21:38] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) has joined #ceph
[21:40] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[21:41] <dwm> eightyeight: My suspicion is that some directories in /var/lib/ceph that ceph-deploy is expecting aren't present.
[21:42] <eightyeight> what builds those directories? i'm just about ready to look over some strace output
[21:44] * sjustwork (~sam@2607:f298:a:607:74f9:232d:3326:b73b) has joined #ceph
[21:44] <dwm> eightyeight: In my experience, package post-install scripts.
[21:44] <dwm> eightyeight: If you pulled it from github, however, they may be absent.
[21:44] <eightyeight> first i installed ceph from debian unstable, then later from ceph-deploy itself
[21:45] <eightyeight> just following the quickstart guide at: http://ceph.com/docs/master/start/quick-ceph-deploy/
[21:45] <eightyeight> which appears to just be using a ceph.com debian repo
[21:46] * JoeGruher (~JoeGruher@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[21:46] * garphy is now known as garphy`aw
[21:47] <eightyeight> # ls -F /var/lib/ceph/
[21:47] <eightyeight> bootstrap-mds/ bootstrap-osd/ mds/ mon/ osd/ tmp/
[21:47] * kaizh (~kaizh@128-107-239-235.cisco.com) Quit (Remote host closed the connection)
[21:47] <eightyeight> (on the monitor node(s))
[21:48] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[21:52] * BManojlovic (~steki@198.199.65.141) Quit (Ping timeout: 480 seconds)
[21:54] * bboris (~boris@78.90.142.146) has joined #ceph
[21:56] * mtanski (~mtanski@69.193.178.202) Quit (Ping timeout: 480 seconds)
[21:56] * b0e (~aledermue@rgnb-5d87890c.pool.mediaWays.net) has joined #ceph
[21:56] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[21:58] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[21:59] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[21:59] * ChanServ sets mode +v andreask
[22:05] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[22:06] * sarob (~sarob@2001:4998:effd:600:64fc:eb0a:b120:baa5) has joined #ceph
[22:06] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[22:06] * ChanServ sets mode +v andreask
[22:06] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[22:11] * sarob_ (~sarob@2001:4998:effd:600:c1bc:7ec8:8e0f:3a80) has joined #ceph
[22:14] * sarob (~sarob@2001:4998:effd:600:64fc:eb0a:b120:baa5) Quit (Ping timeout: 480 seconds)
[22:16] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[22:17] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[22:18] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[22:19] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit ()
[22:21] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[22:22] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[22:26] <athrift> We have a cluster that has been in production for about 6 months now, going great with QEMU-RBD. I recently tried connecting KRBD but cannot get it working, it appears to freeze client side. Server side I can see the packets arriving with TCPDUMP but cannot see any error messages in the logs. The client eventually bombs out with the message "rbd: add failed: (5) Input/output error" any pointers on where I should be looking
[22:26] <athrift> ?
[22:28] * garphy`aw is now known as garphy
[22:29] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[22:30] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[22:31] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[22:33] <gregsfortytwo> athrift: check dmesg?
[22:33] <gregsfortytwo> the cluster is probably making use of features that your kernel version doesn't support, at a guess
[22:38] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[22:38] * ChanServ sets mode +v andreask
[22:38] * oblu- (~o@62.109.134.112) Quit (Quit: ~)
[22:40] * scalability-junk (uid6422@id-6422.ealing.irccloud.com) has joined #ceph
[22:41] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[22:44] <athrift> gregsfortytwo: no ominous messages in dmesg. http://pastebin.com/LUqjcMpS
[22:45] <gregsfortytwo> not sure then; you could turn on messenger debugging on the monitor/OSDs to try and see where it's failing by tracking the kclient's messages; if you have debugfs you can turn that on (search the docs/wiki, I think it's described in there)
[22:45] <gregsfortytwo> but I don't have anything offhand
[22:45] <athrift> gregsfortytwo: Thanks will dig a bit deeper
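    A sketch of the digging gregsfortytwo suggests; the debug levels are illustrative and fairly chatty, so they should be turned back down afterwards (mon id is a placeholder):

        dmesg | tail -n 50                            # kernel rbd/libceph errors land here
        ceph tell osd.* injectargs '--debug-ms 1'     # messenger debugging on all OSDs
        ceph tell mon.a injectargs '--debug-ms 1'     # per-monitor; repeat for each mon id
        # with debugfs mounted, /sys/kernel/debug/ceph/*/osdc lists in-flight requests
        ceph tell osd.* injectargs '--debug-ms 0'     # turn it back down afterwards
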
[22:45] * oblu (~o@62.109.134.112) has joined #ceph
[22:46] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[22:47] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:51] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[22:57] * jhujhiti (~jhujhiti@00012a8b.user.oftc.net) has joined #ceph
[22:58] <jhujhiti> how can i figure out which OSDs an RBD object is stored on?
[23:00] * garphy is now known as garphy`aw
[23:02] <dmick> jhujhiti: ceph osd map
[23:03] <jhujhiti> dmick: oh. my, that was easy. not sure how i missed that. thanks.
[23:03] <dmick> know that it just calculates the placement from the object name, so any object name will work (including nonexistent ones)
[23:04] <dmick> so rados ls -p <pool> <name> will tell you for sure if you're dealing with an extant object
[23:04] <jhujhiti> that answers my next question.. so i can't move it to another pg unless i rename it
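    For an RBD workload like jhujhiti's, one way to tie a slow image back to specific PGs and OSDs; pool, image and the object prefix below are placeholders, and the rb.0.* naming applies to format-1 images:

        rbd info rbd/myimage | grep block_name_prefix    # e.g. rb.0.1234.74b0dc51
        rados -p rbd ls | grep rb.0.1234.74b0dc51 | head # the image's backing objects
        ceph osd map rbd rb.0.1234.74b0dc51.000000000000 # PG and acting OSDs for one object
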
[23:04] <loicd> houkouonchi-work: ping ?
[23:05] <houkouonchi-work> loicd: sup?
[23:05] <dmick> jhujhiti: yeah, they're placed where they're placed
[23:05] <loicd> https://github.com/dachary/autobuild-ceph/commit/fd831c5b83d080daf198d8d5e99b146fd0ccdbab got merged
[23:05] <loicd> but
[23:05] <jhujhiti> i'm trying to troubleshoot really bad write latency to only some objects.. i think it's a hardware issue
[23:05] <loicd> http://gitbuilder.sepia.ceph.com/gitbuilder-centos6-amd64/log.cgi?log=f50d208cc59d46dc0f0f45363b23bd3b052af580
[23:05] <dmick> you don't really have control over where
[23:05] <loicd> does not show it houkouonchi-work
[23:05] * kaizh (~kaizh@128-107-239-235.cisco.com) has joined #ceph
[23:05] <loicd> and it's probably the reason why it fails (git submodules are tricky to update...)
[23:06] <loicd> houkouonchi-work: how is it synchronized ? if it's something I can do, I'd be happy to ;-)
[23:06] <houkouonchi-work> loicd: yeah actually i need to re-base mine possibly and push my changes to github (which I hadn't done yet). Also its probably because after changes were made to autobuild-ceph it was not re-run on the machines
[23:06] <houkouonchi-work> it's manually done
[23:07] <houkouonchi-work> it pulls a local git repo on your machine and not one from a URL or anything (it's weird)
[23:08] <houkouonchi-work> yeah my push failed... gonna need to rebase it with your changes
[23:08] <loicd> ah
[23:09] <loicd> I'm sorry I caused so much trouble with these two modules
[23:10] <loicd> sage pinged me about an hour ago and I think it is because builds were broken after adding these two modules
[23:10] <houkouonchi-work> its no biggy. i hadn't pushed my changes to the non deb gitbuilders anyway so I was going to do this today anyway
[23:10] <loicd> cool
[23:11] <fedgoat> Hi, Im trying to remove a bucket, but it's saying "Error: could not remove bucket bucketname". This was after my OSD's became full... I added 4 new OSD's and let the cluster rebalance most of the day, 3 osd's are reporting near full.. but i need to purge the data.. any ideas?
[23:11] * dmsimard1 (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[23:12] <loicd> http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-deb-precise-amd64-notcmalloc/log.cgi?log=d4d77d71d0f5e1ef025221bd9ab751da9d44c6bb also misses the rm -fr src/erasure-code/jerasure/jerasure patch
[23:13] * Machske (~Bram@d5152D87C.static.telenet.be) Quit ()
[23:14] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[23:15] <houkouonchi-work> they probably are all missing it
[23:16] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[23:16] <loicd> it's strange that some fail to update/init the module and others don't
[23:18] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[23:19] * JoeGruher (~JoeGruher@134.134.139.76) has joined #ceph
[23:20] <houkouonchi-work> loicd: maybe it was passing on the first time it was used on one of them?
[23:20] <loicd> houkouonchi-work: maybe. There has been a lot of add / remove / change links on these modules in the past 72 hours
[23:23] <houkouonchi-work> loicd: i got some more of my autobuild-ceph changes comitted and pushed
[23:23] <houkouonchi-work> so i will run it on all the gitbuilders
[23:24] <loicd> great
[23:25] <loicd> I'll take care of the false errors at http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-tarball-precise-amd64-basic/log.cgi?log=d4d77d71d0f5e1ef025221bd9ab751da9d44c6bb
[23:25] <loicd> the !!! that I used on some new tests triggers a regular expression that incorrectly interprets it as an error :-)
[23:26] <dmick> loicd: the excitable boy :)
[23:26] <loicd> dmick: gitbuilder is quite sensitive to a number of exotic regexps ;-)
[23:27] * lluis (~oftc-webi@pat.hitachigst.com) has joined #ceph
[23:28] <lluis> in the config file I've seen "[osd.0]\n host=hostname". Is the "host" variable used anywhere in the OSD?
[23:31] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[23:39] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[23:43] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[23:45] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[23:49] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[23:49] * TMM (~hp@c97185.upc-c.chello.nl) has joined #ceph
[23:49] * Cube (~Cube@66-87-67-206.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[23:49] * Cube (~Cube@66-87-67-34.pools.spcsdns.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.