#ceph IRC Log

IRC Log for 2013-09-09

Timestamps are in GMT/BST.

[0:12] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[0:13] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[0:13] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[0:14] * danieagle (~Daniel@177.99.132.92) has joined #ceph
[0:15] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[0:18] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[0:18] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[0:24] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:24] * zhyan_ (~zhyan@101.83.116.57) has joined #ceph
[0:31] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[0:32] * zhyan_ (~zhyan@101.83.116.57) Quit (Ping timeout: 480 seconds)
[0:36] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[0:48] * marrusl (~mark@207.96.227.9) has joined #ceph
[0:48] * sprachgenerator (~sprachgen@48.sub-70-208-146.myvzw.com) has joined #ceph
[0:51] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) Quit (Quit: ZNC - http://znc.sourceforge.net)
[0:52] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) has joined #ceph
[0:52] * adam1 (~adam@46-65-111-12.zone16.bethere.co.uk) has joined #ceph
[0:53] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[0:53] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[0:58] * adam4 (~adam@46-65-111-12.zone16.bethere.co.uk) Quit (Ping timeout: 480 seconds)
[1:00] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:03] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[1:09] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:10] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[1:12] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[1:12] * malcolm (~malcolm@silico24.lnk.telstra.net) has joined #ceph
[1:16] * sprachgenerator (~sprachgen@48.sub-70-208-146.myvzw.com) Quit (Quit: sprachgenerator)
[1:17] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[1:22] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[1:25] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:26] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:31] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:31] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:32] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[1:34] * sprachgenerator (~sprachgen@54.sub-70-208-144.myvzw.com) has joined #ceph
[1:36] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[1:37] * LeaChim (~LeaChim@05407724.skybroadband.com) Quit (Ping timeout: 480 seconds)
[1:37] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[1:41] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:41] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:44] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:46] * ross_ (~ross@60.208.111.209) has joined #ceph
[1:49] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:53] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[2:07] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[2:11] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[2:12] * grepory (~Adium@0.sub-70-192-198.myvzw.com) has joined #ceph
[2:13] * freedomhui (~freedomhu@117.79.232.247) has joined #ceph
[2:19] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[2:23] * danieagle (~Daniel@177.99.132.92) Quit (Quit: inte+ e Obrigado Por tudo mesmo! :-D)
[2:29] * TiCPU (~jeromepou@190-130.cgocable.ca) Quit (Ping timeout: 480 seconds)
[2:40] * cofol1986 (~xwrj@110.90.119.113) Quit (Read error: Connection reset by peer)
[2:48] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[2:48] * yy-nm (~Thunderbi@218.74.35.201) has joined #ceph
[2:57] * freedomhui (~freedomhu@117.79.232.247) Quit (Quit: Leaving...)
[2:59] * sprachgenerator (~sprachgen@54.sub-70-208-144.myvzw.com) Quit (Quit: sprachgenerator)
[3:05] * grepory (~Adium@0.sub-70-192-198.myvzw.com) Quit (Quit: Leaving.)
[3:06] * yy-nm1 (~Thunderbi@218.74.35.201) has joined #ceph
[3:07] * yy-nm (~Thunderbi@218.74.35.201) Quit (Read error: Connection reset by peer)
[3:08] * freedomhui (~freedomhu@117.79.232.247) has joined #ceph
[3:10] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:10] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[3:14] * nerdtron (~kenneth@202.60.8.252) has joined #ceph
[3:17] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[3:21] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[3:26] * diegows (~diegows@190.190.11.42) has joined #ceph
[3:28] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:33] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:33] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:34] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[3:41] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[3:45] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[3:50] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[4:03] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[4:07] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[4:08] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[4:10] * haomaiwang (~haomaiwan@124.161.72.27) has joined #ceph
[4:16] * freedomhui (~freedomhu@117.79.232.247) Quit (Quit: Leaving...)
[4:20] * freedomhui (~freedomhu@117.79.232.247) has joined #ceph
[4:31] * glzhao (~glzhao@117.79.232.216) has joined #ceph
[4:31] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[4:32] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[4:32] * glzhao (~glzhao@117.79.232.216) Quit ()
[4:38] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:40] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[4:40] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[4:41] * tserong (~tserong@58-6-101-181.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[4:42] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit ()
[4:43] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:45] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:47] * freedomhui (~freedomhu@117.79.232.247) Quit (Quit: Leaving...)
[4:50] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[4:58] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:59] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:59] * julian (~julianwa@125.70.133.187) has joined #ceph
[5:05] * fireD_ (~fireD@93-139-154-230.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD (~fireD@93-142-206-168.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:09] * tserong (~tserong@124-168-226-254.dyn.iinet.net.au) has joined #ceph
[5:34] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[5:50] * tserong_ (~tserong@203-57-208-89.dyn.iinet.net.au) has joined #ceph
[5:52] * tserong (~tserong@124-168-226-254.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[6:20] * BillK (~BillK-OFT@124-171-168-171.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[6:21] * BillK (~BillK-OFT@58-7-135-10.dyn.iinet.net.au) has joined #ceph
[6:30] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[6:31] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[6:35] * athrift_ (~nz_monkey@203.86.205.13) has joined #ceph
[6:36] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[6:36] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[6:37] * BillK (~BillK-OFT@58-7-135-10.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[6:38] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[6:38] * athrift (~nz_monkey@203.86.205.13) Quit (Ping timeout: 480 seconds)
[6:38] * AfC (~andrew@2407:7800:200:1011:6e88:14ff:fe33:2a9c) has joined #ceph
[6:38] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[6:41] * BillK (~BillK-OFT@124-169-37-58.dyn.iinet.net.au) has joined #ceph
[6:49] * athrift (~nz_monkey@203.86.205.13) has joined #ceph
[6:49] * BillK (~BillK-OFT@124-169-37-58.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[6:50] * yy-nm1 (~Thunderbi@218.74.35.201) Quit (Quit: yy-nm1)
[6:51] * BillK (~BillK-OFT@203-59-42-161.dyn.iinet.net.au) has joined #ceph
[6:52] * athrift_ (~nz_monkey@203.86.205.13) Quit (Ping timeout: 480 seconds)
[6:58] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) Quit (Read error: No route to host)
[6:58] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[6:59] * BillK (~BillK-OFT@203-59-42-161.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[7:00] * BillK (~BillK-OFT@124.148.203.138) has joined #ceph
[7:05] * jksM (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[7:05] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (Read error: Connection reset by peer)
[7:10] * madkiss (~madkiss@a6264-0299838063.pck.nerim.net) Quit (Quit: Leaving.)
[7:17] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:17] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:22] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[7:51] * KindTwo (~KindOne@198.14.204.43) has joined #ceph
[7:52] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:52] * KindTwo is now known as KindOne
[7:52] * AfC (~andrew@2407:7800:200:1011:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[8:01] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[8:09] * AfC (~andrew@2407:7800:200:1011:6e88:14ff:fe33:2a9c) has joined #ceph
[8:09] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[8:12] * AfC (~andrew@2407:7800:200:1011:6e88:14ff:fe33:2a9c) Quit ()
[8:13] * AfC (~andrew@2407:7800:200:1011:6e88:14ff:fe33:2a9c) has joined #ceph
[8:15] * AfC (~andrew@2407:7800:200:1011:6e88:14ff:fe33:2a9c) Quit ()
[8:23] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[8:24] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[8:24] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:25] * Vjarjadian (~IceChat77@176.254.37.210) Quit (Quit: Relax, its only ONES and ZEROS!)
[8:25] * tobru (~quassel@2a02:41a:3999::94) Quit (Remote host closed the connection)
[8:25] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[8:33] * KindTwo (~KindOne@50.96.226.114) has joined #ceph
[8:34] * saumya (uid12057@id-12057.ealing.irccloud.com) has joined #ceph
[8:34] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:34] * KindTwo is now known as KindOne
[8:35] * Karcaw_ (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[8:35] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) Quit (Read error: Connection reset by peer)
[8:35] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[8:38] * PITon (~pavel@195.182.195.107) Quit (Ping timeout: 480 seconds)
[8:38] <saumya> hi I am trying to install ceph on linuxmint (version 12.0), distribution - wheezy/sid. I have used the git clone command to install ceph-deploy. When i am running the command - 'ceph-deploy install pankhuri@10.1.97.83' , I am getting the error - [ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: linuxmint. Could you help?
[8:38] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[8:39] * PITon (~pavel@178-136-128-118.static.vega-ua.net) has joined #ceph
[8:41] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[8:41] * isaac_ (~isaac@mike-alien.esc.auckland.ac.nz) has joined #ceph
[8:43] <nerdtron> saumya, recommended platform for ceph is ubuntu 12.04
[8:46] * AfC (~andrew@2407:7800:200:1011:6e88:14ff:fe33:2a9c) has joined #ceph
[8:47] <saumya> i came to know about that when i searched for my unsupportedPlatform error. But I learned from http://ceph.com/docs/next/install/os-recommendations/
[8:47] <saumya> that distro debian with codename Wheezy supports ceph.
[8:50] <saumya> and since i am getting wheezy/sid as an output from the command - cat /etc/debian_version , I wonder if there is any chance that this problem could be resolved in linuxmint.
[8:53] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[8:54] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[8:55] <nerdtron> saumya, even though Ubuntu and Linux Mint are "based" from debian, packages for Debian cannot be directly installed in Ubuntu/Linux Mint.
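
The UnsupportedPlatform error above is ceph-deploy rejecting the host based on the detected distribution name ("linuxmint"), not on the Debian base it is derived from, which is why the wheezy/sid string in /etc/debian_version never comes into play. A rough way to see what the detection layer is looking at (ceph-deploy's own detection code may differ by version):

    # prints something like ('LinuxMint', '12', ...) on Mint rather than 'debian'
    python -c "import platform; print(platform.linux_distribution())"
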
[8:56] * hugo (~hugo@50-197-147-249-static.hfc.comcastbusiness.net) has joined #ceph
[8:57] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[8:57] <hugo> " radosgw-admin temp remove --date=2013-09-08 " is this a proper command for CultFish ?
[9:01] <hugo> As per the description in the Ceph docs, the object is only marked as removed when a delete operation is issued via the swift API
[9:02] <hugo> so I have to run the temp remove to purge all the marked objects
[9:02] <hugo> But how to ????
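
The command hugo quotes at 8:57 is the Cuttlefish-era form for purging objects that were only marked removed; later releases handle this through radosgw's garbage collector instead. A hedged sketch of both, to be checked against the release actually installed:

    radosgw-admin temp remove --date=2013-09-08    # older releases: purge removed/temp objects older than the given date
    radosgw-admin gc list --include-all            # newer releases: list objects pending garbage collection
    radosgw-admin gc process                       # newer releases: run the collector immediately
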
[9:02] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[9:03] * PITon (~pavel@178-136-128-118.static.vega-ua.net) Quit (Ping timeout: 480 seconds)
[9:03] <saumya> so what exactly do i need to do for the ceph installation, as i have a deadline for the ceph project two days from now?
[9:03] * PITon (~pavel@195.182.195.107) has joined #ceph
[9:06] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:06] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[9:08] * sleinen (~Adium@2001:620:0:46:7c30:5ba1:eacb:cfb7) has joined #ceph
[9:10] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:10] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[9:10] * sleinen1 (~Adium@2001:620:0:46:14c5:f594:8f4f:56c9) has joined #ceph
[9:10] * hugo (~hugo@50-197-147-249-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[9:12] <nerdtron> saumya, install ceph on ubuntu 12.04 server and follow the pre-flight checklist then storage quick start...you can finish it in a day
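
For reference, the storage quick start nerdtron points to boils down to a few ceph-deploy commands run from the admin node; this is only a sketch, and the hostnames (node1..node3) and OSD directories are placeholders, not taken from the conversation:

    ceph-deploy new node1                        # write ceph.conf and the initial monitor map
    ceph-deploy install node1 node2 node3        # install the ceph packages on each host
    ceph-deploy mon create node1                 # start the first monitor
    ceph-deploy gatherkeys node1                 # collect the bootstrap keyrings
    ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1   # directories must already exist
    ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
    ceph health                                  # should eventually report HEALTH_OK
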
[9:13] * hugo (~hugo@220-135-5-231.HINET-IP.hinet.net) has joined #ceph
[9:14] * hugo_ (~hugo@50-197-147-249-static.hfc.comcastbusiness.net) has joined #ceph
[9:16] * mnash_ (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[9:16] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (Read error: Connection reset by peer)
[9:16] * mnash_ is now known as mnash
[9:16] * sleinen (~Adium@2001:620:0:46:7c30:5ba1:eacb:cfb7) Quit (Ping timeout: 480 seconds)
[9:16] * malcolm (~malcolm@silico24.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[9:17] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[9:17] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Read error: Operation timed out)
[9:18] * hugo (~hugo@220-135-5-231.HINET-IP.hinet.net) Quit (Read error: Operation timed out)
[9:19] * Bada (~Bada@194.88.193.33) has joined #ceph
[9:20] <saumya> nerdtron, so do i need to install ubuntu as a new platform or can it be done virtually too?
[9:21] <nerdtron> can be virtual depending on how you want to use it.. i haven't tried it virtual yet... also be aware that virtual can introduce time delay
[9:21] <nerdtron> ceph needs a latency below 50ms
[9:21] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[9:22] <nerdtron> If you have at least 2 computers, (3 would be good) you can try it on full installations of ubuntu
[9:23] <saumya> i don't have 2 computers.
[9:24] <nerdtron> saumya, you only have 1 computer? how are you supposed to use ceph? 2 virtual machines?
[9:25] <saumya> actually, the project is to create virtual machines on other computers too..but those computers aren't mine.
[9:25] * yy-nm (~Thunderbi@218.74.35.201) has joined #ceph
[9:26] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[9:27] * AfC (~andrew@2407:7800:200:1011:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[9:28] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[9:29] * hugo (~hugo@220-135-5-231.HINET-IP.hinet.net) has joined #ceph
[9:31] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) has joined #ceph
[9:32] <sherry> how to run osd map that I just created in developer mode?
[9:33] <sherry> I added into tree but it is down
[9:33] <sherry> 1 1 osd.1 down 0
[9:34] * mattt (~mattt@92.52.76.140) has joined #ceph
[9:36] * hugo_ (~hugo@50-197-147-249-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[9:43] * Bada (~Bada@194.88.193.33) Quit (Ping timeout: 480 seconds)
[9:45] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[9:47] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[9:47] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit ()
[9:50] * Bada (~Bada@194.88.193.33) has joined #ceph
[9:50] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[9:50] * ChanServ sets mode +v andreask
[9:51] <sherry> how to up osd map that I just created in developer mode?
[9:52] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Quit: wogri_risc)
[9:53] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[9:53] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit ()
[9:53] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[9:53] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit ()
[9:54] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[9:57] <nerdtron> ceph-deploy osd activate
[10:00] <hugo> Is 100 concurrent connections the best RadosGW can do ?
[10:03] * Bada (~Bada@194.88.193.33) Quit (Ping timeout: 480 seconds)
[10:04] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has left #ceph
[10:04] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Read error: Operation timed out)
[10:05] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[10:07] <sherry> nerdtron in developer mode i mean
[10:07] <vipr> hi
[10:07] <vipr> Does anyone here have experience with connecting Ceph to cloudstack?
[10:07] <vipr> or is there a better channel for questions regarding this?
[10:09] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit ()
[10:09] * mjblw (~mbaysek@wsip-174-79-34-244.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[10:09] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[10:10] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has left #ceph
[10:10] * allsystemsarego (~allsystem@188.25.130.226) has joined #ceph
[10:13] * hugo (~hugo@220-135-5-231.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[10:15] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[10:15] * matsuhashi (~matsuhash@124x35x46x9.ap124.ftth.ucom.ne.jp) has joined #ceph
[10:17] <sherry> how to up osd map that I just created in developer mode?
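
For a hand-built ("developer mode") cluster, an OSD that was only added to the crush tree will stay down until its data directory is initialised and its daemon started; a sketch assuming the new id is 1, default paths, and a host bucket named myhost (a vstart.sh cluster manages its own daemons, so this applies to manual setups):

    ceph osd create                                   # allocate the next osd id
    ceph-osd -i 1 --mkfs --mkkey                      # initialise the data directory and keyring
    ceph auth add osd.1 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-1/keyring
    ceph osd crush add osd.1 1.0 host=myhost          # bucket arguments depend on the crush map
    ceph-osd -i 1                                     # start the daemon; 'ceph osd tree' should now show it up
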
[10:19] * hugo (~hugo@220-135-5-231.HINET-IP.hinet.net) has joined #ceph
[10:20] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[10:21] * KindTwo (~KindOne@h23.215.89.75.dynamic.ip.windstream.net) has joined #ceph
[10:23] * sherry (~sherry@wireless-nat-10.auckland.ac.nz) Quit (Quit: Konversation terminated!)
[10:24] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:24] * BillK (~BillK-OFT@124.148.203.138) Quit (Ping timeout: 480 seconds)
[10:26] * BillK (~BillK-OFT@220-253-193-83.dyn.iinet.net.au) has joined #ceph
[10:26] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[10:26] <andreask> vipr: well wido developed the ceph integration fro cloudstack ... so I'd say he has most experience with it ;-)
[10:27] * mjblw (~mbaysek@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[10:27] <wido> vipr: andreask: Indeed
[10:27] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[10:27] <wido> vipr: Something you are running in to?
[10:28] <andreask> hugo: you mean you can't go beyond 100 connections?
[10:29] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley_)
[10:30] <hugo> andreask: My environment consists of 4 nodes: 1 RadosGW + 3 OSD nodes, 33 OSDs in total. While using the swift benchmark tools, at 100 concurrency the throughput reaches 1500reqs/sec
[10:31] * KindTwo (~KindOne@h23.215.89.75.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[10:31] <hugo> but more concurrency will reduce the performance.. For example, 700reqs/sec for 200 concurrency. The object size used was 1KB
[10:31] <hugo> higher concurrency is degrading the reqs/sec
[10:31] <andreask> hugo: ... rgw thread pool size is at 100 by default ... or did you already increase that?
[10:32] <hugo> andreask: I change it to 200
[10:32] <hugo> rgw_thread_pool_size = 200
[10:32] <vipr> wido: We're currently running cloudstack 4.1, and having trouble with using the ceph cluster as primary storage.
[10:33] * LeaChim (~LeaChim@05407724.skybroadband.com) has joined #ceph
[10:33] * KindTwo (~KindOne@h50.44.28.71.dynamic.ip.windstream.net) has joined #ceph
[10:33] <hugo> how do I verify the thread pool size is 200 now ? I did restart radosgw after tweaking the value
[10:33] <vipr> The problem is at the disk attaching process, it fails due to getting wrong pool usage information.
[10:33] <vipr> It's the same bug as this: https://issues.apache.org/jira/browse/CLOUDSTACK-3542
[10:34] <vipr> Problem is, that the proposed solution, compiling libvirt 1.1.0, doesn't work for us
[10:34] <vipr> Still getting the same error
[10:34] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) Quit (Remote host closed the connection)
[10:34] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[10:35] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:35] * KindTwo is now known as KindOne
[10:35] <andreask> hugo: ceph --admin-daemon {/path/to/admin/socket} config show
[10:35] <vipr> Putting together a pastebin with log info
[10:36] <vipr> wido: http://pastebin.com/7pPbTwss
[10:37] * Bada (~Bada@195.65.225.142) has joined #ceph
[10:40] * angdraug (~angdraug@c-98-248-39-148.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[10:42] <hugo> rgw_socket_path = /tmp/radosgw.sock ==> ceph --admin-daemon /tmp/radosgw.sock config show ?
[10:44] * waldi (~waldi@shell.thinkmo.de) has joined #ceph
[10:44] <waldi> hi
[10:44] <waldi> for testing purposes i tried to setup ceph with one osd. all the pgs remained stuck, even if the pools have "size=1"
[10:45] <waldi> after some testing it turns out that this always happens if the crush map only has one osd for a given input
[10:45] <andreask> hugo: the admin socket should be in /var/run/ceph/ .... postfix asok
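
Combining the two hints, the running value can be read from the radosgw admin socket rather than the FastCGI socket hugo tried; the socket filename below is an assumption and depends on the client name the gateway runs as:

    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok config show | grep rgw_thread_pool_size
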
[10:45] <waldi> so it even happens if two osds are available but not dispersed enough to be selected
[10:45] <waldi> can i change this behaviour?
[10:50] <andreask> waldi: have a look here http://ceph.com/docs/next/start/quick-ceph-deploy/
[10:50] <hugo> andreask: awesome.... you are right. The value remains at 100 for each osd
[10:51] <andreask> waldi: you need "osd crush chooseleaf type = 0" in your configuration
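
As a sketch, the single-OSD test setup waldi describes wants both of these in ceph.conf before the cluster is created, so that the default pools and crush rule are built for a one-OSD world:

    [global]
        osd pool default size = 1          # one replica, matching the "size=1" pools
        osd crush chooseleaf type = 0      # choose osds rather than hosts, so a single osd satisfies crush
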
[10:51] <hugo> andreask: I *thought* the way to tweak the pool size is by adding a line in /etc/ceph.conf
[10:54] <andreask> hugo: you mean default and min size? ... yes
[10:54] * BillK (~BillK-OFT@220-253-193-83.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[10:55] * BillK (~BillK-OFT@220-253-183-250.dyn.iinet.net.au) has joined #ceph
[10:56] <hugo> andreask: so the value should be set in ceph.conf on all nodes rather than only on the RadosGW node ?
[10:56] * bernieke (~bernieke@176-9-206-129.cinfuserver.com) has joined #ceph
[10:57] <wido> vipr: Ah, yes, I've seen that before
[10:57] <andreask> hugo: you are referring to rgw_thread_pool_size?
[10:57] <wido> Somehow libvirt gets confused
[10:57] <wido> Not sure what it is
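
When chasing this kind of CloudStack/libvirt mismatch it can help to look at what libvirt itself reports for the RBD storage pool, independent of CloudStack; the pool name below is hypothetical:

    virsh pool-list --all
    virsh pool-refresh cloudstack-rbd-pool     # force libvirt to re-read pool usage
    virsh pool-info cloudstack-rbd-pool        # the capacity/allocation figures the management layer consumes
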
[10:58] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[10:59] <bernieke> we had some networking issues, and now our ceph-mon don't seem to be able to reach quorum (if I'm reading the situation correctly)
[10:59] <bernieke> we stopped all non-mon services, and one mon is saying electing in the log, and other two probing
[11:00] <waldi> andreask: works. in the crush documentation it is somehow documented, not really clear
[11:00] <vipr> wido: Hmm that's unfortunate, do you have any advice on what we can try, or are we in the dark here?
[11:00] <hugo> andreask: yes the rgw_thread_poolsize
[11:01] <hugo> andreask: What's the correct way to change the pool_size now ?
[11:02] <loicd> ccourtaut: would you be so kind as to try and compile master ? I can't and I don't know if it's just me.
[11:02] <ccourtaut> loicd: ok, i'll launch that
[11:03] <hugo> andreask: I think a link or doc could be helpful :) thanks
[11:03] <andreask> hugo: you mean beside ... ceph osd pool set _poolname_ size _nr-of-replicas_
[11:04] <loicd> hobject.cc not found it says
[11:04] <andreask> hugo: docs ... mass of ;-) ... http://ceph.com/docs/master/rados/operations/pools/
[11:04] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Quit: wogri_risc)
[11:05] <andreask> hugo: forget it ... you meant the thread_poolsize ... correct?
[11:05] <hugo> andreask: my fault..... I mean the rgw_thread_pool_size
[11:05] <ccourtaut> loicd: compile in progress
[11:05] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[11:06] <hugo> andreask: My ceph.conf now ... http://pastebin.com/cCTKpZqJ
[11:06] <hugo> andreask: I did restart all ceph daemons by *stop ceph-all* & *start ceph-all*
[11:07] <hugo> andreask: The rgw_thread_pool_size remains at 100. I thought the value was for the radosgw daemon only
[11:08] <andreask> hugo: hmm ... looks fine
[11:09] <andreask> hugo: you restarted also radosgw ?
[11:10] <hugo> andreask: yup... I have to mention that the RadosGW is an isolated server.
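
For reference, the option is normally placed in the radosgw client section of ceph.conf on the gateway host; the section name below is an assumption, and if the running daemon reads a different file or section the change will never show up in config show:

    [client.radosgw.gateway]
        rgw thread pool size = 200
    # restart only the gateway afterwards, e.g. 'service radosgw restart',
    # then re-check via the admin socket 'config show' as above
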
[11:11] <hugo> well... maybe that's the real performance of RadosGW
[11:12] <hugo> seems that Swift-proxy is better than Radosgw for high concurrency case
[11:17] <andreask> hugo: use several rgw and load balance
[11:18] <hugo> andreask: sounds a good idea. I think the bottleneck of a single rgw comes from apache itself
[11:18] <andreask> hugo: makes sense, that there are also tunings needed
[11:18] <hugo> andreask: thanks for your time today... appreciate it
[11:19] <andreask> hugo: yw
[11:19] <hugo> I'm evaluating the object storage backend of our new upper layer application ... :)
[11:20] <waldi> partner:
[11:20] * waldi (~waldi@shell.thinkmo.de) has left #ceph
[11:20] <hugo> Both Swift & Ceph are candidates :)
[11:21] <andreask> I see ;-)
[11:22] <loicd> ccourtaut: nevermind, it works now ;-)
[11:22] <loicd> sorry for the noise
[11:22] <ccourtaut> loicd: what was the problem? didn't compile for me either ^^
[11:23] <loicd> really ? it failed to find hobject.cc although it was there
[11:24] <loicd> but I only have the problem with a checkout that was already configured with ./configure in the past, not a fresh checkout
[11:25] <loicd> make[3]: *** No rule to make target `os/hobject.cc', needed by `libcommon_la-hobject.lo'. Stop.
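
That error is the usual symptom of stale automake dependency files pointing at a source file that has since moved; a sketch of resetting a previously configured checkout, assuming nothing uncommitted needs to be kept:

    # from the top of the ceph source tree; discards all untracked and ignored files
    git clean -dfx
    ./autogen.sh
    ./configure
    make -j4
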
[11:25] <andreask> hugo: out of curiosity .. you also tried lighttpd?
[11:26] <hugo> andreask: nope... I followed the ceph online document to build a Ceph pool for testing. I found only instructions for apache tho.
[11:26] * matsuhashi (~matsuhash@124x35x46x9.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[11:26] <hugo> andreask: any reference doc would be great :>
[11:26] * matsuhashi (~matsuhash@124x35x46x9.ap124.ftth.ucom.ne.jp) has joined #ceph
[11:29] * rBEL (robbe@november.openminds.be) has joined #ceph
[11:30] <bernieke> i've stopped all mons and removed two from the thirds monmap (extract / rm / inject), and then started the third again
[11:30] <bernieke> but even now it's still probing
[11:30] <bernieke> I did get a message in the log "failed to create new leveldb store"
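
For readers following along, the extract/rm/inject cycle bernieke describes is roughly the following (monitor ids a, b, c are placeholders, with c the survivor); it only helps if the store.db underneath is still healthy:

    service ceph stop mon.c                        # or however the monitor is stopped on this system
    ceph-mon -i c --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap                 # inspect the current membership
    monmaptool --rm a /tmp/monmap                  # drop the dead monitors
    monmaptool --rm b /tmp/monmap
    ceph-mon -i c --inject-monmap /tmp/monmap
    service ceph start mon.c
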
[11:31] <andreask> hugo: well, best tested is definitely apache .. and you are using the special mod_fastcgi packages for ceph?
[11:32] <hugo> andreask: yup... I'm using the special mod_fastcgi
[11:33] <hugo> andreask: I'm pretty sure the bottleneck is on radosgw now. While I'm using the rados bench for 1KB write. The performance is much better. About 5000ops/sec
[11:36] <loicd> ccourtaut: I think it's a non-interesting problem related to dependancies handlining in automake
[11:36] <ccourtaut> ok
[11:37] <loicd> s/dancies/dencies/
[11:37] * yy-nm (~Thunderbi@218.74.35.201) Quit (Quit: yy-nm)
[11:37] <andreask> hugo: you could place some numbers on mailing-list, for devs to comment ...they may have some good tuning hints
[11:38] <hugo> andreask: I'll have a chart later
[11:38] <hugo> btw , I'm encountering a problem purging all removed objects XD
[11:39] <hugo> It's probably a bug in Cuttlefish
[11:44] <bernieke> when I issue "ceph-mon -i 9 --compact" (my store.db is 1.9gb) I also get that "failed to create new leveldb store" error in the logs
[11:44] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[11:45] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) Quit (Remote host closed the connection)
[11:49] <wido> vipr: I'm still in the dark. I'm not seeing this on my test systems with about 80TB, but I've seen other reports
[11:49] <wido> It is a libvirt thing
[12:01] <hugo> Hi all , where can I find the source code of RadosGW ?
[12:16] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[12:17] * nerdtron (~kenneth@202.60.8.252) Quit (Remote host closed the connection)
[12:19] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[12:19] <andreask> hugo: https://github.com/ceph/ceph
[12:21] <hugo> found it
[12:22] * julian (~julianwa@125.70.133.187) Quit (Quit: afk)
[12:25] <hugo> funny ... I deleted all objects in .rgw & .rgw.buckets. The swift list still shows me containers list.
[12:29] * grepory (~Adium@34.sub-70-192-199.myvzw.com) has joined #ceph
[12:29] * BillK (~BillK-OFT@220-253-183-250.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[12:30] * matsuhashi (~matsuhash@124x35x46x9.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[12:31] * hugo (~hugo@220-135-5-231.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[12:31] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) has joined #ceph
[12:32] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[12:32] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[12:32] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[12:33] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[12:33] * ChanServ sets mode +v andreask
[12:34] * mschiff (~mschiff@pD9511954.dip0.t-ipconnect.de) has joined #ceph
[12:34] * joao (~joao@89-181-152-211.net.novis.pt) has joined #ceph
[12:34] * ChanServ sets mode +o joao
[12:47] * grepory (~Adium@34.sub-70-192-199.myvzw.com) Quit (Quit: Leaving.)
[12:47] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[12:48] * ismell (~ismell@host-24-56-171-198.beyondbb.com) Quit (Read error: Operation timed out)
[12:48] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Read error: Operation timed out)
[12:48] * ismell (~ismell@host-24-56-171-198.beyondbb.com) has joined #ceph
[12:49] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:02] <bernieke> after some messing around I now have two mon in (electing) state, and another which goes from (synchronizing) to (probing)
[13:03] <bernieke> the third's store.db is only 40k though, while the two others are now at 1021mb after compacting
[13:05] * grepory (~Adium@116.sub-70-192-201.myvzw.com) has joined #ceph
[13:09] * lupine (~lupine@lupine.me.uk) Quit (Quit: If you see this, my dogfood was poisoned)
[13:09] * lupine (~lupine@lupine.me.uk) has joined #ceph
[13:11] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[13:15] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[13:16] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[13:17] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit ()
[13:24] * alex___11 (~alex___11@94-143-117-240.enovance.net) has joined #ceph
[13:25] * alex___11 (~alex___11@94-143-117-240.enovance.net) has left #ceph
[13:38] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Quit: wogri_risc)
[13:38] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[13:42] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley_)
[13:44] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[13:48] * shdb_ (~shdb@80-219-0-163.dclient.hispeed.ch) Quit (Remote host closed the connection)
[13:48] * shdb (~shdb@gw.ptr-62-65-159-122.customer.ch.netstream.com) has joined #ceph
[13:51] * l0nkaji (~l0nkaji@94-143-117-240.enovance.net) has joined #ceph
[13:52] * yanzheng (~zhyan@134.134.139.70) has joined #ceph
[13:56] * AfC (~andrew@2001:44b8:31cb:d400:2ad2:44ff:fe08:a4c) has joined #ceph
[13:58] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:03] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:04] * sprachgenerator (~sprachgen@16.sub-70-208-152.myvzw.com) has joined #ceph
[14:06] <bernieke> ok, so i've gone back to the one-monitor scenario, and even though extract/print shows me a monmap with only one monitor, admin-daemon quorum_status and mon_status still show me all three...
[14:06] <bernieke> is there something more than inject I can do to get rid of them?
[14:08] <bernieke> I also removed them from ceph.conf (even though they shouldn't be looking there)
[14:08] * l0nkaji is now known as l0nakji
[14:09] * l0nakji is now known as l0nkaji
[14:13] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[14:15] * TiCPU (~jeromepou@190-130.cgocable.ca) has joined #ceph
[14:17] * grepory (~Adium@116.sub-70-192-201.myvzw.com) Quit (Quit: Leaving.)
[14:17] * LeaChim (~LeaChim@05407724.skybroadband.com) Quit (Ping timeout: 480 seconds)
[14:18] * marrusl (~mark@207.96.227.9) Quit (Remote host closed the connection)
[14:19] <loicd> anyone willing to review a small patch ( adds key=value option to osd pool create ;-) https://github.com/ceph/ceph/pull/578 ?
[14:20] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[14:22] <loicd> http://www.gnu.org/software/global/ ccourtaut do you use this or something else to navigate the code ?
[14:22] <loicd> maybe you told me but I don't remember
[14:24] * diegows (~diegows@190.190.11.42) has joined #ceph
[14:24] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[14:26] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[14:26] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[14:26] * tryggvil (~tryggvil@178.19.53.254) Quit (Remote host closed the connection)
[14:27] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[14:27] * LeaChim (~LeaChim@054073b1.skybroadband.com) has joined #ceph
[14:27] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[14:29] * AfC (~andrew@2001:44b8:31cb:d400:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[14:30] * sleinen1 (~Adium@2001:620:0:46:14c5:f594:8f4f:56c9) Quit (Ping timeout: 480 seconds)
[14:30] * clayb (~kvirc@proxy-nj2.bloomberg.com) has joined #ceph
[14:31] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[14:33] * grepory (~Adium@50-200-116-163-static.hfc.comcastbusiness.net) has joined #ceph
[14:40] * l0nkaji is now known as alexoooooaaazzeeddss
[14:40] * alexoooooaaazzeeddss (~l0nkaji@94-143-117-240.enovance.net) has left #ceph
[14:43] * ross_ (~ross@60.208.111.209) Quit (Ping timeout: 480 seconds)
[14:44] <bernieke> when we go nosing in the leveldb, we can see the old monmap with the three monitors still being the latest (even though a monmaptool extract will show the map with only one monitor)
[14:44] <bernieke> does anyone know how we can inject a new monmap into the leveldb without having to modify the thing manually?
[14:49] * TiCPU_ (~jeromepou@c207.134.3-34.clta.globetrotter.net) has joined #ceph
[14:53] * zhyan_ (~zhyan@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[14:53] * yanzheng (~zhyan@134.134.139.70) Quit (Remote host closed the connection)
[14:55] * TiCPU (~jeromepou@190-130.cgocable.ca) Quit (Ping timeout: 480 seconds)
[14:57] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[15:00] * thomnico (~thomnico@modemcable105.166-161-184.mc.videotron.ca) has joined #ceph
[15:00] <bernieke> well, we modified it manually with the python bindings, and now we have quorum with our single monitor
[15:01] <bernieke> I guess we'll just squash the other two and add them once more from scratch
[15:13] * dmsimard (~Adium@ap08.wireless.co.mtl.iweb.com) has joined #ceph
[15:17] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:19] * markbby (~Adium@168.94.245.1) has joined #ceph
[15:20] * thomnico (~thomnico@modemcable105.166-161-184.mc.videotron.ca) Quit (Quit: Ex-Chat)
[15:24] * thomnico (~thomnico@modemcable105.166-161-184.mc.videotron.ca) has joined #ceph
[15:35] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[15:40] <bernieke> right, haven't even gotten to adding the two mons again, still running with the single mon (all config files modified)
[15:41] <bernieke> got everything running on that one node with the running monitor, but on the others I can't do fi. "ceph health" (or launch the osd's)
[15:41] <bernieke> after a few minutes I see: "monclient: hunting for new mon"
[15:41] <bernieke> if I CTRL-C I get: "EINTR: problem getting command descriptions from mon."
[15:42] <bernieke> I've straced a "ceph health" and I see it connecting to the correct ip:port of the single running monitor
[15:42] <bernieke> I don't understand enough of the rest to tell me anything else though
[15:47] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[15:48] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Remote host closed the connection)
[15:50] * ScOut3R (~ScOut3R@catv-89-133-21-146.catv.broadband.hu) has joined #ceph
[15:51] * thomnico (~thomnico@modemcable105.166-161-184.mc.videotron.ca) Quit (Read error: No route to host)
[15:53] * ScOut3R_ (~ScOut3R@catv-89-133-21-203.catv.broadband.hu) has joined #ceph
[15:54] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:55] * ScOut3R__ (~ScOut3R@catv-89-133-21-203.catv.broadband.hu) has joined #ceph
[15:57] * bclark (~bclark@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:58] * marrusl (~mark@64.34.151.178) has joined #ceph
[15:59] * ScOut3R (~ScOut3R@catv-89-133-21-146.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[16:01] * ScOut3R_ (~ScOut3R@catv-89-133-21-203.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[16:11] * jcsp (~john@fluency-gw1.summerhall.co.uk) has joined #ceph
[16:11] * gaveen (~gaveen@175.157.81.32) has joined #ceph
[16:14] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[16:15] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[16:15] * ChanServ sets mode +v andreask
[16:15] * vata (~vata@2607:fad8:4:6:c4b1:ee75:2192:bc22) has joined #ceph
[16:18] <vipr> wido: I reinstalled the QEMU host with ubuntu 13.04 instead of 12.04 and now disks can be attached
[16:19] <vipr> same qemu and libvirt packages, but somehow it works, so maybe it's an issue with ubuntu 12.04...
[16:22] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:24] <vipr> qemu version is not the same 1.0 vs 1.4 now
[16:25] <vipr> But we tested with qemu 1.6.0 on 12.04 and it also didn't work
[16:25] <vipr> I don't know if this information is helpful, but might as well give it :-)
[16:30] <wido> vipr: Thanks! Good to know
[16:33] * sprachgenerator (~sprachgen@16.sub-70-208-152.myvzw.com) Quit (Quit: sprachgenerator)
[16:33] * shang (~ShangWu@64.34.151.178) has joined #ceph
[16:41] <bernieke> right, found the problem
[16:41] <bernieke> we had MTU set to 1546 to account for the extra gre tunnel information for quantum
[16:42] <bernieke> which apparently ceph doesn't like?
[16:42] <bernieke> as soon as we set it back to 1500 everything started working fine and dandy once more
[16:42] <bernieke> by the way, setting it to 3000 didn't work either...
[16:46] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:46] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[16:47] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[16:50] * thomnico (~thomnico@modemcable105.166-161-184.mc.videotron.ca) has joined #ceph
[16:52] * TiCPU__ (~jeromepou@190-130.cgocable.ca) has joined #ceph
[16:53] <bernieke> ok, apparently the new switch didn't have jumbo frames enabled
[16:54] <bernieke> after enabling jumbo frames we could go back to mtu 1546
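
A quick way to catch this class of problem is to test the path MTU between ceph hosts with non-fragmenting pings; the hostname and sizes are only an example (28 bytes of IP and ICMP headers sit on top of the payload):

    ping -M do -s 1518 osd-node1    # 1518 + 28 = 1546; times out if a switch in the path drops jumbo frames
    ping -M do -s 1472 osd-node1    # 1472 + 28 = 1500; should always pass on a standard-MTU path
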
[16:59] * TiCPU_ (~jeromepou@c207.134.3-34.clta.globetrotter.net) Quit (Ping timeout: 480 seconds)
[17:00] * Vjarjadian (~IceChat77@176.254.37.210) has joined #ceph
[17:00] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[17:00] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[17:00] * ChanServ sets mode +v andreask
[17:04] * tchmnkyz (~jeremy@0001638b.user.oftc.net) has joined #ceph
[17:04] * zhyan_ (~zhyan@jfdmzpr01-ext.jf.intel.com) Quit (Remote host closed the connection)
[17:05] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[17:07] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[17:09] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) Quit (Read error: Operation timed out)
[17:09] * sjm (~sjm@64.34.151.178) has joined #ceph
[17:09] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[17:10] * sjm (~sjm@64.34.151.178) has joined #ceph
[17:12] * xdeller (~xdeller@91.218.144.129) has joined #ceph
[17:16] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:18] * thomnico (~thomnico@modemcable105.166-161-184.mc.videotron.ca) Quit (Ping timeout: 480 seconds)
[17:19] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) has joined #ceph
[17:26] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[17:26] * haomaiwang (~haomaiwan@124.161.72.27) Quit (Remote host closed the connection)
[17:28] * sjm (~sjm@64.34.151.178) has joined #ceph
[17:28] * mschiff (~mschiff@pD9511954.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[17:28] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[17:28] * mattt (~mattt@92.52.76.140) Quit (Read error: Connection reset by peer)
[17:29] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[17:32] * jcsp (~john@fluency-gw1.summerhall.co.uk) Quit (Ping timeout: 480 seconds)
[17:33] * Bada (~Bada@195.65.225.142) Quit (Ping timeout: 480 seconds)
[17:34] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[17:35] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[17:35] * sleinen1 (~Adium@2001:620:0:25:69d0:9ae4:e856:51b3) has joined #ceph
[17:36] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[17:38] * KevinPerks (~Adium@64.34.151.178) has joined #ceph
[17:38] * sjm (~sjm@64.34.151.178) has joined #ceph
[17:39] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[17:40] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[17:41] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[17:41] * sagelap (~sage@2600:1012:b02f:1b98:f19a:3ea9:afe1:ad87) has joined #ceph
[17:42] * foosinn (~stefan@office.unitedcolo.de) Quit (Quit: Leaving)
[17:42] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[17:44] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[17:44] * sagelap1 (~sage@2600:1012:b009:996:c43:1ae0:800:dee2) has joined #ceph
[17:50] * sagelap (~sage@2600:1012:b02f:1b98:f19a:3ea9:afe1:ad87) Quit (Ping timeout: 480 seconds)
[18:01] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:02] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:03] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[18:04] * grepory1 (~Adium@117.sub-70-208-151.myvzw.com) has joined #ceph
[18:04] * grepory (~Adium@50-200-116-163-static.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[18:04] * angdraug (~angdraug@204.11.231.50.static.etheric.net) has joined #ceph
[18:05] * grepory (~Adium@50-200-116-163-static.hfc.comcastbusiness.net) has joined #ceph
[18:07] * grepory (~Adium@50-200-116-163-static.hfc.comcastbusiness.net) Quit ()
[18:12] * grepory1 (~Adium@117.sub-70-208-151.myvzw.com) Quit (Ping timeout: 480 seconds)
[18:13] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[18:14] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[18:14] * mschiff (~mschiff@46.189.28.48) has joined #ceph
[18:17] * nwat_ (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has left #ceph
[18:19] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[18:22] * sleinen (~Adium@2001:620:0:25:1096:8680:332:cdb5) has joined #ceph
[18:25] * ScOut3R__ (~ScOut3R@catv-89-133-21-203.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:26] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[18:27] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[18:27] * ChanServ sets mode +v andreask
[18:28] * sjm (~sjm@64.34.151.178) has joined #ceph
[18:28] * sleinen1 (~Adium@2001:620:0:25:69d0:9ae4:e856:51b3) Quit (Ping timeout: 480 seconds)
[18:28] * sagelap1 (~sage@2600:1012:b009:996:c43:1ae0:800:dee2) Quit (Ping timeout: 480 seconds)
[18:30] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[18:30] <sagewk> alfredodeza: execnet == pushy replacement?
[18:30] <alfredodeza> sagewk: yes
[18:31] <sagewk> nice
[18:31] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:31] <alfredodeza> it comes from the same guys behind tox and py.test
[18:31] <alfredodeza> extremely well tested and widely supported/used
[18:31] <sagewk> excellent
[18:31] <alfredodeza> and it doesn't hang!
[18:31] <alfredodeza> imagine that
[18:31] <sagewk> :)
[18:34] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Quit: Leaving.)
[18:34] * tobru_ (~quassel@217-162-50-53.dynamic.hispeed.ch) has joined #ceph
[18:37] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[18:37] * waldi (~waldi@shell.thinkmo.de) has joined #ceph
[18:37] <waldi> hi
[18:38] <waldi> how does ceph handle overfull osd?
[18:38] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[18:41] <sagewk> waldi: cluster is marked as full, writes stop
[18:42] * sjm (~sjm@64.34.151.178) has joined #ceph
[18:42] * gregaf (~Adium@2607:f298:a:607:c501:9f75:49ae:ffe5) Quit (Quit: Leaving.)
[18:42] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[18:43] * gregaf (~Adium@2607:f298:a:607:89a0:81e3:33b:bccc) has joined #ceph
[18:44] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[18:46] <waldi> sagewk: is the free space somehow taken into account in the crush map?
[18:48] <kislotniq> it seems to me that http://ceph.com/docs/master/ lacks proper table of contents
[18:48] <kislotniq> it was okay several days ago, when i checked it last time
[18:50] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[18:50] <Psi-Jack> Hmm, 0.61.8 already! LOL.. And 0.67.2? Sheash! You guys move quickly. :)
[18:59] * davidzlap (~Adium@ip68-5-239-214.oc.oc.cox.net) has joined #ceph
[18:59] <sagewk> waldi: crush weights osds proportionally to their size, so with high probability they won't fill up. there is a mon command reweight-by-utilization that will make minor adjustments to compensate for statistical outliers
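
The knobs sagewk refers to, for reference (the osd id and weight are placeholders):

    ceph osd reweight-by-utilization          # nudge the override weight of statistically overfull osds
    ceph osd crush reweight osd.3 1.5         # or adjust a single osd's crush weight by hand
    ceph osd tree                             # shows the per-osd weights and the override reweight values
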
[18:59] * thomnico (~thomnico@64.34.151.178) has joined #ceph
[19:02] <waldi> i'm currently trying to find loopholes in my plan for a project. the plan was to use rados directly without striping. and right now i see a large possibility for severely asymmetric usage of osds
[19:06] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[19:06] <waldi> seems that i can use rbd, which does not have this problem
[19:06] * grepory (~Adium@50-200-116-163-static.hfc.comcastbusiness.net) has joined #ceph
[19:09] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[19:17] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[19:19] * Vjarjadian (~IceChat77@176.254.37.210) Quit (Quit: If at first you don't succeed, skydiving is not for you)
[19:19] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:20] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[19:26] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[19:26] * ChanServ sets mode +v andreask
[19:43] * thomnico (~thomnico@64.34.151.178) Quit (Quit: Ex-Chat)
[19:44] * thomnico (~thomnico@64.34.151.178) has joined #ceph
[19:46] * dmick (~dmick@2607:f298:a:607:4d3d:fe55:b729:c3ee) has joined #ceph
[19:48] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) has joined #ceph
[19:49] * antoinerg (~antoine@dsl.static-187-116-74-220.electronicbox.net) has joined #ceph
[19:50] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[19:51] * diegows (~diegows@200.68.116.185) has joined #ceph
[19:53] <antoinerg> Hello all, would you recommend running Ceph on a single node just for the sake of faster recovery time in case of disk failure?
[19:53] * nwl_ is now known as nwl
[19:54] * thomnico (~thomnico@64.34.151.178) Quit (Quit: Ex-Chat)
[19:55] * sprachgenerator (~sprachgen@182.sub-70-208-154.myvzw.com) has joined #ceph
[19:56] * allsystemsarego (~allsystem@188.25.130.226) Quit (Quit: Leaving)
[19:57] <dmsimard> antoinerg: But what if this single node - the server itself - fails ? Ceph is meant to be run across multiple nodes so that you not only have redundant "disks" per se, but redundant nodes.
[19:58] <dmsimard> If anything, run the same amount of disks as your single node plan but spread on two nodes
[19:58] <antoinerg> I don't need redundant nodes, this would be for storage at home
[19:58] * thomnico (~thomnico@64.34.151.178) has joined #ceph
[19:58] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) Quit (Read error: Operation timed out)
[19:58] <antoinerg> I'm just worried about rebuilding times of HD > 4TB
[19:58] * sleinen (~Adium@2001:620:0:25:1096:8680:332:cdb5) Quit (Quit: Leaving.)
[19:58] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[19:59] <dmick> antoinerg: one of the wins you get from the OSD using a filesystem is that it's not the size of the drive but the amount of data stored on it
[19:59] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[19:59] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Read error: Connection reset by peer)
[20:01] * sleinen (~Adium@2001:620:0:26:51fe:32d4:2984:5ad9) has joined #ceph
[20:03] <antoinerg> dmick: Would I also get an increase in recovery speed since data will be written to an array of disks?
[20:04] * thomnico (~thomnico@64.34.151.178) Quit (Read error: No route to host)
[20:06] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[20:07] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:09] * madkiss (~madkiss@a6264-0299838063.pck.nerim.net) has joined #ceph
[20:09] * alfredodeza is now known as alfredo|noms
[20:14] * sleinen (~Adium@2001:620:0:26:51fe:32d4:2984:5ad9) Quit (Quit: Leaving.)
[20:14] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[20:14] * cjh_ (~cjh@ps123903.dreamhost.com) has joined #ceph
[20:15] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) has joined #ceph
[20:15] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[20:18] <dmick> antoinerg: comparing what to what, exactly, now?
[20:18] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[20:19] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Read error: No route to host)
[20:20] * sleinen (~Adium@2001:620:0:26:d433:f033:b4d8:1c16) has joined #ceph
[20:21] * TiCPU__ (~jeromepou@190-130.cgocable.ca) Quit (Quit: Ex-Chat)
[20:24] <antoinerg> dmick: If I lose a 4TB HD, will data be evenly replicated around to available disks therefore improving speed significantly compared to RAID5
[20:24] <antoinerg> dmick: *improving recovery speed
[20:25] <nhm> antoinerg: the whole cluster is used for recovery yes
[20:25] * zirpu (~zirpu@74.207.224.175) has joined #ceph
[20:25] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[20:25] <nhm> antoinerg: how fast it is probably depends on your network/node setup.
[20:26] <dmick> antoinerg: the real win is that with HWRAID, the controller can't know what's data and what's empty
[20:26] <nhm> antoinerg: At least for replication setups Ceph doesn't have to worry about parity calculations which helps too.
[20:26] <dmick> so it must reconstitute the whole drive
[20:26] <tchmnkyz> antoinerg: i can tell you from tests that were completed here when inktank came in to do consulting time, we saw a ~75% boost in performance going jbod over raid
[20:27] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:27] <nhm> dmick: good point!
[20:27] <dmick> so 1) you only resilver the data, and 2) you do it from the whole cluster
[20:30] <antoinerg> dmick: Awesome! So even in a single-node setup, Ceph protects you better than RAID when dealing with large drives by providing reduced recovery times
[20:30] <tchmnkyz> antoinerg: that is what i found out too
[20:30] * sjm (~sjm@64.34.151.178) has joined #ceph
[20:31] <tchmnkyz> w
[20:31] <antoinerg> tchmnkyz: This should be better advertised as it might be a killer feature of Ceph. I mean, they will come out with 10TB hard drives in a few years. People will want to move away from RAID5
[20:32] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[20:33] <antoinerg> thank you all for your help!
[20:34] * roald (~oftc-webi@87.209.150.214) has joined #ceph
[20:34] * thomnico (~thomnico@64.34.151.178) has joined #ceph
[20:35] <tchmnkyz> antoinerg: raid5 and others like it are the devil with ceph
[20:35] * sprachgenerator (~sprachgen@182.sub-70-208-154.myvzw.com) Quit (Quit: sprachgenerator)
[20:35] <tchmnkyz> raid of any kind seems to just eat performance with it
[20:35] * waldi (~waldi@shell.thinkmo.de) has left #ceph
[20:36] <tchmnkyz> conventional wisdom would say that hardware raid would help speed things up
[20:41] <torment4> is there a proper way to shutdown a ceph cluster
[20:42] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) Quit (Quit: Ex-Chat)
[20:45] <dmsimard> torment4: You've made me curious too, now.
[20:47] <dmsimard> You probably don't want OSDs go rebalancing left and right - so probably have to deal with that with something like "noout"
[20:48] <dmsimard> Maybe http://www.sebastien-han.fr/blog/2012/08/17/ceph-storage-node-maintenance/ can point you in the right direction
[20:49] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[20:52] <torment4> that's interesting. but if you need it all shut down to move it physically, how would one go about that?
[20:53] * loicd reading http://bblank.thinkmo.de/blog/archive/2013/09/09/setting-up-ceph-the-hard-way
[21:07] * alfredo|noms is now known as alfredodeza
[21:15] <sagewk> torment4: ceph osd set noout ; pdsh -a killall ceph-osd ceph-mds ; pdsh -a killall ceph-mon
[21:16] <sagewk> noout is probably superfluous
[21:21] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[21:24] <torment4> very good, thanks!
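A slightly expanded sketch of sagewk's sequence for a full shutdown and later restart; the pdsh host set and the "service ceph start" init wrapper are assumptions to be adapted to your deployment:

    # prevent the cluster from marking stopped OSDs out and rebalancing
    ceph osd set noout
    # stop daemons on every node: OSDs and MDSs first, monitors last
    pdsh -a 'killall ceph-osd ceph-mds'
    pdsh -a 'killall ceph-mon'
    # ... power off, move the hardware, power back on ...
    # start the daemons again (monitors first), then clear the flag
    pdsh -a 'service ceph start'
    ceph osd unset noout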
[21:31] <lxo> consider a consistent snapshot taken of a replicated cluster. it is clear to me that the data of each file in each replica of the pg is supposed to be the same if the snapshot is consistent, but how about the xattrs associated with these files?
[21:32] * sjm (~sjm@64.34.151.178) has joined #ceph
[21:32] <lxo> i.e., are the xattrs ceph puts in them a global property of the cluster (and thus should be the same for all replicas), or is there any per-osd-specific data in the xattrs?
[21:32] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[21:34] <lxo> (the question has to do with whether it is possible to manually rebuild an osd out of its meta/ and omap/ subdirs plus pgs subdirs taken out of other replicas, in case of some major disaster)
[21:36] <gregaf> lxo: the xattrs should be the same, but there's stuff in leveldb too that would also need to be moved
[21:37] <lxo> e.g., I know the user.ceph._parent is the parent attribute that ought to be globally the same; and I suspect the same applies to user.ceph._snapset, but I've no idea about user.ceph._
[21:38] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Remote host closed the connection)
[21:38] <lxo> I also know dirs contain attributes, IIRC pertaining to what files go in each subdir, and I know the exact number of DIR_*s leading to a file may vary across osds, so I assume those xattrs may change as well, but if I take an entire pg out of another osd, that wouldn't be a problem
[21:39] <lxo> yeah, preserving the omap leveldb would definitely be necessary to be able to rebuild an osd's snapshot out of other consistent and clean osds' pg replicas
[21:40] <lxo> but now I wonder if it would be possible to extract the pertinent omap/leveldb data from the other osds, should the leveldb itself become corrupt or somesuch
[21:41] <lxo> (with the frequent disk problems I've faced, I'm surprised I haven't had much leveldb corruption after the btrfs compress bug was fixed)
[21:41] * grepory (~Adium@50-200-116-163-static.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[21:43] <gregaf> there's some tool that can be used to extract a PG but I haven't used it — sjust?
[21:44] <sjust> davidzlap could probably comment
[21:44] <lxo> the reason I'm so interested is that right now I'm undergoing a major reorg of my cluster (big changes to the crushmap), with lots of pg replicas moving about, and I've had to drop last-known-good snapshots of most osds for the reorg to complete. it's a very stressful moment, but if I knew I could rebuild working osds out of the data in other osds, that would give me some significant peace of mind ;-)
[21:45] <sjust> lxo: there is a tool for moving pgs from osd to osd
[21:45] <sjust> lxo: ceph-filestore-dump I think
[21:45] <sjust> in the ceph-test package?
[21:46] <lxo> (and stop me from making the problem worse as I drop more snapshots before the process completes; I've discarded even the omaps and osdmaps before the possibility of reconstructing stuff by hand came to mind :-)
[21:46] <lxo> oooh. nice. I'm gonna look into it.
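A minimal sketch of inspecting those per-object xattrs directly on a stopped filestore OSD, assuming the attribute names lxo mentions above; the OSD path and the object filename are illustrative only:

    # dump the user.ceph.* attributes on one object file inside a PG directory
    getfattr -d -m 'user.ceph' \
        /var/lib/ceph/osd/ceph-0/current/1.2_head/someobject__head_XXXXXXXX__1
    # compare against the same object under the corresponding PG dir on another
    # OSD holding a replica; per the discussion above they should match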
[21:52] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[21:56] * bclark (~bclark@23-25-46-97-static.hfc.comcastbusiness.net) has left #ceph
[21:56] * Vjarjadian (~IceChat77@176.254.37.210) has joined #ceph
[21:58] * thomnico (~thomnico@64.34.151.178) Quit (Ping timeout: 480 seconds)
[21:59] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[21:59] * zirpu (~zirpu@74.207.224.175) has left #ceph
[22:00] * thomnico (~thomnico@64.34.151.178) has joined #ceph
[22:01] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[22:04] <sagewk> zackc: have a sec?
[22:05] <zackc> sagewk: sure
[22:05] * sjm (~sjm@64.34.151.178) has joined #ceph
[22:05] <sagewk> Tamil: you probably have an opinion here
[22:05] <sagewk> wondering if we should do away with the subdirs in the test dir
[22:05] <sagewk> right now it's ~ubuntu/cephtest/<something>/...
[22:06] <sagewk> the idea being that you wouldn't fail if some other test failed to clean up. but the (earlier) idea is that the test dir is almost a lock/backstop so that you don't step on another test
[22:06] <sagewk> hitting some stupid bug in this code and never liked the subdirs; tempted to just rip it out and simplify
[22:07] <zackc> yeah it's always seemed unnecessary to me
[22:07] <zackc> for locks, we have a lockserver, no?
[22:08] <sagewk> yeah, but sometimes machines get unlocked without getting cleaned up.
[22:08] <sagewk> since this dir is removed last, it's a flag that something is possibly not clean
[22:10] <zackc> that's true... but why are dirty machines being unlocked?
[22:10] <sagewk> usually user error
[22:11] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[22:11] <zackc> hmm
[22:12] <zackc> and what is the issue you're hitting right now?
[22:17] * xmltok_ (~xmltok@pool101.bizrate.com) has joined #ceph
[22:17] * xmltok (~xmltok@pool101.bizrate.com) Quit (Read error: Connection reset by peer)
[22:20] * iggy_ (~iggy@theiggy.com) Quit (Remote host closed the connection)
[22:20] * iggy_ (~iggy@theiggy.com) has joined #ceph
[22:20] <lxo> sjust, thanks for the tip. ceph-filestore-dump looks useful for certain cases indeed, but AFAICT not for the sort of disaster recovery I have in mind. if it could export from a preserved snap_* or clustersnap_* dir, rather than from an osd's current+journal, and import to another such tree, I think it would be more useful to the disaster scenario
[22:21] <sjust> lxo: it would have to be modified for that
[22:21] <lxo> but then... it doesn't require the filestore to be any of the actual osds, so I guess one can set up a fake osd dir with a zeroed journal and the chosen snapdir as current/, export from that, and similarly import into such a fake tree, later turned into an osd's tree, no?
[22:23] * chutz (~chutz@rygel.linuxfreak.ca) Quit (Quit: Leaving)
[22:23] <sjust> probably
[22:24] <davidzlap> gregaf: ceph-filestore-dump can be used like this: sudo ceph_filestore_dump --filestore-path /var/lib/ceph/osd/ceph-0 --journal-path /var/lib/ceph/osd/ceph-0/journal --pgid 1.2 --type export --file /tmp/export1.2
[22:24] <gregaf> sjust: sagewk: one of you want to review https://github.com/ceph/ceph/pull/580 ? it's the redirect stuff
[22:25] <lxo> but then... I suppose copying the files (say with rsync -aAX or cp -R --preserve=all, plus --reflink, if in the same filesystem) and exporting/importing any additional metadata might be more efficient. but there's no immediate way to export/import just the metadata AFAICT
[22:26] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[22:26] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[22:27] * chutz (~chutz@rygel.linuxfreak.ca) Quit ()
[22:27] <lxo> it would sure help if we could export to and import from a pipe, too, a bit like btrfs send|btrfs receive ;-)
[22:27] <sjust> lxo: might make sense to put together a ticket outlining the precise use cases
[22:27] <lxo> I guess my cluster falling apart from this crushmap revamp will be a great motivator for me to implement this stuff ;-D but I still hope it doesn't ;-)
[22:28] <lxo> 'k, will do
[22:28] <lxo> (file the ticket, for now ;-)
[22:28] <sjust> lxo: and if you could include invocation examples, it would cut down on future interface bike-shedding :P
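One possible invocation sketch for such a ticket, modeled on davidzlap's export example below at 22:24 and on sjust's description of the tool as moving pgs between osds; the import-side flags are an assumption, not verified syntax:

    # export PG 1.2 from a stopped source OSD to a file
    sudo ceph_filestore_dump --filestore-path /var/lib/ceph/osd/ceph-0 \
        --journal-path /var/lib/ceph/osd/ceph-0/journal \
        --pgid 1.2 --type export --file /tmp/1.2.export
    # import it into a stopped destination OSD (assuming the pg id is
    # recorded in the dump and need not be repeated here)
    sudo ceph_filestore_dump --filestore-path /var/lib/ceph/osd/ceph-5 \
        --journal-path /var/lib/ceph/osd/ceph-5/journal \
        --type import --file /tmp/1.2.export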
[22:30] * mnash_ (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[22:30] * KevinPerks1 (~Adium@64.34.151.178) has joined #ceph
[22:31] * mschiff (~mschiff@46.189.28.48) Quit (Read error: Connection reset by peer)
[22:31] * mschiff (~mschiff@46.189.28.48) has joined #ceph
[22:32] * antoinerg (~antoine@dsl.static-187-116-74-220.electronicbox.net) Quit (Ping timeout: 480 seconds)
[22:32] * antoinerg (~antoine@dsl.static-187-116-74-220.electronicbox.net) has joined #ceph
[22:33] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[22:33] * chutz (~chutz@rygel.linuxfreak.ca) Quit ()
[22:34] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[22:34] * mnash_ is now known as mnash
[22:35] * KevinPerks (~Adium@64.34.151.178) Quit (Ping timeout: 480 seconds)
[22:36] * sjm (~sjm@64.34.151.178) has joined #ceph
[22:38] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[22:39] * mschiff (~mschiff@46.189.28.48) Quit (Remote host closed the connection)
[22:40] * mschiff (~mschiff@46.189.28.159) has joined #ceph
[22:44] * mschiff (~mschiff@46.189.28.159) Quit (Remote host closed the connection)
[22:45] * mschiff (~mschiff@46.189.28.159) has joined #ceph
[22:49] * mschiff (~mschiff@46.189.28.159) Quit (Remote host closed the connection)
[22:49] <lxo> sjust, http://tracker.ceph.com/issues/6261
[22:50] <lxo> (about ceph-filestore-dump wishlist items)
[22:50] <lxo> gotta go now. thanks again
[22:50] * thomnico (~thomnico@64.34.151.178) Quit (Quit: Ex-Chat)
[22:51] * mschiff (~mschiff@46.189.28.159) has joined #ceph
[22:51] <davidzlap> lxo: FYI, If you don't specify —file the export goes to stdout and an import reads from stdin.
[22:54] * Steki (~steki@198.199.65.141) has joined #ceph
[22:57] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[22:58] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[22:58] <lxo> is that so? nice!
[22:58] <lxo> thanks
[22:58] <lxo> ok, filed one more, really gone now ;-)
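Given davidzlap's note that omitting --file makes export write to stdout and import read from stdin, a piped transfer in the spirit of btrfs send | btrfs receive is conceivable; this is only a sketch, with hostnames, paths and the import flags assumed rather than verified:

    sudo ceph_filestore_dump --filestore-path /var/lib/ceph/osd/ceph-0 \
        --journal-path /var/lib/ceph/osd/ceph-0/journal \
        --pgid 1.2 --type export \
      | ssh otherhost sudo ceph_filestore_dump \
        --filestore-path /var/lib/ceph/osd/ceph-5 \
        --journal-path /var/lib/ceph/osd/ceph-5/journal \
        --type import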
[23:01] <sjust> davidzlap: sure, lookin
[23:02] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[23:04] <sjust> davidzlap: actually, I'll defer to sagewk on that one
[23:04] <sagewk> lookin
[23:05] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[23:05] * chutz (~chutz@rygel.linuxfreak.ca) Quit (Quit: Leaving)
[23:06] <sagewk> davidzlap: i'd make it EEXIST instead of EINVAL on crush_add_bucket
[23:07] <davidzlap> sagewk: ok. What is with directory test/old? There is a testcrush.cc there, but it doesn't get built anymore..
[23:07] <sagewk> and probably drop the fprintf since nobody else in that file is doing it
[23:07] <sagewk> i think it is very very old..
[23:07] <davidzlap> sagewk: sure, ok
[23:08] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[23:08] * chutz (~chutz@rygel.linuxfreak.ca) Quit ()
[23:08] <sagewk> and verify that the error is understandable. if not, the parse_bucket() should check for EEXIST and print an understandable message
[23:08] * thomnico (~thomnico@64.34.151.178) has joined #ceph
[23:10] <sagewk> gregaf: did you see my github comments before 6033 got rebased?
[23:10] <gregaf> yeah, will review when I'm done building my desk; thanks!
[23:10] <sagewk> cool
[23:10] <gregaf> sorry about the rebase, realized that a commit comment was out of date
[23:13] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[23:13] * chutz (~chutz@rygel.linuxfreak.ca) Quit ()
[23:17] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[23:19] * vata (~vata@2607:fad8:4:6:c4b1:ee75:2192:bc22) Quit (Quit: Leaving.)
[23:19] <sagewk> np. btw a couple other commits got caught up in the rebase
[23:20] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:20] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[23:24] <gregaf> bah humbug, will fix
[23:29] <madkiss> do we have http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-precise-x86_64-basic/ for i386 somewhere?
[23:29] * dmsimard (~Adium@ap08.wireless.co.mtl.iweb.com) Quit (Quit: Leaving.)
[23:29] * dmsimard (~Adium@70.38.0.251) has joined #ceph
[23:30] <sagewk> madkiss: don't think so
[23:30] <madkiss> hm
[23:32] <sagewk> iirc the package was built manually. don't think we did an i386 one
[23:32] <madkiss> ugh. :( can I find the .diff.gz/.dsc/.orig.tar.gz somewhere?
[23:32] * tobru_ (~quassel@217-162-50-53.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:33] <sagewk> glowell: do you know where the fastcgi package dsc etc are
[23:33] <sagewk> ?
[23:33] <sagewk> or dmick?
[23:35] <madkiss> Background: I see this in my apache log
[23:35] <madkiss> [Mon Sep 09 21:25:30 2013] [error] [client 192.168.122.1] chunked Transfer-Encoding forbidden: /swift/v1/test/foobar
[23:36] <madkiss> And google suggests that installing the inktank fastcgi module might work
[23:38] * dmsimard (~Adium@70.38.0.251) Quit (Ping timeout: 480 seconds)
[23:39] <roald> [23:07] <davidzlap> sagewk: ok. What is with directory test/old? There is a testcrush.cc there, but it doesn't get built anymore.. <-- could be that I removed it from the make, but I don't recall seeing anything like a src/test/old before...
[23:40] <sagewk> roald: pretty sure it predates the use of automake even. old stuff.
[23:40] <glowell> sagewk: they are out on gitbuilder
[23:41] <roald> so it's truly worthy of its name :-)
[23:41] <sagewk> yep
[23:42] <madkiss> http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-precise-x86_64-basic/ref/master/dists/precise/main/source/ is empty :(
[23:42] <madkiss> (the files in there are)
[23:44] <glowell> I'll have to chase them down if they never got added to the repo.
[23:45] <madkiss> okay
[23:46] <glowell> the arm build has source: http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-quantal-arm7l-basic/ref/repo/pool/main/liba/libapache-mod-fastcgi/
[23:47] <madkiss> i probably wouldn't have looked there ;)
[23:47] <madkiss> the .orig.tar.gz is the same as the vanilla debian one?
[23:48] <glowell> I believe so. I haven't looked at it for a while.
[23:48] <dmick> madkiss: the last I touched it, I put the precise version with patches into github. It was never clean when I left it; I don't know if anyone's ever cleaned it up into a "set of patches on top of upstream"
[23:49] <madkiss> okay
[23:49] <madkiss> so in fact, there isn't a real way to rebuild the package from scratch for i386?
[23:50] <dmick> well that version should be buildable
[23:50] <loicd> sagewk: sjust https://github.com/ceph/ceph/pull/518 erasure plugin mechanism and abstract API is ready to merge. I feel compelled to let you know right away because I fear another rebase will be necessary in the next 30 minutes (kidding ;-)
[23:51] <dmick> but if upstream has moved, you'll need to extract the diffs, reapply/resolve, and rebuild
[23:51] <madkiss> oic
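A hedged sketch of rebuilding that package for i386 from the source files glowell pointed at; the exact .dsc filename under the gitbuilder URL is illustrative, and dget/dpkg-buildpackage are standard Debian tooling rather than anything Ceph-specific:

    # fetch the .dsc plus the .orig.tar.gz/.diff.gz it references and unpack them
    dget -u http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-quantal-arm7l-basic/ref/repo/pool/main/liba/libapache-mod-fastcgi/libapache-mod-fastcgi_X.Y.Z.dsc
    # build binary packages for the local (i386) architecture, unsigned
    cd libapache-mod-fastcgi-*/
    dpkg-buildpackage -b -us -uc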
[23:52] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[23:52] * KevinPerks1 (~Adium@64.34.151.178) Quit (Quit: Leaving.)
[23:53] <sjust> loicd: on it!
[23:53] <loicd> sjust: :-)
[23:54] <sjust> 1 down
[23:54] <sjust> or rather, merged that one
[23:55] <madkiss> dmick: looks good, thanks!
[23:55] <dmick> madkiss: yw. sorry it's not cleaner.
[23:56] <madkiss> and indeed that fixes the problem.
[23:57] * shang (~ShangWu@64.34.151.178) Quit (Quit: Ex-Chat)
[23:57] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Read error: Operation timed out)
[23:58] <loicd> sjust: cool thanks !
[23:58] * sleinen (~Adium@2001:620:0:26:d433:f033:b4d8:1c16) Quit (Quit: Leaving.)
[23:58] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[23:59] <madkiss> dmick: thank you for the help, it's appreciated :)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.