#ceph IRC Log

IRC Log for 2012-02-28

Timestamps are in GMT/BST.

[0:30] <sagewk> pulsar: ping?
[0:33] * fronlius (~fronlius@f054187150.adsl.alicedsl.de) Quit (Quit: fronlius)
[0:41] * tnt__ (~tnt@235.36-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[0:52] * The_Bishop (~bishop@cable-89-16-138-109.cust.telecolumbus.net) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[1:04] * Tv|work (~Tv__@aon.hq.newdream.net) has joined #ceph
[1:14] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[1:18] * elder_ (~elder@aon.hq.newdream.net) Quit (Read error: Connection reset by peer)
[1:20] * elder (~elder@aon.hq.newdream.net) has joined #ceph
[1:24] * lofejndif (~lsqavnbok@29.Red-81-39-149.dynamicIP.rima-tde.net) Quit (Quit: Leaving)
[1:37] * yoshi (~yoshi@p8031-ipngn2701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[1:41] * BManojlovic (~steki@212.200.243.16) Quit (Remote host closed the connection)
[2:01] * jluis (~JL@ace.ops.newdream.net) Quit (Ping timeout: 480 seconds)
[2:05] * Tv|work (~Tv__@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[2:32] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[2:47] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) has joined #ceph
[3:18] * pruby (~tim@leibniz.catalyst.net.nz) Quit (Remote host closed the connection)
[3:20] * pruby (~tim@leibniz.catalyst.net.nz) has joined #ceph
[3:25] * pruby (~tim@leibniz.catalyst.net.nz) Quit (Remote host closed the connection)
[3:27] * pruby (~tim@leibniz.catalyst.net.nz) has joined #ceph
[3:29] * joshd1 (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[4:42] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) Quit (Quit: Zzzzzz)
[4:58] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) has joined #ceph
[5:03] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) Quit (Quit: Zzzzzz)
[5:15] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) has joined #ceph
[5:15] * chutzpah (~chutz@216.174.109.254) Quit (Quit: Leaving)
[6:07] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[6:47] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) Quit (Quit: Zzzzzz)
[7:04] * elder (~elder@aon.hq.newdream.net) Quit (Quit: Leaving)
[7:29] * Kioob (~kioob@luuna.daevel.fr) Quit (Quit: Leaving.)
[7:34] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[7:42] * Kioob (~kioob@luuna.daevel.fr) Quit (Remote host closed the connection)
[7:42] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[7:51] * tnt_ (~tnt@235.36-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[8:56] * Enoria (~Enoria@albaldah.dreamhost.com) Quit (Remote host closed the connection)
[8:58] * Enoria (~Enoria@albaldah.dreamhost.com) has joined #ceph
[9:07] * alexxy[home] (~alexxy@79.173.81.171) has joined #ceph
[9:07] * alexxy (~alexxy@79.173.81.171) Quit (Read error: Connection reset by peer)
[9:09] * Enoria (~Enoria@albaldah.dreamhost.com) Quit (Remote host closed the connection)
[9:21] * Enoria (~Enoria@albaldah.dreamhost.com) has joined #ceph
[9:30] * tnt_ (~tnt@235.36-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[9:33] * stxShadow (~jens@p4FD06078.dip.t-dialin.net) has joined #ceph
[9:39] * tnt_ (~tnt@212-166-48-236.win.be) has joined #ceph
[9:40] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[10:07] * fronlius (~fronlius@testing78.jimdo-server.com) has joined #ceph
[10:08] * u3q (~ben@uranus.tspigot.net) Quit (Ping timeout: 480 seconds)
[11:11] * lofejndif (~lsqavnbok@29.Red-81-39-149.dynamicIP.rima-tde.net) has joined #ceph
[11:16] * yoshi (~yoshi@p8031-ipngn2701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:40] * lofejndif (~lsqavnbok@29.Red-81-39-149.dynamicIP.rima-tde.net) Quit (Quit: Leaving)
[11:40] <filoo_absynth> anyone around who's a pro in radosgw/s3?
[11:42] <wido> filoo_absynth: what is the problem?
[11:43] <filoo_absynth> i have a couple rather basic questions about your design
[11:44] <filoo_absynth> i enabled radosgw yesterday on our test cluster, and it seems to use one pool for all the data. is that correct?
[11:44] <filoo_absynth> is there a way to define, say, one pool per user or something?
[11:45] <wido> filoo_absynth: The radosgw used to create multiple pools
[11:46] <wido> the problem was however that with a lot of users you could run into thousands of pools
[11:46] <wido> which will lead to cluster performance problems
[11:46] <filoo_absynth> yeah, i know you are discussing that with my colleagues right now
[11:46] <wido> the number of objects per pool is however not an issue
[11:46] <filoo_absynth> stxShadow and oliver
[11:46] <wido> Ah, ok
[11:46] <filoo_absynth> the point is, i want to offer multiple-protocol access to the buckets
[11:46] <wido> The RGW has undergone some changes lately, one of those was moving the data to a central pool
[11:46] <filoo_absynth> ftp, s3, whatever
[11:47] <filoo_absynth> is that a bad idea? is it feasible with radosgw?
[11:47] <wido> filoo_absynth: ftp? How would you do that?
[11:47] <filoo_absynth> that's yet another question
[11:47] <filoo_absynth> you don't see a possibility to do this with ceph and some frontends to ceph?
[11:48] <wido> Most FTP servers assume a local directory to browse. The POSIX filesystem Ceph itself is built on top of RADOS, but files are split into multiple objects
[11:48] <wido> to make a long story short, I don't see that feasible
[11:48] <wido> What you could do is generate an access key for customers and grant that key access to one or more pools
[11:49] <wido> you could give your customers this key and with the native librados APIs they can access the data, or use phprados or the Java/Python bindings
[11:49] <wido> But giving users access to the RGW data without going through the RGW is kind of tricky
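A minimal sketch of the direct RADOS access wido describes above, using the rados command-line tool; the user name, keyring path and pool name are placeholders for illustration, and the key would first have to be created and granted access to that pool:

    # store, list and fetch objects in a pool directly, bypassing radosgw entirely
    rados --id customer1 --keyring /etc/ceph/keyring.customer1 -p customer1-data put report.pdf ./report.pdf
    rados --id customer1 --keyring /etc/ceph/keyring.customer1 -p customer1-data ls
    rados --id customer1 --keyring /etc/ceph/keyring.customer1 -p customer1-data get report.pdf /tmp/report.pdf

Objects written this way are not visible through the S3 API, since the RGW keeps its own bucket and object metadata; that is the mixing-of-access-paths problem wido warns about above.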
[11:56] * jluis (~JL@89-181-150-46.net.novis.pt) has joined #ceph
[11:58] <wido> filoo_absynth: You are aware that Ceph is still undergoing heavy development?
[12:13] <filoo_absynth> yes, we are
[12:25] * jluis is now known as joao
[12:33] * joao (~JL@89-181-150-46.net.novis.pt) Quit (Ping timeout: 480 seconds)
[12:36] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[12:43] * joao (~JL@ace.ops.newdream.net) has joined #ceph
[13:26] * lofejndif (~lsqavnbok@29.Red-81-39-149.dynamicIP.rima-tde.net) has joined #ceph
[13:32] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[13:32] * rocky (~r.nap@188.205.52.204) Quit (Quit: leaving)
[13:32] * rosco (~r.nap@188.205.52.204) has joined #ceph
[13:47] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) has joined #ceph
[14:16] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[14:42] * lofejndif (~lsqavnbok@29.Red-81-39-149.dynamicIP.rima-tde.net) Quit (Quit: Leaving)
[15:18] * morse (~morse@supercomputing.univpm.it) Quit (Read error: Connection reset by peer)
[15:18] * _are_ (~quassel@vs01.lug-s.org) Quit (Read error: Connection reset by peer)
[15:18] * monrad-51468 (~mmk@domitian.tdx.dk) Quit (Read error: Connection reset by peer)
[15:19] * dwm__ (~dwm@2001:ba8:0:1c0:225:90ff:fe08:9150) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * al_ (quassel@niel.cx) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * fronlius (~fronlius@testing78.jimdo-server.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * eternaleye___ (~eternaley@195.215.30.181) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * Azrael (~azrael@terra.negativeblue.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * Meths (rift@2.25.214.237) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * joao (~JL@ace.ops.newdream.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * psomas_ (~psomas@inferno.cc.ece.ntua.gr) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * Enoria (~Enoria@albaldah.dreamhost.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * pruby (~tim@leibniz.catalyst.net.nz) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * gohko (~gohko@natter.interq.or.jp) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * dmick (~dmick@aon.hq.newdream.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * sagewk (~sage@aon.hq.newdream.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * gregaf (~Adium@aon.hq.newdream.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * ^conner (~conner@leo.tuc.noao.edu) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * lxo (~aoliva@lxo.user.oftc.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * nolan (~nolan@phong.sigbus.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * cattelan_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * rosco (~r.nap@188.205.52.204) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * sjust (~sam@aon.hq.newdream.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * yehudasa__ (~yehudasa@aon.hq.newdream.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * diegows (~diegows@50.57.106.86) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * iggy (~iggy@theiggy.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * Olivier_bzh (~langella@xunil.moulon.inra.fr) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * MK_FG (~MK_FG@188.226.51.71) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * darkfader (~floh@188.40.175.2) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * kirkland (~kirkland@74.126.19.140.static.a2webhosting.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * nhm (~nh@68.168.168.19) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * ottod (~ANONYMOUS@li127-75.members.linode.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * ajm (adam@adam.gs) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * Sargun (~sargun@208-106-98-2.static.sonic.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * ameen (~ameen@unstoppable.gigeservers.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * imjustmatthew (~matthew@pool-96-228-59-130.rcmdva.fios.verizon.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * guido (~guido@mx1.hannover.ccc.de) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * __jt__ (~james@jamestaylor.org) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * edwardw`away (~edward@ec2-50-19-100-56.compute-1.amazonaws.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * stxShadow (~jens@p4FD06078.dip.t-dialin.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * alexxy[home] (~alexxy@79.173.81.171) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * __nolife (~Lirezh@83-64-53-66.kocheck.xdsl-line.inode.at) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * jks (jks@193.189.93.254) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * jantje (~jan@paranoid.nl) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * DLange (~DLange@dlange.user.oftc.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * f4m8 (~f4m8@lug-owl.de) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * Meyer__ (meyer@c64.org) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * johnl_ (~johnl@2a02:1348:14c:1720:24:19ff:fef0:5c82) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * tnt_ (~tnt@212-166-48-236.win.be) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * filoo_absynth (~absynth@absynth.de) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * Ormod (~valtha@ohmu.fi) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * andret (~andre@pcandre.nine.ch) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * pulsar (6a5be70dba@176.9.203.178) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * Ludo (~Ludo@88-191-129-65.rev.dedibox.fr) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * Kioob (~kioob@luuna.daevel.fr) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * Anticimex (anticimex@netforce.csbnet.se) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * pmjdebruijn (~pascal@overlord.pcode.nl) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * NaioN (~stefan@andor.naion.nl) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * RupS (~rups@panoramix.m0z.net) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * ogelbukh (~weechat@nat3.4c.ru) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * vhasi (vhasi@vha.si) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * hijacker (~hijacker@213.91.163.5) Quit (kilo.oftc.net reticulum.oftc.net)
[15:19] * stass (stas@ssh.deglitch.com) Quit (kilo.oftc.net reticulum.oftc.net)
[15:20] * monrad (~mmk@domitian.tdx.dk) has joined #ceph
[15:20] * morse_ (~morse@supercomputing.univpm.it) has joined #ceph
[15:20] * _are__ (~quassel@vs01.lug-s.org) has joined #ceph
[15:20] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[15:20] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) has joined #ceph
[15:20] * rosco (~r.nap@188.205.52.204) has joined #ceph
[15:20] * joao (~JL@ace.ops.newdream.net) has joined #ceph
[15:20] * fronlius (~fronlius@testing78.jimdo-server.com) has joined #ceph
[15:20] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[15:20] * stxShadow (~jens@p4FD06078.dip.t-dialin.net) has joined #ceph
[15:20] * Enoria (~Enoria@albaldah.dreamhost.com) has joined #ceph
[15:20] * alexxy[home] (~alexxy@79.173.81.171) has joined #ceph
[15:20] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[15:20] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[15:20] * pruby (~tim@leibniz.catalyst.net.nz) has joined #ceph
[15:20] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[15:20] * __nolife (~Lirezh@83-64-53-66.kocheck.xdsl-line.inode.at) has joined #ceph
[15:20] * dmick (~dmick@aon.hq.newdream.net) has joined #ceph
[15:20] * sagewk (~sage@aon.hq.newdream.net) has joined #ceph
[15:20] * sjust (~sam@aon.hq.newdream.net) has joined #ceph
[15:20] * yehudasa__ (~yehudasa@aon.hq.newdream.net) has joined #ceph
[15:20] * gregaf (~Adium@aon.hq.newdream.net) has joined #ceph
[15:20] * ^conner (~conner@leo.tuc.noao.edu) has joined #ceph
[15:20] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[15:20] * imjustmatthew (~matthew@pool-96-228-59-130.rcmdva.fios.verizon.net) has joined #ceph
[15:20] * filoo_absynth (~absynth@absynth.de) has joined #ceph
[15:20] * pmjdebruijn (~pascal@overlord.pcode.nl) has joined #ceph
[15:20] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:20] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) has joined #ceph
[15:20] * diegows (~diegows@50.57.106.86) has joined #ceph
[15:20] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[15:20] * Ormod (~valtha@ohmu.fi) has joined #ceph
[15:20] * Anticimex (anticimex@netforce.csbnet.se) has joined #ceph
[15:20] * guido (~guido@mx1.hannover.ccc.de) has joined #ceph
[15:20] * nolan (~nolan@phong.sigbus.net) has joined #ceph
[15:20] * cattelan_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[15:20] * andret (~andre@pcandre.nine.ch) has joined #ceph
[15:20] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[15:20] * psomas_ (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[15:20] * Olivier_bzh (~langella@xunil.moulon.inra.fr) has joined #ceph
[15:20] * NaioN (~stefan@andor.naion.nl) has joined #ceph
[15:20] * iggy (~iggy@theiggy.com) has joined #ceph
[15:20] * jks (jks@193.189.93.254) has joined #ceph
[15:20] * RupS (~rups@panoramix.m0z.net) has joined #ceph
[15:20] * eternaleye___ (~eternaley@195.215.30.181) has joined #ceph
[15:20] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[15:20] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[15:20] * Meths (rift@2.25.214.237) has joined #ceph
[15:20] * ogelbukh (~weechat@nat3.4c.ru) has joined #ceph
[15:20] * pulsar (6a5be70dba@176.9.203.178) has joined #ceph
[15:20] * Ludo (~Ludo@88-191-129-65.rev.dedibox.fr) has joined #ceph
[15:20] * jantje (~jan@paranoid.nl) has joined #ceph
[15:20] * darkfader (~floh@188.40.175.2) has joined #ceph
[15:20] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[15:20] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[15:20] * vhasi (vhasi@vha.si) has joined #ceph
[15:20] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[15:20] * dwm__ (~dwm@2001:ba8:0:1c0:225:90ff:fe08:9150) has joined #ceph
[15:20] * kirkland (~kirkland@74.126.19.140.static.a2webhosting.com) has joined #ceph
[15:20] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[15:20] * al_ (quassel@niel.cx) has joined #ceph
[15:20] * __jt__ (~james@jamestaylor.org) has joined #ceph
[15:20] * f4m8 (~f4m8@lug-owl.de) has joined #ceph
[15:20] * nhm (~nh@68.168.168.19) has joined #ceph
[15:20] * ottod (~ANONYMOUS@li127-75.members.linode.com) has joined #ceph
[15:20] * ajm (adam@adam.gs) has joined #ceph
[15:20] * Sargun (~sargun@208-106-98-2.static.sonic.net) has joined #ceph
[15:20] * edwardw`away (~edward@ec2-50-19-100-56.compute-1.amazonaws.com) has joined #ceph
[15:20] * ameen (~ameen@unstoppable.gigeservers.net) has joined #ceph
[15:20] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[15:20] * stass (stas@ssh.deglitch.com) has joined #ceph
[15:20] * Meyer__ (meyer@c64.org) has joined #ceph
[15:20] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[15:20] * johnl_ (~johnl@2a02:1348:14c:1720:24:19ff:fef0:5c82) has joined #ceph
[15:40] * tnt_ (~tnt@212-166-55-100.win.be) has joined #ceph
[15:56] * aliguori (~anthony@32.97.110.59) has joined #ceph
[16:03] * nyeates (~nyeates@pool-173-59-237-75.bltmmd.fios.verizon.net) Quit (Quit: Zzzzzz)
[16:07] * tnt_ (~tnt@212-166-55-100.win.be) Quit (Ping timeout: 480 seconds)
[16:22] * tnt_ (~tnt@212-166-48-236.win.be) has joined #ceph
[16:23] * aliguori (~anthony@32.97.110.59) Quit (Ping timeout: 480 seconds)
[16:23] * aliguori (~anthony@32.97.110.65) has joined #ceph
[16:46] * elder (~elder@aon.hq.newdream.net) has joined #ceph
[17:09] * Kioob`Taff1 (~plug-oliv@local.plusdinfo.com) has joined #ceph
[17:21] * tnt_ (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:25] * cattelan_away is now known as cattelan
[17:36] * stxShadow (~jens@p4FD06078.dip.t-dialin.net) Quit (Remote host closed the connection)
[17:41] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:53] * tnt_ (~tnt@235.36-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[17:53] * The_Bishop (~bishop@cable-89-16-138-109.cust.telecolumbus.net) has joined #ceph
[17:55] * fred__ (~fred@80-219-180-134.dclient.hispeed.ch) has joined #ceph
[17:55] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[17:55] <fred__> Hi, I can't log into ceph tracker using my openid account, is it a known problem?
[17:56] <iggy> agree with tv about ditching btrfs devs
[18:01] * Tv|work (~Tv__@aon.hq.newdream.net) has joined #ceph
[18:02] * fred__ (~fred@80-219-180-134.dclient.hispeed.ch) Quit (Quit: Leaving)
[18:09] <yehudasa__> fred_: was it you who asked about the rgw copy issues?
[18:22] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[18:23] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit ()
[18:24] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[18:27] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[18:53] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[18:54] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[19:06] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[19:22] * aliguori (~anthony@32.97.110.65) Quit (Ping timeout: 480 seconds)
[19:27] * fronlius (~fronlius@testing78.jimdo-server.com) Quit (Quit: fronlius)
[19:35] * Kathor (~Kathor@80.81.57.66) has joined #ceph
[19:36] <Kathor> I'm trying to compile ceph from source on Centos 5.7, following the manual on the wiki page.
[19:37] <Kathor> First error: configure: error: no tcmalloc found (use --without-tcmalloc to disable)
[19:37] <Kathor> if I use the "--without-tcmalloc" option,
[19:38] <Kathor> then it's: configure: error: "Can't find boost statechart headers; need 1.34 or later"
[19:39] <Tv|work> Kathor: you need to install the build dependencies, see BuildRequires lines in ceph.spec.in
[19:39] <Tv|work> Kathor: for centos, you might be better off using the rpm build process
[19:40] <Kathor> I tried to install all of them, including the google-perftools-devel <- which provides tcmalloc.
[19:41] <Kathor> and i do have boost-devel package..
[19:42] <gregaf> Kathor: is the packaged version 1.34 or later? and does the default install actually include those statechart headers?
[19:44] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[19:44] <Kathor> the one thing: i'm using the x86_64 OS, but for google-perftools-devel i have only the i386 version installed. I dunno if this makes a difference..
[19:45] <gregaf> on CentOS you probably just want to build without perftools; it has a few problems for them and the 64-bit version isn't even packaged
[19:46] <gregaf> you'll need to solve the statechart issue either way :)
[19:47] <Kathor> oh, yes, I do have 1.33 version of "boost-devel", not 1.34.. thanks
[19:47] <Kathor> I guess, I can try to compile an earlier version of ceph ?
[19:48] <gregaf> no, it required 1.34 when it was inserted
[19:48] <gregaf> I'm not sure if that's a real requirement though…sjust?
[19:49] <nhm> Kathor: btw, what kernel are you on?
[19:49] <Kathor> 2.6.18-274.18.1.el5xen
[19:50] <Kathor> is there some way I can try to compile it, even with 1.33 ?
[19:55] <Kathor> btw, i tried "rpmbuild -tb ~/rpmbuild/SOURCES/ceph-0.42.tar.gz", and it throws "libcurl-devel is needed by ceph-0.42-6.x86_64". On Centos5, there is a curl-devel package, but not libcurl-devel.
[19:55] <Tv|work> gregaf: 1528d2c42b8eee9379902855429d38bc33a4d026
[19:56] <Kathor> I don't know how to avoid/skip those errors in both cases..
[19:56] <Tv|work> gregaf: vs ceph.spec.in not specifying *any* version numbers
[19:56] <gregaf> Kathor: unfortunately, we do require v1.34 or better
[19:57] <Kathor> ok
[19:57] <yehudasa__> Kathor: you can try installing boost from source
[19:57] * aliguori (~anthony@32.97.110.65) has joined #ceph
[19:58] <Kathor> ok, thanks for help. i'll try to setup boost >= 1.34.
[19:59] <Kathor> what about "libcurl-devel" requirement? if I have curl-devel - is it the same?
[19:59] <gregaf> Kathor: probably?
[20:00] <gregaf> unfortunately we don't have anybody on RPM where we live, so all that stuff is pretty much 100% community maintained :)
[20:00] <Kathor> ok, thanks for help.
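A rough sketch of the workaround discussed above, untested on CentOS 5.7 (the EPEL package name and include/library paths are assumptions): install a boost of 1.34 or later alongside the stock 1.33, then configure without tcmalloc since the 64-bit google-perftools isn't packaged:

    yum install boost141-devel                  # assumption: a parallel-installable boost 1.41 from EPEL
    ./autogen.sh
    CPPFLAGS="-I/usr/include/boost141" LDFLAGS="-L/usr/lib64/boost141" \
        ./configure --without-tcmalloc
    make

Building boost itself from source, as suggested above, works just as well; the only hard requirement is that configure finds the statechart headers from a 1.34-or-later boost.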
[20:02] * fronlius (~fronlius@f054111067.adsl.alicedsl.de) has joined #ceph
[20:04] * iggy shivers at the thought of trying to set up ceph on something as old and crufty as RH5.7
[20:06] <nhm> iggy: technically it was released like 6 months ago.. ;)
[20:06] <nhm> well, centos 5.7 was.
[20:07] <iggy> yeah, with 75 more patches to its 2.6.18 kernel...
[20:08] <nhm> Yeah, they've gone a bit overboard regarding that imho.
[20:12] <sagewk> yehudasa_, sjust: how does wip-2118 look?
[20:15] <sjust> looks about right to me
[20:15] * BManojlovic (~steki@212.200.243.16) has joined #ceph
[20:31] <joshd> sagewk: wip-2118 doesn't compile
[20:32] <yehudasa__> sagewk: looked ok for me too
[20:33] <yehudasa__> joshd: does it compile if you run make under src/leveldb?
[20:33] <yehudasa__> I remember there was this issue, don't know if sam fixed it
[20:34] <joshd> yehudasa__: unrelated to leveldb, there's an incorrect variable name (fn should be nosnapfn)
[20:35] <yehudasa__> wait a second.. I was referring to a different branch
[20:49] * lofejndif (~lsqavnbok@23.Red-88-11-191.dynamicIP.rima-tde.net) has joined #ceph
[20:54] <sagewk> joshd: fixed, and added cleanup of nosnap marker
[20:59] <joshd> sagewk: does this handle upgrades gracefully and how was it tested?
[21:00] <sagewk> an upgrade to new code that coincides with the conf change we're guarding against will mean the check won't happen. i'm not worried about that, though.. it's guarding against a weird corner case
[21:01] <sagewk> my only test was to verify nosnap was created.
[21:02] <sagewk> not sure we can easily build a teuth test bc we'd need to adjust ceph.conf midway through, or pass extra args when we restart the daemon
[21:02] <sagewk> i guess the latter would be the way to go
[21:04] <joshd> yeah, we could add extra args easily
[21:05] * stxShadow (~jens@ip-88-153-224-220.unitymediagroup.de) has joined #ceph
[21:07] <joshd> more generally though, is there a stress test to reproduce the problem and verify this fixes it (or at least makes it much less likely)?
[21:13] <sagewk> this eliminates the race completely (the window between commit_op_seq update and snapshot ioctl)
[21:14] <sagewk> i think the interesting cases to test are that it properly catches the configuration change and lets you override it
[21:17] <stxShadow> is there any wiki article which describes the expansion of osds with the "ceph crush add" command ? As i understand the command, there is no need to edit the crushmap anymore .... right ?
[21:21] <gregaf> stxShadow: unfortunately I don't think there is
[21:21] <gregaf> a wiki article, I mean
[21:21] <gregaf> but I believe you'll just follow the instructions at http://ceph.newdream.net/wiki/OSD_cluster_expansion/contraction
[21:22] <gregaf> and then instead of editing the crush map, run "ceph crush add osd"
[21:22] <stxShadow> without any parameters ?
[21:22] <gregaf> wait, no…where did you see that command?
[21:23] <stxShadow> http://ceph.newdream.net/docs/latest/control/
[21:23] <gregaf> ah, there we go
[21:23] <gregaf> yeah, use that syntax
[21:25] <stxShadow> ok .... very nice .... seems a lot easier for me
[21:26] <stxShadow> i also found a posting of sage which describes adding osds on the fly ..... but i wasn't sure that this is the recommended way
[21:27] <stxShadow> ;)
[21:29] <gregaf> hmm, which post did you see?
[21:29] <stxShadow> http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/4391
[21:29] <stxShadow> this one
[21:30] <Tv|work> current best advice on adding osds is unfortunately well hidden in the chef cookbook :(
[21:30] <Tv|work> https://github.com/NewDreamNetwork/ceph-cookbooks/blob/master/ceph/recipes/bootstrap_osd.rb
[21:31] <Tv|work> "ceph osd create" is a really welcome addition
[21:31] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[21:31] <Tv|work> we just need to finish that work, and document the ceph subcommands better
[21:31] <Tv|work> http://ceph.newdream.net/docs/latest/control/#osd-subsystem is a start
[21:32] <Tv|work> http://ceph.newdream.net/docs/latest/ops/manage/grow/osd/ is sadly empty
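For reference, the sequence that the cookbook and the control page boil down to looks roughly like this (a sketch only; the id, weight and hostname are placeholders, and the exact crush arguments should be checked against the pages linked above):

    ceph osd create                        # allocates and prints the next free osd id
    ceph-osd -i <id> --mkfs --mkkey        # initialize the new osd's data directory and key
    ceph auth add osd.<id> osd 'allow *' mon 'allow rwx' -i /path/to/osd.<id>.keyring
    ceph osd crush add <id> osd.<id> 1.0 pool=default host=<hostname>
    # then start the osd daemon on that host

"ceph osd create" hands out the id and "ceph osd crush add" takes the place of the manual crushmap edit stxShadow was asking about.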
[21:33] <darkfader> Tv|work: i've sped people through our class the last two days, so we'll have almost all of tomorrow for building and breaking ceph :)
[21:35] <darkfader> Tv|work: you should add some marker for docs that are already written
[21:35] <Tv|work> darkfader: i try to have todo notes on the ones that aren't
[21:35] <darkfader> makes browsing them less discouraging
[21:35] <Tv|work> but that one got left behind
[21:35] <Tv|work> yeah sorry about that
[21:35] <darkfader> np
[21:35] <Tv|work> i needed to flesh out the structure, so now there's placeholders all over it
[21:35] <Tv|work> and i haven't put time into the docs in ages
[21:36] <Tv|work> the good news is, we have a fulltime tech writer starting soon
[21:36] <darkfader> i don't mind at all i like the look of it
[21:36] <darkfader> cool
[21:36] <stxShadow> Tv|work: thanks for the link ;)
[21:36] <Tv|work> i think sphinx is an amazing toolchain to write docs in
[21:36] <stxShadow> that helps a lot
[21:36] <darkfader> Tv|work: we have our own publishing system, can be a little tricky at times
[21:37] <darkfader> @home i just bought a confluence starter license
[21:37] <darkfader> love it, totally.
[21:37] <darkfader> but of course that's not a toolchain at all
[21:37] <Tv|work> i liked sphinx enough that i actually reimplemented my personal website in it, without changing the layout at all.. ( http://eagain.net/ )
[21:38] <darkfader> oh a presentation about teuthology!
[21:39] <Tv|work> darkfader: it relies a little bit too much on me yammering things, so i haven't pushed it to a bigger audience
[21:39] <darkfader> i see
[21:39] <darkfader> but still nice to know there is some
[21:40] <stxShadow> and just another question: i setup my cluster with 2 active mds and 1 standby .... lately i saw a posting that 2 or more active mds's are not recommended and could lead to instability ...... any chance to change this on an active cluster ?
[21:40] <Tv|work> stxShadow: http://ceph.newdream.net/docs/latest/ops/manage/grow/mds/#removing-mdses
[21:41] <Tv|work> stxShadow: and "ceph mds set_max_mds 1" to say you only want one active
[21:43] <stxShadow> ok ..... i dont want to remove one .... only set one to standby .... -> "ceph mds set_max_mds 1" should be the way ....
[21:43] <Tv|work> you need to tell the currently active one to stop gracefully
[21:43] <Tv|work> there's two inter-dependent things happening here
[21:44] <Tv|work> (and yes we already talked about this being not ideal.. will get fixed at some point)
[21:45] <stxShadow> is there a chance to break the cluster with that ?
[21:46] <Tv|work> it's multi-mds so yes ;)
[21:47] * Tv|work . o O ( I hear Greg typing and I keep waiting for the response... ;)
[21:47] <stxShadow> ok .... so i will test it on our test cluster first ....
[21:47] <gregaf> lol, wasn't typing here
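A sketch of the two-step shrink described above; the rank number is an assumption, and the removing-mdses page linked above has the details:

    ceph mds set_max_mds 1     # declare that only one active mds is wanted
    ceph mds stop 1            # ask the mds holding rank 1 to stop gracefully and fall back to standby

The order matters: lowering max_mds on its own does not stop an mds that is already active, which is the inter-dependency mentioned above.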
[21:48] <gregaf> sagewk, whoever: you want to review wip-1789?
[21:48] <gregaf> not thoroughly tested but I did force a machine to go through a basic slurp and that worked fine
[21:51] <sagewk> gregaf: one nit, otherwise looks good
[21:52] <gregaf> heh, okay
[21:52] <gregaf> thought about it but was lazy since it's a private and local function ;)
[21:52] <sagewk> sjust: might be worth looking at coverage for your testing on the new DBObjectMap code?
[21:52] <gregaf> okay if I push to master after changing that?
[21:52] <sagewk> sure
[21:52] <sjust> sagewk: yeah, most likely
[21:59] <stxShadow> one last question for today (i hope) -> i've got inconsistencies on one of my clusters ..... is there any recommended way to track them down ?
[22:00] <stxShadow> i know: "ceph pg dump -"
[22:00] <stxShadow> to list the bad pgs
[22:01] <stxShadow> sadly "ceph pg repair x" will crash the responding osd
[22:03] <gregaf> stxShadow: right now dealing with that pretty much requires dev time, unfortunately...
[22:03] <gregaf> one thing to check is if all the inconsistent PGs have objects which are triggering your "bad locator" warnings, though
[22:04] <gregaf> when they're marked inconsistent it means that different OSDs have different metadata about the objects, which is…bad
[22:10] <stxShadow> hmmm ... ok ..... should the inconsistencies disappear if i delete the corresponding data ?
[22:11] <gregaf> well…probably
[22:11] <gregaf> but we'd rather tease out what caused them first
[22:12] <sagewk> sjust: you've reviewed 818d72ed90b3aacabb168eeb5133a147b226874b ?
[22:13] <sjust> yes
[22:13] <sjust> I'll add a reviewed-by when I clean up the commit comments again
[22:14] <gregaf> oh drat, I forgot to add Sage's reviewed-by on that mon fix :(
[22:14] <gregaf> sagewk: you want to start that messenger thing now?
[22:14] <sagewk> yeah ~5 min?
[22:15] <gregaf> sure
[22:15] <stxShadow> gregaf: oliver wrote a test script to verify written data inside of rbds ..... we already were able to "kill" the boot sector of 3 rbd images (always only the boot sector) .... if we know how to reproduce it .... we will post it here
[22:15] <sagewk> sjust: or squash it down.. the original change is mixed into the GET_HEADER patch, should probably be broken out of that
[22:16] <sjust> yeah
[22:16] <sagewk> stxshadow: awesome
[22:16] <gregaf> that's odd; I wonder if there's some race going on with mount/unmount
[22:30] <stxShadow> good night everyone .....
[22:32] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[22:32] * stxShadow (~jens@ip-88-153-224-220.unitymediagroup.de) Quit (Quit: bye bye !! )
[22:39] <darkfader> Tv|work: http://ceph.newdream.net/docs/latest/rec/filesystem/ from what i read here over the last 1-2 months i'd ask you to consider simply adding xfs to the list
[22:40] <darkfader> we all hope for btrfs, we all know ext4 is standard, but if you're labeling it a recommendation then take the thing that has the least chance of blowing up, over having the best features :)
[22:40] <darkfader> </nitpick>
[22:41] <Tv|work> sagewk: the docs are getting out of date :(
[22:42] <Tv|work> we've talked today about mds, osd addition, filesystems, and all of those conversations have led to docs failing us
[22:42] <darkfader> Tv|work: $we also don't have time for a standard periodic review, and our docs support multiple versions via the same page
[22:43] <darkfader> it's a race we lose
[22:43] <Tv|work> yeah, Ceph is still new enough that there isn't that whole 1.x vs 2.x split
[22:43] <Tv|work> thankfully
[22:43] <Tv|work> though i did prepare for just forking the docs with the code, that's why the url is /docs/latest
[22:44] <darkfader> we can blame ourselves - didn't actively contact customers about their upgrade plans and such
[22:44] <darkfader> yes, i noticed the /latest/
[22:44] <darkfader> so they didn't upgrade since stuff was working for them
[22:44] <darkfader> and so they have totally outdated production systems
[22:44] <Tv|work> darkfader: lesson learned: always leave harmless little bugs in the stable branch ;)
[22:44] <darkfader> hahaa
[22:45] <darkfader> OK. i'll never forget that from now on
[23:49] * ^conner (~conner@leo.tuc.noao.edu) Quit (Read error: Operation timed out)
[23:54] * lofejndif (~lsqavnbok@23.Red-88-11-191.dynamicIP.rima-tde.net) Quit (Quit: Leaving)
[23:55] * lofejndif (~lsqavnbok@09GAADGC5.tor-irc.dnsbl.oftc.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.