#ceph IRC Log

IRC Log for 2013-05-06

Timestamps are in GMT/BST.

[0:04] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:04] * loicd (~loic@magenta.dachary.org) has joined #ceph
[0:05] * lofejndif (~lsqavnbok@659AACA16.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[0:15] * tnt (~tnt@109.130.111.54) Quit (Ping timeout: 480 seconds)
[0:16] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Quit: Leaving.)
[0:21] * BillK (~BillK@58-7-104-61.dyn.iinet.net.au) has joined #ceph
[0:40] * darkfaded (~floh@88.79.251.60) Quit (Quit: leaving)
[0:40] * darkfader (~floh@88.79.251.60) has joined #ceph
[0:45] * BillK (~BillK@58-7-104-61.dyn.iinet.net.au) Quit (Remote host closed the connection)
[0:47] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[0:53] * BillK (~BillK@58-7-104-61.dyn.iinet.net.au) has joined #ceph
[0:56] * rustam (~rustam@94.15.91.30) has joined #ceph
[0:58] * diegows (~diegows@190.190.2.126) has joined #ceph
[1:00] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[1:25] <joshd1> leseb1: what's up?
[1:26] <leseb1> joshd: I was wondering if you already came across this issue -> https://bugs.launchpad.net/horizon/+bug/1159624
[1:27] <leseb1> somehow my image snapshots are not shown on the dashboard, but from the CLI everything is fine
[1:27] <leseb1> joshd1: ^
[1:29] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[1:30] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:31] <joshd1> I haven't noticed that myself, but I usually use the cli, and volume snapshots instead of instance snapshots
[1:33] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[1:39] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[1:44] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[1:44] * loicd (~loic@magenta.dachary.org) has joined #ceph
[1:48] * leseb1 (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[1:53] * danieagle (~Daniel@186.214.61.67) has joined #ceph
[1:53] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[2:01] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[2:01] * ChanServ sets mode +o scuttlemonkey
[2:01] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[2:05] * jpieper (~josh@209-6-205-161.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) has joined #ceph
[2:07] * rustam (~rustam@94.15.91.30) has joined #ceph
[2:26] * coyo (~unf@pool-71-164-242-68.dllstx.fios.verizon.net) has joined #ceph
[2:32] * DarkAce-Z (~BillyMays@50.107.54.92) has joined #ceph
[2:35] * DarkAceZ (~BillyMays@50.107.54.92) Quit (Ping timeout: 480 seconds)
[2:45] * danieagle (~Daniel@186.214.61.67) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[2:53] * madkiss (~madkiss@2001:6f8:12c3:f00f:59f9:370a:1fc4:518f) has joined #ceph
[3:00] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[3:02] * madkiss (~madkiss@2001:6f8:12c3:f00f:59f9:370a:1fc4:518f) Quit (Ping timeout: 480 seconds)
[3:04] * rustam (~rustam@94.15.91.30) has joined #ceph
[3:54] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[4:02] * Dark-Ace-Z (~BillyMays@50.107.54.92) has joined #ceph
[4:02] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[4:05] * DarkAce-Z (~BillyMays@50.107.54.92) Quit (Ping timeout: 480 seconds)
[4:12] * john_barbee_ (~jbarbee@c-98-226-73-253.hsd1.in.comcast.net) has joined #ceph
[4:17] * TiCPU|Home (jerome@p4.i.ticpu.net) Quit (Ping timeout: 480 seconds)
[4:54] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[4:56] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[5:00] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[5:00] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Read error: Connection reset by peer)
[5:03] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit ()
[5:16] * rustam (~rustam@94.15.91.30) has joined #ceph
[5:17] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[5:20] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[5:27] <matt_> Does anyone happen to know how to inject a no-deep-scrub option into an osd?
[5:30] <lurbs> Not sure how to disable it, just how to change the interval.
[5:31] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[5:32] <lurbs> ceph osd tell $osd injectargs '--osd_deep_scrub_interval = $interval', I think.
[5:32] * loicd (~loic@magenta.dachary.org) has joined #ceph
[5:32] * rustam (~rustam@94.15.91.30) has joined #ceph
[5:33] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[5:33] <matt_> lurbs, I'll give it a go. Thank you
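A hedged sketch of both routes: the interval change lurbs suggests, plus the cluster-wide nodeep-scrub flag matt_ is after, which is an assumption here and may not exist on bobtail-era releases:

    ceph osd tell 0 injectargs '--osd-deep-scrub-interval 2592000'  # e.g. 30 days, per-osd
    ceph osd set nodeep-scrub      # cluster-wide flag; assumption for newer releases
    ceph osd unset nodeep-scrub    # re-enable deep scrubbing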
[5:40] <BillK> small ceph single host running 3 mon/mds/osd had a bad system crash; started ceph but it's just sitting there (30 mins now) with the following in the mds log:
[5:40] <BillK> 2013-05-06 11:38:44.377197 7ffc3f40f700 0 -- 192.168.44.90:6800/15538 >> 192.168.44.90:6806/11785 pipe(0x139ac80 sd=19 :45288 s=1 pgs=0 cs=0 l=1).connect claims to be 192.168.44.90:6806/15845 not 192.168.44.90:6806/11785 - wrong node!
[5:40] <BillK> how can I kick it into syncing?
[5:50] * nibon7 (~nibon7@58.20.234.212) has joined #ceph
[5:50] * nibon7 (~nibon7@58.20.234.212) Quit ()
[5:51] <matt_> BillK, are your monitors in quorum? 'ceph quorum_status'
[5:55] * nibon7 (~nibon7@58.20.234.212) has joined #ceph
[5:56] <BillK> matt yes: monmap e1: 3 mons at {a=192.168.44.90:6789/0,b=192.168.44.90:6788/0,c=192.168.44.90:6786/0}, election epoch 55922, quorum 0,1,2 a,b,c
[6:04] * nibon7 (~nibon7@58.20.234.212) Quit (Ping timeout: 480 seconds)
[6:07] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:07] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[6:10] * rustam (~rustam@94.15.91.30) has joined #ceph
[6:12] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[6:25] * rustam (~rustam@94.15.91.30) has joined #ceph
[6:26] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[6:33] * bergerx (~bergerx@94.54.248.128) Quit (Quit: Leaving)
[6:41] * alrs (~lars@ip-64-134-233-23.public.wayport.net) has joined #ceph
[6:45] * rustam (~rustam@94.15.91.30) has joined #ceph
[6:46] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[6:49] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:52] * mega_au (~chatzilla@84.244.21.218) has joined #ceph
[7:03] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:03] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:05] * tkensiski (~tkensiski@c-98-234-160-131.hsd1.ca.comcast.net) has joined #ceph
[7:05] * tkensiski (~tkensiski@c-98-234-160-131.hsd1.ca.comcast.net) has left #ceph
[7:07] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) has joined #ceph
[7:12] * tnt (~tnt@109.130.111.54) has joined #ceph
[7:16] <nigwil> our Ceph design is making progress. We started with three dedicated MON servers, but as we're being squeezed on chassis count we're looking at moving the MONs back onto some of the storage nodes, which have some free memory and CPUs "allocated" for this. This Ceph cluster will be backing an OpenStack deployment. good or bad idea?
[7:16] <nigwil> good or bad giving up the dedicated MON servers I mean
[7:16] * rustam (~rustam@94.15.91.30) has joined #ceph
[7:17] <nigwil> the cluster has a dedicated cluster LAN and separate client LAN so plenty of network bandwidth
[7:17] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[7:21] * brian_appscale (~brian@wsip-72-215-161-77.sb.sd.cox.net) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * __jt__ (~james@rhyolite.bx.mathcs.emory.edu) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * uli (~uli@mail1.ksfh-bb.de) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * wido (~wido@rockbox.widodh.nl) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * JohansGlock_ (~quassel@kantoor.transip.nl) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * mjblw (~mbaysek@wsip-174-79-34-244.ph.ph.cox.net) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * scheuk (~scheuk@204.246.67.78) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * Gugge-47527 (gugge@kriminel.dk) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * prudhvi (~prudhvi@tau.supr.io) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * NaioN (stefan@andor.naion.nl) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * Tribaal (uid3081@hillingdon.irccloud.com) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * Meths (rift@2.25.193.124) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * raso (~raso@deb-multimedia.org) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (resistance.oftc.net oxygen.oftc.net)
[7:21] * capri (~capri@212.218.127.222) has joined #ceph
[7:22] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[7:22] * mjblw (~mbaysek@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[7:22] * brian_appscale (~brian@wsip-72-215-161-77.sb.sd.cox.net) has joined #ceph
[7:22] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[7:22] * __jt__ (~james@rhyolite.bx.mathcs.emory.edu) has joined #ceph
[7:22] * uli (~uli@mail1.ksfh-bb.de) has joined #ceph
[7:22] * wido (~wido@rockbox.widodh.nl) has joined #ceph
[7:22] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[7:22] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[7:22] * JohansGlock_ (~quassel@kantoor.transip.nl) has joined #ceph
[7:22] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[7:22] * Tribaal (uid3081@hillingdon.irccloud.com) has joined #ceph
[7:22] * NaioN (stefan@andor.naion.nl) has joined #ceph
[7:22] * Meths (rift@2.25.193.124) has joined #ceph
[7:22] * prudhvi (~prudhvi@tau.supr.io) has joined #ceph
[7:22] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[7:22] * raso (~raso@deb-multimedia.org) has joined #ceph
[7:22] * scheuk (~scheuk@204.246.67.78) has joined #ceph
[7:25] * alrs_ (~lars@ip-64-134-233-23.public.wayport.net) has joined #ceph
[7:25] * alrs (~lars@ip-64-134-233-23.public.wayport.net) Quit (Read error: Connection reset by peer)
[7:33] * joshd1 (~jdurgin@2602:306:c5db:310:29c4:15ac:5ad:9e70) Quit (Ping timeout: 480 seconds)
[7:37] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[7:37] * coyo (~unf@00017955.user.oftc.net) Quit (Quit: F*ck you, I'm a daemon.)
[7:46] * Havre (~Havre@2a01:e35:8a2c:b230:39b9:c613:dfc1:799f) Quit (Ping timeout: 480 seconds)
[8:01] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) Quit (Quit: noahmehl)
[8:02] * rustam (~rustam@94.15.91.30) has joined #ceph
[8:03] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[8:04] * alrs_ (~lars@ip-64-134-233-23.public.wayport.net) Quit (Ping timeout: 480 seconds)
[8:06] * tnt (~tnt@109.130.111.54) Quit (Ping timeout: 480 seconds)
[8:12] * rustam (~rustam@94.15.91.30) has joined #ceph
[8:14] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[8:16] * alrs (~lars@207.145.190.50) has joined #ceph
[8:29] * alrs (~lars@207.145.190.50) Quit (Ping timeout: 480 seconds)
[8:30] * rustam (~rustam@94.15.91.30) has joined #ceph
[8:32] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[8:32] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:33] * fridad (~fridad@b.clients.kiwiirc.com) has joined #ceph
[8:33] * Havre (~Havre@2a01:e35:8a2c:b230:307b:cbf1:6dd5:5164) has joined #ceph
[8:40] * rustam (~rustam@94.15.91.30) has joined #ceph
[8:41] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[8:43] * bergerx_ (~bekir@78.188.101.175) has joined #ceph
[9:03] * rustam (~rustam@94.15.91.30) has joined #ceph
[9:05] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[9:10] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[9:11] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:15] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:22] * rustam (~rustam@94.15.91.30) has joined #ceph
[9:22] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[9:24] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[9:26] * xiao (~xiao@61.187.54.9) has joined #ceph
[9:26] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:27] * leseb (~Adium@83.167.43.235) has joined #ceph
[9:33] * rustam (~rustam@94.15.91.30) has joined #ceph
[9:34] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[9:37] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[9:37] * syed_ (~chatzilla@180.151.28.156) has joined #ceph
[9:39] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[9:50] * NyanDog (~q@103.29.151.3) has joined #ceph
[9:52] <NyanDog> hi, i have added a new mon (ceph mon_status confirms it) and i have copied ceph.conf to all cluster nodes and clients. is every client using librbd immediately aware of, and capable of using, this new mon?
[9:55] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[9:58] <matt_> NyanDog, yep. They will receive the new monmap automatically
[10:04] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[10:05] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[10:07] <NyanDog> matt_: thanks for the answer, it's really helpful
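A minimal ceph.conf sketch for the bootstrap side: already-connected librbd clients pick up the new monmap on their own as matt_ says, but brand-new clients still read the mon list from ceph.conf, so the new mon should be listed there too. Section name, hostname, and address below are hypothetical:

    [mon.d]
        host = mon4                      # hypothetical hostname
        mon addr = 192.168.0.14:6789     # hypothetical address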
[10:08] * rustam (~rustam@94.15.91.30) has joined #ceph
[10:09] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[10:09] * l0nk (~alex@83.167.43.235) has joined #ceph
[10:10] * syed_ (~chatzilla@180.151.28.156) Quit (Quit: ChatZilla 0.9.90 [Firefox 19.0.2/20130307122351])
[10:15] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[10:17] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[10:24] * rustam (~rustam@94.15.91.30) has joined #ceph
[10:25] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[10:27] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:28] * l0nk (~alex@83.167.43.235) Quit (Quit: Leaving.)
[10:45] * sleinen (~Adium@2001:620:0:46:e053:f362:a2a7:c9a0) has joined #ceph
[10:52] * rustam (~rustam@94.15.91.30) has joined #ceph
[10:53] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[10:54] * v0id (~v0@212-183-97-24.adsl.highway.telekom.at) has joined #ceph
[10:58] <niklas> Hi. I'm trying to talk to my radosgw from Java using the Amazon AWS tools. Sometimes I get "Leerstellen erforderlich zwischen publicId und systemId" (space required between publicId and systemId) as an error message
[10:58] <niklas> I do get the same message when radosgw is not started, so I guess it kind of crashes…
[10:59] <niklas> apache error log says: FastCGI: comm with server "/var/www/s3gw.fcgi" aborted: idle timeout (30 sec) \n FastCGI: incomplete headers (0 bytes) received from server "/var/www/s3gw.fcgi"
[11:01] * vo1d (~v0@212-183-100-112.adsl.highway.telekom.at) Quit (Ping timeout: 480 seconds)
[11:03] <andreask> niklas: have you double-checked the correctness of your rewrite rules in your webserver?
[11:06] <niklas> nope, I copied them from the website
[11:06] <niklas> and they seem to work most of the time^^
[11:06] <niklas> But I'll double-check it now
[11:07] <niklas> RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
[11:07] <niklas> looks good to me, but I didn't look into the fcgi script…
[11:08] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[11:13] <andreask> niklas: this should also do : RewriteRule ^/(.*) /radosgw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
[11:13] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Ping timeout: 480 seconds)
[11:13] <andreask> niklas: or s3gw.fcgi ... however you named it
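A hedged sketch of the surrounding Apache vhost for the rule above, assuming mod_fastcgi; the server name and socket path are assumptions, and the .fcgi name should match your setup:

    <VirtualHost *:80>
        ServerName rgw.example.com        # hypothetical
        RewriteEngine On
        RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
        FastCgiExternalServer /var/www/s3gw.fcgi -socket /var/run/ceph/radosgw.sock   # socket path is an assumption
    </VirtualHost>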
[11:18] * rustam (~rustam@94.15.91.30) has joined #ceph
[11:18] <niklas> andreask: I may have found the problem: I am testing on VMs, and forgot that radosgw does not normally delete objects immediately --> my OSDs have 0B of space left…
[11:18] <niklas> fail
[11:18] <andreask> niklas: oh, welll ...
[11:19] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[11:22] <andreask> niklas: hmm ... bobtail? ... triggered the garbage collector manually?
[11:22] * xiao (~xiao@61.187.54.9) Quit (Read error: Connection reset by peer)
[11:25] <niklas> andreask: 0.56.4
[11:26] <niklas> how do I get rid of objects? Deleting the bucket with --purge-objects does not remove them^^
[11:29] <andreask> niklas: there are some tunables for the garbage collector ... e.g. the time the collector has to wait after deletion before it actually removes objects ...
[11:29] <andreask> niklas: rgw gc obj min wait .... 2h by default
[11:30] * rustam (~rustam@94.15.91.30) has joined #ceph
[11:30] <andreask> niklas: and the collector runs by default every hour ...
[11:33] <niklas> thx
[11:34] <andreask> yw
[11:37] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[11:37] <niklas> Trying to update ceph to newest version:
[11:37] <niklas> http://pastebin.de/34254
[11:38] <niklas> can be fixed by "apt-get install ceph" but is still not nice…
[11:41] * madkiss (~madkiss@p5DCA3735.dip0.t-ipconnect.de) has joined #ceph
[11:52] * v0id (~v0@212-183-97-24.adsl.highway.telekom.at) Quit (Quit: Verlassend)
[11:53] <niklas> andreask: can I trigger the garbage collection somehow?
[11:55] <andreask> niklas: radosgw-admin gc process
[11:56] <niklas> thanks
[12:00] <andreask> niklas: though it still obeys the minimal timings for deleted objects
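A hedged ceph.conf sketch of the gc knobs just discussed, under a hypothetical section name; the option names are real rgw settings of that era, the values are illustrative:

    [client.radosgw.gateway]            # hypothetical section name
        rgw gc obj min wait = 300       # default 7200, the 2h andreask mentions
        rgw gc processor period = 600   # assumption: interval between gc runs (default 3600)

Either way, radosgw-admin gc list should show what is still pending before a manual radosgw-admin gc process.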
[12:00] * jgallard (~jgallard@gw-aql-129.aql.fr) has joined #ceph
[12:00] <loicd> jgallard: \o
[12:00] <jgallard> hi loicd !
[12:01] <jgallard> hi all
[12:01] <loicd> jgallard: RBD uses multiple objects to store the data, not just one ( http://ceph.com/docs/master/architecture/#how-ceph-clients-stripe-data )
[12:01] * jgallard is reading : http://ceph.com/docs/master/architecture/#how-ceph-clients-stripe-data
[12:01] <jgallard> yes
[12:02] <jgallard> and an object, can contain data from several RBD?
[12:02] <loicd> no
[12:02] <jgallard> ok, thanks for your answer and for the link :)
[12:03] <loicd> you are welcome. I'm curious about why you asked :-)
[12:05] <jgallard> as you know, I continue to work on https://etherpad.openstack.org/instance-volume-collocation
[12:05] <jgallard> I want to be sure to understand ceph architecture
[12:06] <loicd> I see
[12:20] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[12:22] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Quit: Leaving.)
[12:23] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[12:26] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Quit: wogri_risc)
[12:30] * madkiss (~madkiss@p5DCA3735.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[12:32] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[12:45] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[12:50] <niklas> andreask: I tried your rewrite rule, but still get the same error message for like 10% of the requests
[12:51] <andreask> niklas: anything specific on that requests?
[12:55] * athrift (~nz_monkey@222.47.255.123.static.snap.net.nz) Quit (Remote host closed the connection)
[12:56] <niklas> apache log says: [error] [client 192.168.58.20] FastCGI: comm with server "/var/www/s3gw.fcgi" aborted: error parsing headers: duplicate header 'Status'
[12:58] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Quit: I cna ytpe 300 wrods pre mniuet!!!)
[12:58] <niklas> ok, I lost "rgw print continue = false" from my ceph config
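For reference, a minimal sketch of that setting back in ceph.conf (section name hypothetical):

    [client.radosgw.gateway]        # hypothetical section name
        rgw print continue = false  # avoids 100-continue trouble with plain mod_fastcgi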
[13:01] * aliguori (~anthony@12.151.150.4) has joined #ceph
[13:01] * diegows (~diegows@190.190.2.126) has joined #ceph
[13:05] <mrjack> does 0.56.5 include the patch to show also read op/s and read byte / sec ?
[13:05] <mrjack> when doing ceph -w?
[13:05] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[13:07] * madkiss (~madkiss@p5DCA3735.dip0.t-ipconnect.de) has joined #ceph
[13:09] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[13:12] * l0nk (~alex@83.167.43.235) has joined #ceph
[13:13] * l0nk (~alex@83.167.43.235) Quit ()
[13:23] * john_barbee_ (~jbarbee@c-98-226-73-253.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[13:26] * bergerx_ (~bekir@78.188.101.175) Quit (Remote host closed the connection)
[13:27] * bergerx_ (~bekir@78.188.101.175) has joined #ceph
[13:33] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) has joined #ceph
[13:34] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[13:38] * madkiss (~madkiss@p5DCA3735.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[13:43] * DarkAce-Z (~BillyMays@50.107.54.92) has joined #ceph
[13:43] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[13:47] * Dark-Ace-Z (~BillyMays@50.107.54.92) Quit (Ping timeout: 480 seconds)
[13:49] * gio (~io@host58-228-dynamic.4-87-r.retail.telecomitalia.it) has joined #ceph
[13:50] <gio> !list
[13:51] * gio (~io@host58-228-dynamic.4-87-r.retail.telecomitalia.it) has left #ceph
[13:58] * vipr (~vipr@78-23-113-244.access.telenet.be) Quit (Ping timeout: 480 seconds)
[14:07] * vipr (~vipr@78-23-113-244.access.telenet.be) has joined #ceph
[14:28] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[14:30] * aliguori (~anthony@12.151.150.4) Quit (Remote host closed the connection)
[14:37] * noahmehl (~noahmehl@199.106.165.64) has joined #ceph
[14:40] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[14:45] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[14:46] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Quit: wogri_risc)
[14:55] * vipr (~vipr@78-23-113-244.access.telenet.be) Quit (Quit: leaving)
[14:55] * noahmehl (~noahmehl@199.106.165.64) Quit (Quit: noahmehl)
[14:56] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[14:57] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit ()
[14:58] * vipr (~vipr@78-23-113-244.access.telenet.be) has joined #ceph
[14:59] * vipr (~vipr@78-23-113-244.access.telenet.be) Quit ()
[15:00] * vipr (~vipr@78-23-113-244.access.telenet.be) has joined #ceph
[15:03] * noahmehl (~noahmehl@199.106.165.64) has joined #ceph
[15:07] * Wolff_John (~jwolff@ftp.monarch-beverage.com) has joined #ceph
[15:10] * vipr (~vipr@78-23-113-244.access.telenet.be) Quit (Quit: leaving)
[15:10] * vipr (~vipr@78-23-113-244.access.telenet.be) has joined #ceph
[15:17] * madkiss (~madkiss@p5DCA3735.dip0.t-ipconnect.de) has joined #ceph
[15:17] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) Quit (Ping timeout: 480 seconds)
[15:18] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Quit: Leaving.)
[15:33] * noahmehl (~noahmehl@199.106.165.64) Quit (Remote host closed the connection)
[15:34] * noahmehl (~noahmehl@199.106.165.64) has joined #ceph
[15:41] * drokita (~drokita@199.255.228.128) has joined #ceph
[15:44] <todin> hi, does someone know the state of the incremental rbd backup via snapshot into qcow2 files?
[15:45] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[15:50] * lofejndif (~lsqavnbok@axigy2.torservers.net) has joined #ceph
[15:57] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[16:00] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[16:01] * rustam (~rustam@94.15.91.30) has joined #ceph
[16:01] * aliguori (~anthony@66.187.233.207) has joined #ceph
[16:05] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[16:05] * brother (foobaz@2a01:7e00::f03c:91ff:fe96:ab16) has joined #ceph
[16:07] * yehuda_hm (~yehuda@2602:306:330b:1410:882e:275c:f33f:7cd1) has joined #ceph
[16:08] <wido> todin: I think some work has been done there
[16:09] * noahmehl (~noahmehl@199.106.165.64) Quit (Quit: noahmehl)
[16:10] <wido> Can't find the page with the info now
[16:11] <todin> wido: yep, I remember an email from sage, that it was done, but I cannot find this email
[16:11] <wido> Same here, can't find it
[16:11] <wido> todin: http://ceph.com/docs/master/release-notes/#v0-61
[16:11] <wido> "rbd: incremental backups"
[16:12] <todin> wido: is 0.61 already released?
[16:12] <wido> todin: No, it lives in the "next" branch
[16:13] <wido> todin: I have 0.61 locally here, the new command is: "rbd export-diff"
[16:13] <wido> export-diff <image-name> [--from-snap <snap-name>] <path>
[16:13] <todin> wido: great, did you test it already?
[16:13] <wido> todin: Nope
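A hedged sketch of how an incremental backup cycle might look, based only on the usage string wido quotes; image and snapshot names are hypothetical, and import-diff is assumed to ship alongside export-diff in 0.61:

    rbd snap create image1@snap1                                  # hypothetical names
    rbd export-diff image1@snap1 /backup/image1.diff1             # everything up to snap1
    rbd snap create image1@snap2
    rbd export-diff image1@snap2 --from-snap snap1 /backup/image1.diff2   # incremental
    rbd import-diff /backup/image1.diff2 image1-copy              # replay onto a copy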
[16:15] * brother (foobaz@2a01:7e00::f03c:91ff:fe96:ab16) Quit (Remote host closed the connection)
[16:16] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[16:17] * noahmehl (~noahmehl@199.106.165.64) has joined #ceph
[16:20] * gmason (~gmason@hpcc-fw.net.msu.edu) has joined #ceph
[16:28] * scuttlemonkey_ (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[16:28] * sstan (~chatzilla@modemcable016.164-202-24.mc.videotron.ca) Quit (Remote host closed the connection)
[16:30] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) Quit (Remote host closed the connection)
[16:35] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[16:35] * gmason (~gmason@hpcc-fw.net.msu.edu) Quit (Quit: Textual IRC Client: www.textualapp.com)
[16:36] * gmason (~gmason@hpcc-fw.net.msu.edu) has joined #ceph
[16:40] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[16:55] * yasu` (~yasu`@dhcp-59-157.cse.ucsc.edu) has joined #ceph
[16:58] * scuttlemonkey_ (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: my troubles seem so far away, now yours are too...)
[16:58] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[16:58] * ChanServ sets mode +o scuttlemonkey
[16:59] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[17:00] <paravoid> yehuda_hm: hey
[17:00] <yasu`> Ah, is Dev Summit tomorrow ? ...
[17:00] <paravoid> yehuda_hm: I'm wondering if 303e739e5b34ad1aaedb0025ffc6da1a9e04c320 (unexpected error code) is bobtail material
[17:01] <paravoid> yehuda_hm: also 9b953aa4100eca5de2319b3c17c54bc2f6b03064, c83a01d4e8dcd26eec24c020c5b79fcfa4ae44a3 (doesn't apply cleanly), 290b5eb0f1b4a340c39f0d0fc5fb25697d7f8182 (ditto), f2df87625cbc0f08d3e4ab4619f2ef642d9bdad8, a8b1bfa1ccbb66d73b7b97ecc714c6c24effd7c4
[17:02] <yehuda_hm> paravoid: I don't see why 303e739e5b34ad1aaedb0025ffc6da1a9e04c320 wouldn't apply to bobtail
[17:02] <paravoid> it does
[17:02] <yehuda_hm> 9b953aa4100eca5de2319b3c17c54bc2f6b03064 is ok too
[17:03] <paravoid> they're not applied to the bobtail branch though
[17:03] <paravoid> and I'd like to stay there for a little while longer :)
[17:03] <yehuda_hm> I think c83a01d4e8dcd26eec24c020c5b79fcfa4ae44a3 fixes a later regression
[17:04] <yehuda_hm> 290b5eb0f1b4a340c39f0d0fc5fb25697d7f8182 is not for bobtail
[17:04] <paravoid> the last two are fixes for the streaming listing support, so ignore those
[17:04] <paravoid> well, the only one that I really care about is the first one, unexpected error code
[17:05] <paravoid> most of my containers are .r:* and then exposed to the web
[17:05] <paravoid> a wrong url returns 401 instead of 404, which is kinda nasty
[17:10] * brother (foobaz@vps1.hacking.dk) has joined #ceph
[17:14] * noahmehl (~noahmehl@199.106.165.64) Quit (Remote host closed the connection)
[17:15] * noahmehl (~noahmehl@199.106.165.64) has joined #ceph
[17:16] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[17:19] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Quit: Leaving.)
[17:19] * rustam (~rustam@94.15.91.30) has joined #ceph
[17:21] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[17:22] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[17:23] * gaveen (~gaveen@175.157.185.11) has joined #ceph
[17:23] * DarkAce-Z is now known as DarkAceZ
[17:24] * noahmehl (~noahmehl@199.106.165.64) Quit (Quit: noahmehl)
[17:27] * bergerx_ (~bekir@78.188.101.175) Quit (Quit: Leaving.)
[17:30] * jgallard (~jgallard@gw-aql-129.aql.fr) Quit (Quit: Leaving)
[17:34] * brady (~brady@rrcs-64-183-4-86.west.biz.rr.com) has joined #ceph
[17:35] * madkiss (~madkiss@p5DCA3735.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[17:36] * madkiss (~madkiss@p5DCA3735.dip0.t-ipconnect.de) has joined #ceph
[17:41] * madkiss (~madkiss@p5DCA3735.dip0.t-ipconnect.de) Quit ()
[17:44] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Read error: Operation timed out)
[17:46] * danieagle (~Daniel@186.214.61.67) has joined #ceph
[17:46] * barnes (barnes@bissa.eu) has left #ceph
[17:55] * sleinen1 (~Adium@2001:620:0:2d:1056:5e54:d350:a88a) has joined #ceph
[17:59] * gregaf1 (~Adium@2607:f298:a:607:10e3:a393:f44c:3d3) Quit (Quit: Leaving.)
[18:02] * sleinen (~Adium@2001:620:0:46:e053:f362:a2a7:c9a0) Quit (Ping timeout: 480 seconds)
[18:03] * sleinen1 (~Adium@2001:620:0:2d:1056:5e54:d350:a88a) Quit (Ping timeout: 480 seconds)
[18:04] * gregaf (~Adium@2607:f298:a:607:10e3:a393:f44c:3d3) has joined #ceph
[18:05] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[18:06] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[18:06] * aliguori (~anthony@66.187.233.207) Quit (Remote host closed the connection)
[18:11] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Quit: Leaving.)
[18:12] * noahmehl (~noahmehl@65.127.208.182) has joined #ceph
[18:19] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:20] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[18:22] <Azrael> hey folks
[18:22] <Azrael> if an osd has an id of say 35
[18:22] <Azrael> *must* it be mounted at /var/lib/ceph/osd/ceph-35?
[18:22] <Azrael> (or something similar)
[18:22] <Azrael> or does the mountpoint not matter; all that matters is the whoami file?
[18:25] <jluis> Azrael, the osd will look for its data store wherever you specify it in ceph.conf; if you don't, then yeah, it will look in the default location
[18:28] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[18:29] * tkensiski (~tkensiski@209.66.64.134) has joined #ceph
[18:29] * tkensiski (~tkensiski@209.66.64.134) has left #ceph
[18:30] <Azrael> jluis: interesting. i'm looking through ceph-disk-activate. as long as you mount the device *somewhere*, i think a call to move_mount() occurs which will then move the mount to /var/lib/ceph/osd/$cluster-$id
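A minimal ceph.conf sketch of the per-OSD override jluis describes; the path below is hypothetical:

    [osd.35]
        osd data = /srv/ceph/osd.35   # hypothetical non-default location
        # when unset, the default is /var/lib/ceph/osd/$cluster-$id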
[18:31] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[18:44] * Wolff_John (~jwolff@ftp.monarch-beverage.com) Quit (Ping timeout: 480 seconds)
[18:45] * tnt (~tnt@109.130.111.54) has joined #ceph
[18:45] * noahmehl (~noahmehl@65.127.208.182) Quit (Quit: noahmehl)
[18:52] * leseb (~Adium@83.167.43.235) Quit (Quit: Leaving.)
[18:52] * noahmehl (~noahmehl@mobile-198-228-212-067.mycingular.net) has joined #ceph
[18:55] * sleinen (~Adium@2001:620:0:25:85f0:59a3:6fa6:10ab) has joined #ceph
[18:56] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[18:58] * aliguori (~anthony@66.187.233.207) has joined #ceph
[19:00] * Tamil (~tamil@38.122.20.226) has joined #ceph
[19:00] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[19:01] * alram (~alram@38.122.20.226) has joined #ceph
[19:17] * noahmehl (~noahmehl@mobile-198-228-212-067.mycingular.net) Quit (Ping timeout: 480 seconds)
[19:17] * dmick (~dmick@2607:f298:a:607:a5bb:a325:519b:6e2c) has joined #ceph
[19:19] * dwt (~dwt@128-107-239-234.cisco.com) has joined #ceph
[19:21] * rturk-away is now known as rturk
[19:22] * rturk is now known as rturk-away
[19:23] * rturk-away is now known as rturk
[19:26] * dontalton2 (~dwt@wsip-70-166-104-226.ph.ph.cox.net) has joined #ceph
[19:29] * gmason_ (~gmason@hpcc-fw.net.msu.edu) has joined #ceph
[19:31] * gmason (~gmason@hpcc-fw.net.msu.edu) Quit (Ping timeout: 480 seconds)
[19:33] * kyle_ (~kyle@216.183.64.10) has joined #ceph
[19:33] * dwt (~dwt@128-107-239-234.cisco.com) Quit (Ping timeout: 480 seconds)
[19:46] * dpippenger (~riven@206-169-78-213.static.twtelecom.net) has joined #ceph
[19:49] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[19:52] * Wolff_John (~jwolff@vpn.monarch-beverage.com) has joined #ceph
[19:59] * dontalton3 (~dwt@128-107-239-234.cisco.com) has joined #ceph
[20:03] * dwt (~dwt@wsip-70-166-104-226.ph.ph.cox.net) has joined #ceph
[20:07] * dontalton2 (~dwt@wsip-70-166-104-226.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[20:08] * dontalton2 (~dwt@rtp-isp-nat1.cisco.com) has joined #ceph
[20:10] * dontalton3 (~dwt@128-107-239-234.cisco.com) Quit (Ping timeout: 480 seconds)
[20:12] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[20:12] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[20:13] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:14] * dwt (~dwt@wsip-70-166-104-226.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[20:19] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) Quit (Quit: Leaving.)
[20:20] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[20:20] * tnt_ (~tnt@91.177.240.165) has joined #ceph
[20:21] * BillK (~BillK@58-7-104-61.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[20:24] * kyle_ (~kyle@216.183.64.10) Quit (Ping timeout: 480 seconds)
[20:26] * tnt (~tnt@109.130.111.54) Quit (Ping timeout: 480 seconds)
[20:31] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) Quit (Quit: Leaving.)
[20:38] * john_barbee_ (~jbarbee@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[20:41] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[20:44] * john_barbee_ (~jbarbee@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Quit: ChatZilla 0.9.90 [Firefox 21.0/20130430204233])
[20:56] * wogri_ (~wolf@nix.wogri.at) has joined #ceph
[20:59] * wogri_ (~wolf@nix.wogri.at) Quit ()
[20:59] * wogri (~wolf@nix.wogri.at) Quit (Quit: Lost terminal)
[20:59] * wogri (~wolf@nix.wogri.at) has joined #ceph
[20:59] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[21:00] * wogri (~wolf@nix.wogri.at) Quit (Remote host closed the connection)
[21:01] * wogri (~wolf@nix.wogri.at) has joined #ceph
[21:02] * wogri (~wolf@nix.wogri.at) Quit ()
[21:02] * wogri (~wolf@nix.wogri.at) has joined #ceph
[21:04] * yehuda_hm (~yehuda@2602:306:330b:1410:882e:275c:f33f:7cd1) Quit (Ping timeout: 480 seconds)
[21:05] * lofejndif (~lsqavnbok@1SGAAAQEB.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[21:06] * sleinen (~Adium@2001:620:0:25:85f0:59a3:6fa6:10ab) Quit (Ping timeout: 480 seconds)
[21:08] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[21:09] * yehuda_hm (~yehuda@2602:306:330b:1410:882e:275c:f33f:7cd1) has joined #ceph
[21:23] * Fetch (fetch@gimel.cepheid.org) has joined #ceph
[21:28] * danieagle (~Daniel@186.214.61.67) Quit (Ping timeout: 480 seconds)
[21:28] * yehuda_hm (~yehuda@2602:306:330b:1410:882e:275c:f33f:7cd1) Quit (Ping timeout: 480 seconds)
[21:29] <mrjack> http://pastebin.com/tLidcPjw
[21:29] <mrjack> shortly after 2013-04-30 06:56:47.247269 e9253b70 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2013-04-30 06:56:17.247266), the osd process died...
[21:32] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[21:36] * danieagle (~Daniel@186.214.56.43) has joined #ceph
[21:37] * gaveen (~gaveen@175.157.185.11) Quit (Remote host closed the connection)
[21:38] * yehuda_hm (~yehuda@2602:306:330b:1410:882e:275c:f33f:7cd1) has joined #ceph
[21:38] * danieagle (~Daniel@186.214.56.43) Quit ()
[21:40] * kfox1111 (bob@leary.csoft.net) has joined #ceph
[21:41] <kfox1111> question. what do I need to tune for storing lots of little files in rados?
[21:42] <kfox1111> Say I want to store tens of millions of little 12-byte files.
[21:42] <kfox1111> I have 5600 pg's.
[21:43] <kfox1111> Currently, I have like 1,700,000 imported, and it's taking 20178 kB of data, 46353 MB used.
[21:44] <paravoid> 12 byte?
[21:44] <paravoid> is that a figure of speech?
[21:45] <kfox1111> nope. Just trying to push things to see how it handles things.
[21:46] <kfox1111> I have some metadata I'd like to store in rados. Most of the metadata files will be pretty small, and there will be a LOT of them. But I'm never going to know which ones may grow large,
[21:46] <kfox1111> so storing them in rados would allow them to grow as needed.
[21:46] <paravoid> I have ~200 million very small files
[21:46] <kfox1111> I'm expecting I'll have 100 million of 'em at some point.
[21:47] * aliguori (~anthony@66.187.233.207) Quit (Remote host closed the connection)
[21:47] <kfox1111> cool. Glad to hear someone else is doing it. :)
[21:47] <paravoid> not 12-byte small though :)
[21:47] <elder> 12 bytes seems like maybe a file system is not the right solution.
[21:47] * coredumb (~coredumb@xxx.coredumb.net) has joined #ceph
[21:47] <kfox1111> I figure they will be more than 12 bytes, maybe 1k.
[21:48] <paravoid> my average size is 50-70K
[21:48] <kfox1111> I was expecting like a 4k minimum file size or something, since ext4 is involved.
[21:48] * sleinen (~Adium@2001:620:0:26:9884:7257:73ab:337b) has joined #ceph
[21:48] <kfox1111> But those numbers show it is up in the 300 kB range.
[21:48] * aliguori (~anthony@66.187.233.207) has joined #ceph
[21:49] <kfox1111> The really interesting bit, though, is that as I add more 12-byte files, the efficiency is getting better. At 1M 12-byte files, it was more like 400 kB per 12-byte file.
[21:51] <kfox1111> Not really sure why at the moment. Figure it has to do somehow with preallocation in the pgs maybe.
[21:52] <gregaf> are you putting your journal and backing store on the same disk? or sharing them with the OS install?
[21:52] <kfox1111> same disk.
[21:52] <gregaf> that aggregate usage information is just pulled out of df, basically
[21:53] <kfox1111> ah.
[21:53] <darkfader> oh!
[21:53] <darkfader> gregaf: that was too obvious to ever figure :>
[21:54] <gregaf> heh
[21:54] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[21:54] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:54] <kfox1111> is there an easy way to see the journal usage? (Does it show up in a particular path?)
[21:56] <gregaf> yeah, wherever you've configured it (default /var/lib/ceph/ceph-osd*)
[21:56] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:56] <gregaf> also, default size of 5GB, but it may or may not have grown to that size yet
[21:57] <kfox1111> journal's only 100mb per.
[21:57] <kfox1111> tops it's using 2.4 GB, looks like.
[21:58] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[21:58] <kfox1111> so, I don't think that's it.
[21:59] <gregaf> there is per-object overhead as well, I don't know the sizes but on 12-byte objects it'll be significant
[21:59] <gregaf> on 1KB objects, not so much
[21:59] * yehuda_hm (~yehuda@2602:306:330b:1410:882e:275c:f33f:7cd1) Quit (Ping timeout: 480 seconds)
[22:00] * wogri (~wolf@nix.wogri.at) Quit (Quit: Lost terminal)
[22:00] * wogri (~wolf@nix.wogri.at) has joined #ceph
[22:00] <kfox1111> assuming the overhead is 4k based on ext4, 4096/12=341
[22:01] * scuttlemonkey_ (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[22:01] <kfox1111> I'm seeing like 28251 times though, so that means ~83 times is still unaccounted for.
[22:02] * wogri (~wolf@nix.wogri.at) Quit ()
[22:02] * wogri (~wolf@nix.wogri.at) has joined #ceph
[22:02] * sleinen2 (~Adium@2001:620:0:26:a4a1:8f93:366b:c5e6) has joined #ceph
[22:02] * wogri (~wolf@nix.wogri.at) Quit ()
[22:03] * wogri (~wolf@nix.wogri.at) has joined #ceph
[22:03] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:04] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[22:04] * yehuda_hm (~yehuda@2602:306:330b:1410:882e:275c:f33f:7cd1) has joined #ceph
[22:05] * sleinen (~Adium@2001:620:0:26:9884:7257:73ab:337b) Quit (Ping timeout: 480 seconds)
[22:06] <gregaf> kfox1111: sjust points out that if you can do some very basic sharding you can store the objects in the "omap" structure, which has much lower per-entry overhead and is actually built for objects of that size
[22:06] <gregaf> (RADOS is built for much larger objects)
[22:06] <sjust> kfox1111: the omap part of an object is actually a chunk of a leveldb backend
[22:06] * ScOut3R (~ScOut3R@BC065770.dsl.pool.telekom.hu) has joined #ceph
[22:06] <sjust> which has quite good performance and overhead
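A hedged sketch of poking at omap from the command line, assuming the rados tool's omap subcommands are present in your build; pool, object, and key names are hypothetical:

    rados -p testpool setomapval shard.00 key1 'twelve bytes'   # tiny value, low per-entry overhead
    rados -p testpool listomapkeys shard.00                     # list keys on the shard object
    rados -p testpool getomapval shard.00 key1                  # read the value back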
[22:08] <Fetch> I'm trying to have Glance use the rbd backend to store an image, and I'm consistently getting a failure with error code 95 in the create
[22:08] <Fetch> does that code mean anything particularly? Is there a good way to go about getting more debugging info?
[22:09] <Fetch> I can write to the images pool using the glance use using rbd cli without issue
[22:09] * sleinen1 (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:11] <Tamil> fetch: have you had a chance to look into the logs?
[22:12] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:12] <Fetch> I'm not seeing connections on the mon hosts, is that where I should be looking?
[22:12] <Fetch> or on the osds
[22:13] <Tamil> fetch: mons would be a good place to start
[22:13] * sagelap (~sage@2600:1012:b02e:9e54:f5e1:29df:d149:b88c) has joined #ceph
[22:14] * Cube (~Cube@12.248.40.138) has joined #ceph
[22:14] <Fetch> I'm not seeing any activity in the mon logs coinciding with the glance rbd client connect
[22:15] <Tamil> fetch: what do your osd logs say?
[22:16] <Fetch> Likewise, no entries apparently coinciding with glance
[22:17] <Fetch> (very underused cluster, so there's no other traffic to confuse the issue)
[22:18] <Tamil> is it on your client that you are getting the failure with error code 95?
[22:18] <Fetch> the Glance client
[22:18] <Fetch> and, that's weird, just did ls with rbd
[22:18] <Fetch> and it did create files
[22:18] <Fetch> maybe it wasn't able to open them
[22:19] <Fetch> http://pastebin.com/WdDgcGvV
[22:20] <dmick> ah. that sounds like "missing /usr/lib/rados-classes"
[22:20] <dmick> or damage to that dir. What version, and how was it installed?
[22:21] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Quit: We be chillin - IceChat style)
[22:21] <Fetch> rpm, 0.56.3-1
[22:21] <Fetch> got ceph and ceph-libs
[22:21] <dmick> we have, in the past, had people build from source but not get installed correctly, so that symlinks were wrong to /usr/lib/rados-classes
[22:21] <Fetch> would it be in another package?
[22:21] <dmick> do you have that dir, and what files mention rbd?
[22:22] <Fetch> it's in /usr/lib64/rados-classes
[22:22] <Fetch> should I try a symlink for grins?
[22:22] <dmick> no, /usr/lib64 is ok
[22:22] <dmick> but what does ls *rbd* show
[22:23] <Fetch> in that directory? or in /usr/bin or whatnot
[22:23] <dmick> yes, in the rados-classes directory. there's something wrong there, almost certainly
[22:23] <Fetch> /usr/lib64/rados-classes/libcls_rbd.so.1, /usr/lib64/rados-classes/libcls_rbd.so.1.0.0
[22:23] <Fetch> any other files it should have?
[22:24] <kfox1111> sjust: very interesting. Thankks for the pointer.
[22:24] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:24] <kfox1111> do you need to change the code using librados to use the omap part, or do you just have to change some settings?
[22:28] <Tamil> fetch: could you please share output of "rbd ls"
[22:29] <Fetch> tamil: it's nil (nothing in / ) please see my pastebin above for an example ls with error 95 popping up
[22:29] * gucki (~smuxi@84-73-204-178.dclient.hispeed.ch) has joined #ceph
[22:31] * sagelap (~sage@2600:1012:b02e:9e54:f5e1:29df:d149:b88c) Quit (Ping timeout: 480 seconds)
[22:32] <Tamil> fetch: could you please try the command with some debugs on
[22:32] <Tamil> fetch: debug objclass = 20 in ceph.conf
[22:33] * Hefeweizen (~oftc-webi@dyn160091089133.dz.ornl.gov) has joined #ceph
[22:33] <Hefeweizen> Howdy!
[22:34] <sjust> kfox1111: they are different calls
[22:34] <sjust> it looks somewhat like the xattr interface
[22:35] <dmick> I have libcls_rbd.so.1.0.0 in my /usr/lib/rados-classes (Ubuntu, so not lib64), and two symlinks to it: libcls_rbd.so and libcls_rbd.so.1
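A hedged check for the failure mode being debugged here: the OSD loads the bare libcls_rbd.so name from its rados-classes directory, so a package that ships only the versioned files breaks class loading:

    ls -l /usr/lib64/rados-classes/        # lib/ instead of lib64/ on Debian/Ubuntu
    # if only libcls_rbd.so.1 and libcls_rbd.so.1.0.0 are present, restore the link:
    ln -s libcls_rbd.so.1.0.0 /usr/lib64/rados-classes/libcls_rbd.so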
[22:36] <Hefeweizen> following along the 5-minute guide and "block-device quick start" and am stumbling over the 'modprobe rbd'. Using the rpm-bobtail packages from the public repo and none of these packages seem to include a rbd.ko
[22:36] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:36] <Hefeweizen> using Centos 6.4. is that too old?
[22:39] <kfox1111> sjust: So, the code I did does things very atomically. There is a create, a delete, a read that tells you if the file changed while reading, and an atomic modify which is a read, change something, and write, but fail the write if the file changed while doing the read/modify/write.
[22:40] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[22:40] * yehuda_hm (~yehuda@2602:306:330b:1410:882e:275c:f33f:7cd1) Quit (Ping timeout: 480 seconds)
[22:40] * sagelap (~sage@2600:1012:b02e:9e54:6c88:ecfd:9e2d:5f39) has joined #ceph
[22:40] <kfox1111> Can the same thing be done with the omap stuff? Can an omap file be replaced with the other interface transparently when the change makes the file too big?
[22:41] <sjust> kfox1111: so omap entries are like xattrs -- they are attached to a normal rados object
[22:41] <sjust> you can't (normally) do atomic operations across objects
[22:41] * yehuda_hm (~yehuda@2602:306:330b:1410:882e:275c:f33f:7cd1) has joined #ceph
[22:41] <sjust> but if the clients cooperate appropriately, maybe?
[22:41] <sjust> were you using an object class?
[22:41] <kfox1111> sjust: https://github.com/EMSL-MSC/pacifica/blob/master/src/rmds/common.cpp :)
[22:42] <dmick> Hefeweizen: rbd.ko isn't in the Ceph-distributed packages; it's part of the kernel
[22:42] <kfox1111> Some test code here: https://github.com/EMSL-MSC/pacifica/blob/master/src/rmds/testmerge.cpp Does the whole merging of two json documents atomically thingy.
[22:43] <coredumb> dmick: EL6 doesn't provide ceph drivers afaik
[22:44] <kfox1111> I'm using ObjectReadOperation and ObjectWriteOperation's to group things atomically.
[22:44] <dmick> coredumb: Hefeweizen: There's A Doc For That
[22:44] <sjust> kfox1111: ok, so instead you would just work with omap entries sharded across objects
[22:44] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:44] <sjust> you'd have some trouble promoting such an entry to a full object though
[22:44] <kfox1111> yeah.
[22:44] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:44] <dmick> http://ceph.com/docs/master/install/os-recommendations/
[22:45] <kfox1111> I see what you are saying.... hmmm..
[22:46] <kfox1111> In that case, I'd probably be better off just sharding and putting multiple json documents per rados object.
[22:47] <Elbandi_> is there any delete timeout on cephfs?
[22:48] <coredumb> dmick: so it means the kernel must be updated on EL6
[22:48] <Elbandi_> i just deleted all files, but there are 399 objects on the metadata pool, and ~61k objects on the data pool
[22:49] <dmick> coredumb: probable, if you must have kernel support
[22:49] <dmick> note that kernel modules are not necessary in many use cases
[22:50] <coredumb> ok
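A quick hedged check on the client side; rbd.ko ships with the kernel rather than the ceph packages, and stock EL6 kernels may not carry it:

    modprobe rbd && lsmod | grep rbd   # fails on kernels without the module
    uname -r                           # compare against the os-recommendations page above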
[22:51] <janos> has anyone here tried the ceph samba fork?
[22:51] <janos> curious if it's ready to use for pretty standard basic usage
[22:52] * sagelap (~sage@2600:1012:b02e:9e54:6c88:ecfd:9e2d:5f39) Quit (Ping timeout: 480 seconds)
[22:56] <gregaf> janos: you don't actually need the fork; the Ceph bindings are in upstream now :)
[22:57] <gregaf> as for how good it is, we just got them in and they haven't seen a ton of testing
[22:57] <gregaf> in particular you'll want to watch out because they don't implement the full set of locking hooks
[22:59] <sjust> Elbandi_: delete timeout?
[22:59] <janos> gregaf:cool. i didn't realize it was upstream
[22:59] <janos> thanks
[22:59] <gregaf> it was on like Wednesday or something, I wouldn't expect you to have noticed yet ;)
[22:59] <janos> ahhhh. haha ok
[22:59] <janos> yeah i wouldn't
[23:00] * rustam (~rustam@94.15.91.30) has joined #ceph
[23:01] <Elbandi_> sjust: dunno, i just (4-5 hours ago) deleted all files but "some" objects still exist
[23:01] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[23:03] * yehuda_hm (~yehuda@2602:306:330b:1410:882e:275c:f33f:7cd1) Quit (Ping timeout: 480 seconds)
[23:03] <kfox1111> does each object in rados corespond to one file in ext4?
[23:03] <sjust> kfox1111: yeah
[23:04] <sjust> generally, we recommend xfs over ext4, btw
[23:04] <sjust> gregaf: ideas about Elbandi_?
[23:04] <Fetch> Tamil: apologies for the delay. I added debug objclass = 20 to ceph.conf and reran the rbd ls -l command
[23:04] <gregaf> hmm?
[23:04] <kfox1111> k. thanks.
[23:05] <Tamil> fetch: what did you get?
[23:05] <gregaf> oh, background delete of the objects backing deleted files; is the data count continuing to go down?
[23:05] <Fetch> same errors, no extra debug info that I could see. Would I be getting extra messages in a logfile? Do I need to restart the mons?
[23:05] <gregaf> err, for more clarity: deleting a file removes the links to the data, but the actual objects are removed asynchronously by the MDS
[23:06] <gregaf> I'm not sure how quickly it does that, but I think it's only doing one at a time so it could take a while if you have many objects
[23:07] <Tamil> fetch: i hope you copied the keyring files to client machine http://ceph.com/docs/master/start/quick-start/
[23:07] <gregaf> also, it won't remove the objects until the clients drop their caps on the file, which could be a while (inode caching is an area that we need to do some work in)
[23:07] <Fetch> tamil: http://pastebin.com/Yg4DQsji
[23:07] <Tamil> as mentioned in the doc
[23:07] <Fetch> the client machine is the first mon
[23:08] <Fetch> the keyring has 4 client keys in it (admin, volumes, images, glance)
[23:08] <dmick> Fetch: yes, the extra info would come to the OSD logfile, and it would come after restarting the OSDs
[23:08] <Fetch> dmick: thanks, will do that
[23:08] <dmick> what's going wrong: the OSD can't successfully load the rbd class driver file (cls_rbd.so) for some reason
[23:09] <dmick> so we're trying to debug why that is
[23:09] <dmick> it does that when you ask for an rbd operation like listing snapshots
[23:09] * sagelap (~sage@2600:1012:b02e:9e54:6c88:ecfd:9e2d:5f39) has joined #ceph
[23:09] <dmick> but the debug isn't reread from ceph.conf until restart
[23:09] * aliguori (~anthony@66.187.233.207) Quit (Quit: Ex-Chat)
[23:09] <Fetch> I have an idea - the package doesn't create libcls_rbd.so symlink, it creates so.1 and so.1.0.0
[23:09] <dmick> so you won't see any more debug without restarting the daemon (or injecting debug flags into the live daemon; it's much harder)
[23:10] <Fetch> unless I install ceph-devel and ceph-libcephfs
[23:10] <Fetch> which I will do immediately
[23:10] <dmick> well, that's why I was asking you for ls *rbd* in /usr/lib64/rados-classes some time ago
[23:10] <Fetch> and I did, friend :)
[23:10] <dmick> was to verify that all the files were in the right place :)
[23:11] <dmick> if that's so, that's a packaging error
[23:11] <Fetch> should the osd need a restart to see the lib?
[23:11] <dmick> I don't think so
[23:13] <Fetch> looks like the packaging error might be fixed in 0.56.4 package
[23:14] <Fetch> you might wonder why one of my controllers has that version installed. I wonder as well.
[23:14] * yehuda_hm (~yehuda@2602:306:330b:1410:882e:275c:f33f:7cd1) has joined #ceph
[23:15] <Fetch> dmick: that fixed the problem. I'll search for a jira to file against that packaging
[23:15] <dmick> ceph.com/tracker
[23:15] <dmick> but if it's fixed in 56.4...
[23:15] * mega_au (~chatzilla@84.244.21.218) Quit (Quit: ChatZilla 0.9.90 [Firefox 20.0.1/20130409194949])
[23:17] <dmick> also, someone should fix http://tracker.ceph.com/issues/4639 ;)
[23:18] <Fetch> hmm, looks like I'm 2 minor behind. I'll check the current packages and file if it's still broke. Thanks for all the help, apologies for not checking osd directories as well (was thinking it was a client-only issue)
[23:19] <Fetch> tamil: thanks as well :)
[23:20] <dmick> np Fetch, gl
[23:22] * Rorik (~rorik@199.182.216.68) has joined #ceph
[23:24] <Tamil> fetch:np
[23:24] * madkiss (~madkiss@217.194.70.226) has joined #ceph
[23:25] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[23:25] * rturk is now known as rturk-away
[23:33] <sagelap> dmick: easy to make it derr
[23:41] <dmick> oh the class thing? Yeah, there's a pile of things I see I've suggested and then received the ultimate reward for suggesting :)
[23:41] <dmick> so pondering remove: there isn't really any current 'administrative disable' for a disk-or-dir-that-could-be-OSD, right?
[23:42] <dmick> I mean, maybe the sysvinit/upstart marker file I guess
[23:42] <dmick> assuming we keep that across systemd etc.
[23:45] <dmick> sagelap: wondering if osd remove is: verify it's down, and cluster healthy; remove "init" file; stop daemon; remove any lockfile; unmount/crush remove, auth del, osd rm
[23:45] <dmick> (should it respect/try to deal with pre_stop/post_stop? are those actively used?)
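For comparison, a hedged sketch of the manual removal sequence as usually done by hand (id 35 is hypothetical; the init flavour varies):

    ceph osd out 35                    # let data rebalance; wait for HEALTH_OK
    service ceph stop osd.35           # sysvinit flavour; upstart differs
    ceph osd crush remove osd.35
    ceph auth del osd.35
    ceph osd rm 35
    umount /var/lib/ceph/osd/ceph-35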
[23:49] * madkiss (~madkiss@217.194.70.226) Quit (Quit: Leaving.)
[23:50] * Havre (~Havre@2a01:e35:8a2c:b230:307b:cbf1:6dd5:5164) Quit ()
[23:50] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:51] * ScOut3R (~ScOut3R@BC065770.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[23:52] * Wolff_John (~jwolff@vpn.monarch-beverage.com) Quit (Quit: ChatZilla 0.9.90 [Firefox 20.0.1/20130409194949])
[23:59] * diegows (~diegows@190.190.2.126) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.