#ceph IRC Log

IRC Log for 2013-03-19

Timestamps are in GMT/BST.

[0:02] * dmick (~dmick@2607:f298:a:607:30ef:9d9d:28d6:4654) Quit (Quit: Leaving.)
[0:02] * dmick (~dmick@2607:f298:a:607:30ef:9d9d:28d6:4654) has joined #ceph
[0:08] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[0:12] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[0:12] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[0:16] * dosaboy (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) Quit (Ping timeout: 480 seconds)
[0:20] * jtang1 (~jtang@79.97.135.214) Quit (Quit: Leaving.)
[0:22] * Cube (~Cube@12.248.40.138) has joined #ceph
[0:27] * Active2 (~matthijs@callisto.vps.ar-ix.net) Quit (Read error: Connection reset by peer)
[0:44] * tnt (~tnt@54.211-67-87.adsl-dyn.isp.belgacom.be) Quit (Read error: Operation timed out)
[0:45] * KevinPerks1 (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[1:10] * ScOut3R (~ScOut3R@c83-249-233-227.bredband.comhem.se) Quit (Remote host closed the connection)
[1:21] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[1:21] * loicd (~loic@magenta.dachary.org) has joined #ceph
[1:23] * LeaChim (~LeaChim@b0faff75.bb.sky.com) Quit (Ping timeout: 480 seconds)
[1:24] * jtang1 (~jtang@79.97.135.214) has joined #ceph
[1:25] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[1:30] * fred1 (~fredl@2a00:1a48:7803:107:8532:c238:ff08:354) Quit (Ping timeout: 480 seconds)
[1:33] * fred1 (~fredl@2a00:1a48:7803:107:8532:c238:ff08:354) has joined #ceph
[1:43] * jtang1 (~jtang@79.97.135.214) Quit (Quit: Leaving.)
[1:43] * Kioob (~kioob@2a01:e35:2432:58a0:21a:92ff:fe90:42c5) Quit (Quit: Leaving.)
[1:44] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[1:57] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[2:07] * jlogan1 (~Thunderbi@2600:c00:3010:1:74e2:3ecb:40cd:3b85) Quit (Ping timeout: 480 seconds)
[2:11] * Cube (~Cube@cpe-76-95-217-215.socal.res.rr.com) has joined #ceph
[2:11] * nick5 (~nick@74.222.153.12) Quit (Remote host closed the connection)
[2:12] * nick5 (~nick@74.222.153.12) has joined #ceph
[2:12] * nick5 (~nick@74.222.153.12) Quit (Remote host closed the connection)
[2:13] * nick5 (~nick@74.222.153.12) has joined #ceph
[2:14] * nick5 (~nick@74.222.153.12) Quit (Remote host closed the connection)
[2:14] * nick5 (~nick@74.222.153.12) has joined #ceph
[2:15] * nick5 (~nick@74.222.153.12) Quit (Remote host closed the connection)
[2:16] * diegows (~diegows@190.190.2.126) has joined #ceph
[2:16] * nick5 (~nick@74.222.153.12) has joined #ceph
[2:16] * nick5 (~nick@74.222.153.12) Quit (Remote host closed the connection)
[2:17] * nick5 (~nick@74.222.153.12) has joined #ceph
[2:18] * nick5 (~nick@74.222.153.12) Quit (Remote host closed the connection)
[2:18] * nick5 (~nick@74.222.153.12) has joined #ceph
[2:19] * nick5 (~nick@74.222.153.12) Quit (Remote host closed the connection)
[2:20] * nick5 (~nick@74.222.153.12) has joined #ceph
[2:21] * nick5 (~nick@74.222.153.12) Quit (Remote host closed the connection)
[2:21] * nick5 (~nick@74.222.153.12) has joined #ceph
[2:24] * The_Bishop (~bishop@e179012097.adsl.alicedsl.de) has joined #ceph
[2:29] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[2:31] * diegows (~diegows@190.190.2.126) Quit (Read error: Operation timed out)
[2:42] * buck (~buck@bender.soe.ucsc.edu) has left #ceph
[2:58] * nick5 (~nick@74.222.153.12) Quit (Remote host closed the connection)
[2:58] * rturk is now known as rturk-away
[3:12] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[3:14] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[3:29] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[3:46] * cjitjit (~cjitjit@c-98-228-194-188.hsd1.il.comcast.net) has joined #ceph
[3:54] * cjitjit (~cjitjit@c-98-228-194-188.hsd1.il.comcast.net) Quit (Quit: jIRCii - http://www.oldschoolirc.com)
[4:01] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[4:08] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90 [Firefox 19.0.2/20130307023931])
[4:51] * Tabrenus (~Tabrenus@0001ab1a.user.oftc.net) has joined #ceph
[4:52] * Tabrenus (~Tabrenus@0001ab1a.user.oftc.net) Quit (Remote host closed the connection)
[5:17] * DLange is now known as Guest2300
[5:18] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[5:21] * Guest2300 (~DLange@dlange.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:17] * jochen (~jochen@laevar.de) Quit (Remote host closed the connection)
[6:17] * jochen (~jochen@laevar.de) has joined #ceph
[6:21] * SvenPHX-home (~scarter@71-209-155-46.phnx.qwest.net) has joined #ceph
[6:31] <SvenPHX-home> anyone know if I can use IP addresses in place of FQDNs in the ceph.conf file for the 'host =' configuration?
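SvenPHX-home's question goes unanswered in this log; for reference, in mkcephfs-era ceph.conf files the `host` setting is normally matched against the machine's short hostname (`hostname -s`) rather than an IP or FQDN, while raw IPs go in `mon addr`. A hedged sketch; the section names, hostnames, and address below are illustrative:

```ini
; Illustrative only: "host" is the node's short hostname (hostname -s);
; raw IP addresses belong in "mon addr", not in "host".
[mon.a]
    host = node1
    mon addr = 192.168.2.51:6789

[osd.0]
    host = node2
```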
[6:34] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) Quit (Quit: Pogoapp - http://www.pogoapp.com)
[6:43] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:50] * Cube1 (~Cube@cpe-76-95-217-215.socal.res.rr.com) has joined #ceph
[6:53] * Cube1 (~Cube@cpe-76-95-217-215.socal.res.rr.com) Quit ()
[6:56] * Cube (~Cube@cpe-76-95-217-215.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:05] * Cube (~Cube@cpe-76-95-217-215.socal.res.rr.com) has joined #ceph
[7:08] * scuttlemonkey (~scuttlemo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[7:08] * ChanServ sets mode +o scuttlemonkey
[7:14] * Cube1 (~Cube@cpe-76-95-217-215.socal.res.rr.com) has joined #ceph
[7:20] * StormBP (~StormBP@109.195.66.120) has joined #ceph
[7:21] * Cube (~Cube@cpe-76-95-217-215.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:21] * dosaboy (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[7:21] * dosaboy (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) Quit ()
[7:21] * dosaboy (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[7:22] <StormBP> Hi! Are there any Russian speakers on the channel?
[7:26] * mo- (~mo@2a01:4f8:141:3264::3) Quit (Remote host closed the connection)
[7:26] * mo- (~mo@2a01:4f8:141:3264::3) has joined #ceph
[7:28] * StormBP (~StormBP@109.195.66.120) Quit (Quit: qutIM: IRC plugin)
[7:28] * StormBP (~StormBP@109.195.66.120) has joined #ceph
[7:30] <StormBP> Hi! Does anyone speak Russian
[7:30] <StormBP> ?
[7:31] * dosaboy (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) Quit (Quit: Leaving.)
[7:41] <scuttlemonkey> hey stormbp: I can understand a little russian...as long as you don't mind answers in english
[7:44] <scuttlemonkey> but I have to leave in about 10 minutes
[7:53] <scuttlemonkey> ok, I'm off... ?????!
[7:53] <StormBP> scuttlemonkey: I'm interested in building a data store and trying to figure out whether ceph will suit me
[7:54] <StormBP> scuttlemonkey: maybe google translate?
[7:55] <StormBP> I have a large number of PC components and an even larger number of disks (some slightly used)
[7:56] <StormBP> and the task is to assemble all of this into a single store for long-term archives
[8:01] * scuttlemonkey (~scuttlemo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) Quit (Ping timeout: 480 seconds)
[8:06] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:14] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:14] * StormBP (~StormBP@109.195.66.120) Quit (Read error: Connection reset by peer)
[8:19] * StormBP (~StormBP@109.195.66.120) has joined #ceph
[8:22] * sleinen (~Adium@2001:620:0:25:c835:c285:a295:105a) has joined #ceph
[8:25] * tnt (~tnt@54.211-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[8:27] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Ping timeout: 480 seconds)
[8:42] * sleinen (~Adium@2001:620:0:25:c835:c285:a295:105a) Quit (Quit: Leaving.)
[8:47] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[8:48] * loicd (~loic@lvs-gateway1.teclib.net) has joined #ceph
[8:50] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[8:55] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[8:56] * ScOut3R (~ScOut3R@c83-249-233-227.bredband.comhem.se) has joined #ceph
[9:01] * xiaoxi (~xiaoxiche@134.134.137.75) has joined #ceph
[9:04] * ScOut3R (~ScOut3R@c83-249-233-227.bredband.comhem.se) Quit (Ping timeout: 480 seconds)
[9:05] * wogri (~wolf@nix.wogri.at) has joined #ceph
[9:11] * wogri (~wolf@nix.wogri.at) Quit (Quit: leaving)
[9:12] * wogri (~wolf@nix.wogri.at) has joined #ceph
[9:13] * wogri (~wolf@nix.wogri.at) Quit ()
[9:16] * wogri (~wolf@nix.wogri.at) has joined #ceph
[9:20] * loicd1 (~loic@lvs-gateway1.teclib.net) has joined #ceph
[9:20] * loicd (~loic@lvs-gateway1.teclib.net) Quit (Read error: No route to host)
[9:20] * capri (~capri@212.218.127.222) has joined #ceph
[9:21] * jtang1 (~jtang@79.97.135.214) has joined #ceph
[9:23] * tnt (~tnt@54.211-67-87.adsl-dyn.isp.belgacom.be) Quit (Read error: Operation timed out)
[9:23] * wogri (~wolf@nix.wogri.at) Quit (Quit: leaving)
[9:28] * gerard_dethier (~Thunderbi@85.234.217.115.static.edpnet.net) has joined #ceph
[9:31] * wogri (~wolf@nix.wogri.at) has joined #ceph
[9:32] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:32] * wogri (~wolf@nix.wogri.at) Quit ()
[9:32] * wogri (~wolf@nix.wogri.at) has joined #ceph
[9:34] * vipr (~root@78-23-115-231.access.telenet.be) has joined #ceph
[9:34] * l0nk (~alex@83.167.43.235) has joined #ceph
[9:34] * wogri (~wolf@nix.wogri.at) Quit ()
[9:35] * wogri (~wogri@ro.risc.uni-linz.ac.at) has joined #ceph
[9:35] * wogri (~wogri@ro.risc.uni-linz.ac.at) Quit ()
[9:36] * vipr (~root@78-23-115-231.access.telenet.be) Quit ()
[9:37] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[9:37] * vipr (~root@78-23-115-231.access.telenet.be) has joined #ceph
[9:38] * vipr (~root@78-23-115-231.access.telenet.be) Quit ()
[9:39] * vipr (~root@78-23-115-231.access.telenet.be) has joined #ceph
[9:40] * vipr (~root@78-23-115-231.access.telenet.be) Quit ()
[9:40] * vipr (~root@78-23-115-231.access.telenet.be) has joined #ceph
[9:41] * wogri (~wolf@nix.wogri.at) has joined #ceph
[9:41] * wogri (~wolf@nix.wogri.at) Quit ()
[9:42] * wogri (~wolf@nix.wogri.at) has joined #ceph
[9:45] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:45] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:48] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[9:49] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:50] * ScOut3R (~ScOut3R@c83-249-233-227.bredband.comhem.se) has joined #ceph
[9:56] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[9:57] * jtang1 (~jtang@79.97.135.214) Quit (Quit: Leaving.)
[9:58] * rtm (~rtm@14.140.216.194) has joined #ceph
[10:00] * vipr (~root@78-23-115-231.access.telenet.be) Quit (Quit: leaving)
[10:00] * vipr (~root@78-23-115-231.access.telenet.be) has joined #ceph
[10:00] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) has joined #ceph
[10:01] <sig_wall> Hello
[10:01] <sig_wall> I have a problem. When recovering cluster, there is a lot of slow requests > 30 sec.
[10:01] <rtm> Hi all, I am running ceph 0.56.3 and have bumped into this issue. sudo rbd map foo --pool rtm --name client.admin --keyring /etc/ceph/ceph.keyring is not completing. However, if I Ctrl-C and run rbd showmapped, I see the devices listed, and if I try to unmap one using rbd unmap /dev/rbd/rbd/foo, it says foo is not a block device.
[10:01] <rtm> can someone help please
[10:02] * vipr (~root@78-23-115-231.access.telenet.be) Quit ()
[10:02] * vipr (~root@78-23-115-231.access.telenet.be) has joined #ceph
[10:02] <sig_wall> Can anyone tell why slow requests usually happen during recovery?
[10:03] * vipr (~root@78-23-115-231.access.telenet.be) Quit ()
[10:03] * vipr (~root@78-23-115-231.access.telenet.be) has joined #ceph
[10:05] <sig_wall> As I see in the source code, PGs are not blocked for a long time.
[10:07] * xiaoxi (~xiaoxiche@134.134.137.75) Quit (Ping timeout: 480 seconds)
[10:08] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) Quit (Quit: For Sale: Parachute. Only used once, never opened, small stain.)
[10:08] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[10:08] <wogri> sig_wall: there is an option that defines how many backfills are allowed to happen simultaneously. maybe this helps.
[10:11] <absynth> recovery_max_backfills, specifically
[10:12] <absynth> if you see slow requests, do
[10:12] <absynth> ceph osd tell \* injectargs '--osd-recovery-max-active 0'
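The runtime knob absynth injects above can also be set persistently in ceph.conf; a hedged sketch (option names from the 0.56-era OSD config, values illustrative rather than recommendations):

```ini
; Illustrative values: throttle recovery/backfill so client I/O keeps priority.
[osd]
    osd max backfills = 1
    osd recovery max active = 1
```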
[10:12] * Morg (b2f95a11@ircip2.mibbit.com) has joined #ceph
[10:12] <sig_wall> that helps
[10:14] <sig_wall> but if I set this parameter > 0 (e.g. 1) then all user i/o slows to 30 sec latency and 500kb/s overall throughput.
[10:15] <absynth> then there is something very very wrong with your setup
[10:15] <sig_wall> that's not good for production
[10:16] <absynth> recovery with 1 thread should never impact the other i/o
[10:17] <absynth> can you see which OSD is issuing the slow requests?
[10:17] <sig_wall> all 30 osds.
[10:18] <absynth> that was not really an answer to my question
[10:18] <absynth> not when, _which_
[10:20] <sig_wall> in log I see slow request from most of osds... osd.2, osd.20, osd.26, osd.14. I can provide a full log
[10:21] <vipr> What log file is that?
[10:21] <sig_wall> ceph -w
[10:22] <absynth> hm
[10:22] <absynth> are you sure your OSDs are configured correctly and not too weak to sustain the I/O?
[10:23] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[10:24] * LeaChim (~LeaChim@b0faff75.bb.sky.com) has joined #ceph
[10:26] <vipr> I also have a question
[10:26] <vipr> I'm very new to ceph, and I am running a little testsystem at the moment
[10:27] <sig_wall> absynth: osds are linked with infiniband in DC, iowait is about 20%
[10:27] <vipr> it has 4 servers: server1(rbd client, MON) server2(3OSD) server3(MON,2OSD) server4(MON,2OSD)
[10:27] <sig_wall> during recovery
[10:27] <vipr> but when I try to copy a basic bigfile, to the rbd mount, I only see write activity on 3 OSDs
[10:27] <vipr> as u can see here http://i.imgur.com/tsZWPhD.png
[10:27] <vipr> is this normal behaviour?
[10:30] <absynth> sig_wall: you have 20% iowait on the OSDs?
[10:30] <absynth> sig_wall: that is *way* too much
[10:30] <absynth> something is wrong with your setup
[10:30] * stxShadow (~jens@p4FD07FFD.dip.t-dialin.net) has joined #ceph
[10:31] <vipr> and i'm also experiencing high iowait on osds
[10:32] <absynth> vipr: for the first part - yes, that is normal behavior, since the objects are not uniformly distributed across OSDs
[10:33] <absynth> if a file is X MB, it's basically split into Y 4MB chunks, with Y = X/4 (obviously)
[10:33] <sig_wall> absynth: I thought that 100% is "too much"
[10:33] <absynth> these are then distributed across the OSDs according to your crushmap
[10:33] <absynth> since you have 3 machines with OSDs, the objects will be distributed across three OSDs, with replicas being on another 3 OSDs
[10:34] <absynth> so, for object 1, you will have it on OSD (1,2), obj2 will be on (2,3), obj3 will be on (3,1) etc.
[10:34] <absynth> since the replica of an object should (in the current default crush configuration) never be on the same host as the original
[10:34] <absynth> make that "should" a "must"
[10:35] <absynth> when writing, you will see writes only to the primary OSDs, which might be the same 3 (because the placement group that contains your file is only on these 3)
[10:35] <absynth> so, basically, i would consider this behavior as normal. when you put more files on ceph, you should see more uniform write patterns
[10:35] <absynth> sig_wall: regarding iowait, i personally think 1% is too much, but 20% means that your server spends 20% of its time waiting for the i/o subsystem
[10:36] <absynth> and that is *not* good
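The striping arithmetic absynth walks through (Y = X/4 chunks of 4 MB each) can be sketched as a toy calculation; this assumes only the default 4 MB object size and says nothing about where CRUSH actually places each object:

```python
import math

def rbd_object_count(file_size_mb, object_size_mb=4):
    """Number of RADOS objects a file of the given size is striped into,
    assuming the default 4 MB object size (Y = ceil(X / 4))."""
    return math.ceil(file_size_mb / object_size_mb)

# A 160 GB file (like the copy test discussed in this log) becomes
# ~40960 4 MB objects, so with enough objects the write load spreads
# across OSDs instead of hitting the same few primaries.
print(rbd_object_count(160 * 1024))  # 40960
```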
[10:36] <vipr> Aha, interesting, thank you for this clear information!
[10:40] <sig_wall> absynth: I don't think that should lead to timeouts... anyway, 80% of the time the osd is doing something that does not help process the io queue.
[10:41] <vipr> I'm trying to troubleshoot the high iowait, but I don't really know the best way to do this.
[10:42] <vipr> r b swpd free buff cache si so bi bo in cs us sy id wa
[10:42] <vipr> 0 2 0 127152 5232 7195876 0 0 11 2232 266 433 0 1 95 4
[10:42] <vipr> 0 1 0 119456 5232 7195180 0 0 39 51275 5561 9040 2 3 69 26
[10:42] <vipr> vmstat output, I don't have a swap partition, and that machine has 8GB of ram
[10:43] <stxShadow> 17 0 0 407752 50180 35740048 0 0 40 232 184412 306661 29 23 47 1
[10:43] <stxShadow> 23 0 0 410124 50180 35740148 0 0 20 120 181416 305850 25 22 52 1
[10:43] <stxShadow> 20 0 0 411616 50180 35740292 0 0 44 1008 187524 314465 28 24 48 0
[10:43] <stxShadow> 12 0 0 412068 50184 35740364 0 0 24 4492 186271 313845 26 23 51 0
[10:43] <stxShadow> -> our system
[10:43] <stxShadow> with nearly 100 VMs per node
[10:44] <absynth> my guess is: your infiniband setup is borked
[10:45] <vipr> mmh
[10:45] <vipr> I have no idea what you mean
[10:45] <vipr> but the output is simillar on all machines
[10:46] <absynth> sorry, i confused you two
[10:47] <absynth> vipr: basically, your system is idling
[10:47] <absynth> nearly no context switches, no running processes
[10:47] <absynth> but you still have 25% iowait
[10:47] <absynth> is that inside a VM or is that on an OSD host?
[10:47] <vipr> osd host
[10:47] <vipr> no vms, just basic test setup 4 debian hosts
[10:47] <absynth> ok, then something's weird there
[10:48] <absynth> can you check which disk/subsystem is issuing the iowait?
[10:48] <absynth> iostat -dx
[10:48] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[10:49] <vipr> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
[10:49] <vipr> sda 0.09 1.61 1.92 20.24 36.79 4451.57 405.05 1.07 48.48 20.08 51.17 7.80 17.29
[10:49] <vipr> sdb 0.00 1.00 0.73 17.63 4.56 3839.96 418.66 0.72 38.92 27.40 39.40 7.90 14.50
[10:49] <vipr> sdc 0.00 1.01 0.92 20.51 6.29 4560.58 426.30 1.28 59.56 33.38 60.73 8.26 17.70
[10:49] <vipr> machine with 3 OSDs
[10:49] <vipr> they all seem to have issues
[10:50] <vipr> btw (sda 10g partition OS, rest xfs partition osd)
[10:50] <absynth> hm, waiting for i/o 50% of the time?
[10:50] <absynth> that is... suboptimal
[10:50] <vipr> understatement :-)
[10:50] <absynth> what disk controller is that?
[10:51] <absynth> overly conservative write policy? write cache disabled?
[10:51] <vipr> 00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
[10:51] <absynth> okay, onboard stuff
[10:51] <vipr> I'm not using a raid controller for the moment
[10:51] <vipr> yes
[10:51] <vipr> But that shouldn't have such low performance anyway?
[10:51] <absynth> dunno, i have never touched them
[10:52] <absynth> let's see if there's benchmarks for the ICH9
[10:52] <absynth> hrm, no, nhm only benchmarked add-on controllers
[10:53] * scuttlemonkey (~scuttlemo@HSI-KBW-46-237-220-11.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[10:53] * ChanServ sets mode +o scuttlemonkey
[10:53] <sig_wall> can using xfs instead of btrfs seriously affect performance?
[10:53] <absynth> scuttlemonkey: you in .de?
[10:53] * loicd (~loic@lvs-gateway1.teclib.net) has joined #ceph
[10:53] * loicd1 (~loic@lvs-gateway1.teclib.net) Quit (Quit: Leaving.)
[10:53] <absynth> sig_wall: no, we use xfs too
[10:53] <absynth> it's no problem
[10:53] <absynth> what kernel is that, vipr?
[10:53] <joao> absynth, world hosting days
[10:53] <absynth> oh yeah, that time of the year again
[10:54] <absynth> you too, joao?
[10:54] <joao> not this year
[10:54] <absynth> lucky you
[10:54] <scuttlemonkey> absynth: yeah, at WHD
[10:54] <absynth> scuttlemonkey: say hi to everyone from me
[10:54] <vipr> Linux 3.2.0-4-amd64
[10:54] <scuttlemonkey> hehe
[10:55] <absynth> vipr: that's stock debian squeeze or something?
[10:55] <absynth> or wheezy
[10:55] <vipr> wheezy
[10:55] <vipr> only server2
[10:55] <vipr> but the other servers squeeze, with 3.2 bpo
[10:55] <vipr> so ~ same kernel
[10:55] <absynth> well, i can't pinpoint why you have such a massive iowait, but i think it is not a ceph issue
[10:56] <absynth> what i would do is upgrade to a recent kernel (~3.7 or such) and check if i/o performance gets better
[10:56] <vipr> ok will do
[10:56] <vipr> btw
[10:56] <vipr> server 4 seems to have an even nastier case of the waits
[10:56] <vipr> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
[10:56] <vipr> sda 0.07 1.67 1.25 24.10 32.73 11531.64 456.10 10.43 410.96 12.43 31.52
[10:56] <vipr> sdb 0.00 1.36 1.09 22.24 87.99 11696.56 505.17 5.09 217.86 12.69 29.60
[10:58] <tnt> what is the cluster doing mostly ? I mean lots of random IO could certainly generate high io wait.
[10:59] <vipr> for the moment just copying a 160gb file from server 1, which is only a MON, to an rbd mounted block device
[10:59] <vipr> doesn't seem very random :p
[10:59] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[11:00] <tnt> No indeed. What's your RBD client ?
[11:02] <absynth> oh, interesting, CERN is playing with ceph
[11:02] <tnt> (i.e kernel ? and if yes, which version. or qemu-rbd or ...)
[11:02] <absynth> tnt: i guess kernel client, since he is not doing any virt yet
[11:03] * scuttlemonkey (~scuttlemo@HSI-KBW-46-237-220-11.hsi.kabel-badenwuerttemberg.de) Quit (Ping timeout: 480 seconds)
[11:03] <tnt> ok. and using 3.2, definitely not a good idea. Client needs to be more recent than that. I'm using a 3.6.x with all ceph patch backported.
[11:04] <absynth> seems WHD wifi is filling up ;)
[11:11] * diegows (~diegows@190.190.2.126) has joined #ceph
[11:15] <rtm> I am trying to do the 5-minute quick start - a doubt: does the client need password-less ssh access to the server?
[11:15] * KindTwo (KindOne@h246.2.40.162.dynamic.ip.windstream.net) has joined #ceph
[11:16] * tryggvil_ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[11:16] * mcclurmc (~mcclurmc@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) Quit (Ping timeout: 480 seconds)
[11:18] * morse_ (~morse@supercomputing.univpm.it) has joined #ceph
[11:19] * psieklFH (psiekl@wombat.eu.org) has joined #ceph
[11:19] * morse (~morse@supercomputing.univpm.it) Quit (Read error: Connection reset by peer)
[11:20] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * LeaChim (~LeaChim@b0faff75.bb.sky.com) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * __jt___ (~james@rhyolite.bx.mathcs.emory.edu) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * raso (~raso@deb-multimedia.org) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * jantje_ (~jan@paranoid.nl) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * Anticimex (anticimex@netforce.csbnet.se) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * sileht (~sileht@sileht.net) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * phantomcircuit (~phantomci@covertinferno.org) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * psiekl (psiekl@wombat.eu.org) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * scheuk (~scheuk@204.246.67.78) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * dec (~dec@ec2-54-251-62-253.ap-southeast-1.compute.amazonaws.com) Quit (resistance.oftc.net larich.oftc.net)
[11:20] * tryggvil_ is now known as tryggvil
[11:20] * KindTwo is now known as KindOne
[11:22] * jantje (~jan@paranoid.nl) has joined #ceph
[11:25] * LeaChim (~LeaChim@b0faff75.bb.sky.com) has joined #ceph
[11:25] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[11:25] * __jt___ (~james@rhyolite.bx.mathcs.emory.edu) has joined #ceph
[11:25] * raso (~raso@deb-multimedia.org) has joined #ceph
[11:25] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[11:25] * Anticimex (anticimex@netforce.csbnet.se) has joined #ceph
[11:25] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) has joined #ceph
[11:25] * sileht (~sileht@sileht.net) has joined #ceph
[11:25] * phantomcircuit (~phantomci@covertinferno.org) has joined #ceph
[11:25] * dec (~dec@ec2-54-251-62-253.ap-southeast-1.compute.amazonaws.com) has joined #ceph
[11:25] * scheuk (~scheuk@204.246.67.78) has joined #ceph
[11:26] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[11:27] * stacker666 (~stacker66@215.pool85-58-189.dynamic.orange.es) has joined #ceph
[11:28] <stacker666> hi
[11:29] <stxShadow> hi
[11:29] <stacker666> can anybody help me? my mds doesn't start when i enable authx
[11:29] <stacker666> it says: 2013-03-19 11:18:30.864074 7fbd1a8f3700 1 -- 192.168.2.51:6806/13913 <== mon.0 192.168.2.51:6789/0 3 ==== auth_reply(proto 2 -1 Operation not permitted) v1 ==== 24+0+0 (2942628883 0 0) 0x2adf200 con 0x2abd580
[11:29] <stacker666> hi stxShadow
[11:30] <stacker666> if i disable authx all works well
[11:31] <stacker666> ceph version 0.56.3 (6eb7e15a4783b122e9b0c85ea9ba064145958aa5)
[11:32] * eschenal (~eschnou@85.234.217.115.static.edpnet.net) has joined #ceph
[11:32] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[11:32] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) has joined #ceph
[11:34] * diegows (~diegows@190.190.2.126) Quit (Read error: Operation timed out)
[11:34] <absynth> i saw that before, but i don't remember the solution
[11:35] <vipr> stacker666: from what machine are you issuing the start command?
[11:36] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) Quit (Ping timeout: 480 seconds)
[11:36] <stacker666> i have 2 nodes. only 1 has the mds and mon. I started the command from the mon/mds node
[11:37] <vipr> maybe it has something to do with your .keyring file not being in /etc/ceph?
[11:37] <stacker666> yes, it is in my /etc/ceph
[11:37] <stacker666> generated with mkceph
[11:38] <vipr> and you can run ceph status?
[11:38] <vipr> (from that machine)
[11:38] <stacker666> health HEALTH_WARN mds a is laggy
[11:38] <stacker666> monmap e1: 1 mons at {a=192.168.2.51:6789/0}, election epoch 1, quorum 0 a
[11:38] <stacker666> osdmap e54: 4 osds: 4 up, 4 in
[11:38] <stacker666> pgmap v439: 960 pgs: 960 active+clean; 1004 MB data, 62817 MB used, 6899 GB / 7333 GB avail
[11:38] <stacker666> mdsmap e223: 1/1/1 up {0=a=up:active(laggy or crashed)}
[11:38] <stacker666> 2013-03-19 11:38:17.883757 7fd1f55fd700 1 -- 192.168.2.51:0/14519 <== mon.0 192.168.2.51:6789/0 7 ==== mon_command_ack([status]=0 health HEALTH_WARN mds a is laggy
[11:38] <stacker666> monmap e1: 1 mons at {a=192.168.2.51:6789/0}, election epoch 1, quorum 0 a
[11:38] <stacker666> osdmap e54: 4 osds: 4 up, 4 in
[11:38] <stacker666> pgmap v439: 960 pgs: 960 active+clean; 1004 MB data, 62817 MB used, 6899 GB / 7333 GB avail
[11:38] <stacker666> mdsmap e223: 1/1/1 up {0=a=up:active(laggy or crashed)}
[11:38] <stacker666> v0) v1 ==== 344+0+0 (1598216833 0 0) 0x7fd1e0001430 con 0x177a750
[11:38] <stacker666> 2013-03-19 11:38:17.883874 7fd1f9d5e780 1 -- 192.168.2.51:0/14519 mark_down_all
[11:38] <stacker666> 2013-03-19 11:38:17.883975 7fd1f9d5e780 1 -- 192.168.2.51:0/14519 shutdown complete.
[11:39] <vipr> I have no clue... Did you add the mds afterwards?
[11:39] <stacker666> nope
[11:39] <joao> stacker666, my guess is that your mon doesn't have your mds key, your mds doesn't have the key, or both
[11:39] <stacker666> mmmmmm
[11:40] <stacker666> one moment
[11:40] <stacker666> i will try one thing
[11:40] <stacker666> probably you're right
[11:41] <stacker666> I remember that authx was not enabled when I launched the command
[11:42] <stacker666> mkceph
[11:42] <stacker666> i will launch it again with authx enabled
[11:42] <vipr> so there was no keyring generated for the mds?
[11:44] <stacker666> === mds.a ===
[11:44] <stacker666> mds.a: not running.
[11:44] <stacker666> nothing
[11:45] <stacker666> something miss in the ceph.conf probably?
[11:45] <stacker666> vipr: yes but something goes wrong in the auth
[11:46] <stacker666> if you want to see the ceph.conf
[11:46] <stacker666> no problem
[11:47] <vipr> show me
[11:47] <vipr> pastebin maybe
[11:48] <stacker666> ok
[11:50] <stacker666> http://pastebin.com/p5BBeqwb
[11:51] <stacker666> thanks in advance :)
[11:51] <vipr> it looks a bit different from my config
[11:51] <vipr> but not really
[11:51] <stacker666> can you show me your conf please?
[11:51] <vipr> I just don't specify my keyring in the global section
[11:52] <stacker666> ahp
[11:52] <vipr> but that shouldn't be the problem I think
[11:52] <stacker666> :/
[11:52] <joao> that should be fine if your keyring does have the keys anyway
[11:52] <vipr> I am trying to help, but I'm also only just starting with ceph :-)
[11:52] <stacker666> jeje
[11:52] <stacker666> no problem
[11:53] <stacker666> with previous versions of ceph i didn't see this error
[11:53] <joao> you'll have to make sure that /etc/ceph/keyring.admin has the keys for mds, osds, client, etc
[11:53] <joao> otherwise, you'll end up with services being unable to authenticate
[11:54] <joao> but from the name of the file, I'd say that you'll only find the client's admin key in there
[11:54] <rtm> @joao does the filename have to be keyring.admin?
[11:54] <cephalobot> rtm: Error: "joao" is not a valid command.
[11:54] <joao> rtm, the default file, iirc, is /etc/ceph/ceph.keyring; but the filename can really be whatever you fancy, as long as properly configured
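As joao and stacker666 discuss, a keyring is just a small INI-style file with one section per entity. A quick, hedged way to list which entities a keyring actually contains (the sample keys below are fake):

```python
import re

def keyring_entities(text):
    """Entity names ([client.admin], [mds.a], ...) found in a ceph
    keyring, which uses an INI-like format: one section per entity."""
    return re.findall(r'^\[([^\]]+)\]', text, flags=re.MULTILINE)

sample = """[client.admin]
    key = FakeKeyForIllustrationOnly==
[mds.a]
    key = AnotherFakeKey==
"""
print(keyring_entities(sample))  # ['client.admin', 'mds.a']
```

If the mds entity is missing from the list, the "Operation not permitted" auth failures above are the expected outcome.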
[11:55] <vipr> I have the same joao
[11:55] <vipr> only client.admin key in ceph.keyring, but I have no problem communicating...
[11:55] <vipr> doesn't it get propagated automatically when you run mkcephfs?
[11:55] <rtm> vipr, do you have password less ssh between the client and server?
[11:55] <stacker666> mkceph generate the following files
[11:55] <vipr> yes
[11:56] <joao> vipr, I'm assuming you don't have mds, osd and the likes running on that machine then?
[11:56] <stacker666> keyring.admin keyring.osd.0 keyring.osd.1
[11:56] <vipr> indeed, not on that machine, only a mon
[11:56] <stacker666> keyring.admin doesn't propagate
[11:56] <joao> stacker666, that's okay if you have a keyring = foo on your osds entries
[11:56] <stacker666> i need to copy to another node
[11:56] <rtm> vipr, was your 'yes' for the password-less question?
[11:57] <vipr> rtm: yes
[11:57] <rtm> thanks
[11:58] <stacker666> one question. do i need only one mds and mon per cluster?
[11:58] <stacker666> or does the other node also need its mds service running
[11:58] * jtang1 (~jtang@2001:770:10:500:a414:3e28:d4b8:688d) has joined #ceph
[12:00] <joao> stacker666, technically you only need one of each to get a cluster working
[12:01] * jtang2 (~jtang@2001:770:10:500:8559:7d0d:bc75:466d) has joined #ceph
[12:01] <joao> you can have a cluster composed of one mon, one mds and one osd
[12:01] <joao> not sure if that's what you really want, though
[12:01] <stxShadow> you should always use an odd number for MDS / MON
[12:01] <stxShadow> -> to get quorum
[12:01] <joao> you should at least throw a couple more monitors and osds to the setup
[12:01] <stxShadow> so 1 is ok
[12:01] <stxShadow> or 3
[12:01] <stxShadow> 5
[12:01] <stxShadow> etc
[12:02] <stacker666> ok ok
[12:02] <stacker666> i have 1 mon, 1 mds and a lot of osd's
[12:04] <stacker666> i launch the mkceph command like this: mkcephfs -k /etc/ceph/keyring.admin -c /etc/ceph/ceph.conf -a
[12:04] <stacker666> correct?
[12:06] * jtang1 (~jtang@2001:770:10:500:a414:3e28:d4b8:688d) Quit (Ping timeout: 480 seconds)
[12:07] <stacker666> i will launch the command without reference to keyring.admin
[12:08] <stacker666> YEAH!
[12:08] <stacker666> WORKING
[12:08] <barryo> i modified my crush map, ceph did a lot of backfilling, now that the backfilling has finished ceph health is reporting "HEALTH_WARN 60 pgs stuck unclean", how do i go about unsticking the pgs?
[12:10] <stacker666> if I don't pass any reference to my admin keyring, mkcephfs generates a "keyring" file and uses it correctly. Simply put, there's no need to reference the keyring in either the mkcephfs command or ceph.conf
[12:11] <rtm> has anyone seen this before?
[12:11] <rtm> rbd showmapped
[12:11] <rtm> id pool image snap device
[12:11] <rtm> 0 rbd foo - /dev/rbd0
[12:11] <rtm> 1 rbd foo - /dev/rbd1
[12:11] <rtm> 2 rbd foo - /dev/rbd2
[12:11] <rtm> 3 rbd foo - /dev/rbd3
[12:11] <rtm> 4 rbd foo - /dev/rbd4
[12:11] <rtm> i am not able to unmap them
[12:11] <rtm> it says invalid block device
[12:11] <stacker666> tnx for your time!
[12:12] <rtm> rbd unmap /dev/rbd/rbd/foo
[12:15] <vipr> try
[12:16] <vipr> rbd unmap /dev/rbd0
[12:17] <rtm> rbd: /dev/rbd0 is not a block device
[12:17] <rtm> rbd: remove failed: (22) Invalid argument
[12:17] <rtm> this is how i created it: sudo rbd map bar --pool rbd --name client.admin
[12:18] <rtm> however the command never completed the mapping.. I had to Ctrl-C
[12:18] <rtm> the map command hung forever...
[12:19] <rtm> but showmapped keeps listing an additional device
[12:19] <rtm> id pool image snap device
[12:19] <rtm> 0 rbd foo - /dev/rbd0
[12:19] <rtm> 1 rbd foo - /dev/rbd1
[12:19] <rtm> 2 rbd foo - /dev/rbd2
[12:19] <rtm> 3 rbd foo - /dev/rbd3
[12:19] <rtm> 4 rbd foo - /dev/rbd4
[12:19] <rtm> 5 rbd bar - /dev/rbd5
[12:19] <vipr> strange
[12:20] <vipr> no idea, I think i unmapped it like that
[12:20] <rtm> OK, thanks, what is your kernel version?
[12:22] <vipr> 3.2
[12:24] <rtm> Thanks, vipr
[12:26] <vipr> np :)
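For reference, the usual map/showmapped/unmap cycle rtm was attempting looks like this (image and device names are illustrative; a mapping left half-created by an interrupted `rbd map`, as above, may not unmap cleanly until the cluster is healthy again):

```
# map an image and find the device it was assigned
sudo rbd map foo --pool rbd --name client.admin
rbd showmapped                 # lists id / pool / image / snap / device

# unmap using the device path showmapped reported
sudo rbd unmap /dev/rbd0
```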
[12:26] <vipr> What filesystem is best for ssd journal?
[12:27] * mcclurmc (~mcclurmc@firewall.ctxuk.citrix.com) has joined #ceph
[12:27] <stxShadow> you dont need to specify a filesystem .... just a partition for the osd journal
[12:28] <vipr> aha
[12:32] <vipr> so how do i specify?
[12:32] <vipr> e.g. osd journal = /dev/sdc1
[12:32] <vipr> ?
[12:36] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[12:36] <vipr> I took a different approach and created symbolic links to /dev/sdc1 named journal in the osd-$i dirs
[12:36] <vipr> mkcephfs succeeded
[12:36] <vipr> let's see if it runs
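For the config-file route vipr asked about, a journal can also be pointed at an SSD partition per OSD in ceph.conf instead of symlinking. A sketch, with illustrative hostnames and device paths:

```ini
[osd]
    ; default: a journal file inside each osd's data directory
    osd journal = /var/lib/ceph/osd/$cluster-$id/journal
    osd journal size = 1000        ; MB, used for file-based journals

[osd.0]
    host = nodeA
    ; override: this osd journals straight to a raw SSD partition
    osd journal = /dev/sdc1
```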
[12:36] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) has joined #ceph
[12:37] * jtangwk (~Adium@2001:770:10:500:8576:dd71:b404:785) has joined #ceph
[12:43] <rtm> generally how long does it take to map an rdb image?
[12:43] <rtm> *rbd
[12:44] <vipr> not long
[12:45] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[12:45] <rtm> less than a minute?
[12:46] <vipr> took me 1 sec
[12:46] <rtm> something's definitely wrong here.. :(
[12:51] * sleinen (~Adium@2001:620:0:25:c835:c285:a295:105a) has joined #ceph
[12:52] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[12:52] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) has joined #ceph
[12:58] * sleinen (~Adium@2001:620:0:25:c835:c285:a295:105a) Quit (Quit: Leaving.)
[12:58] * sleinen (~Adium@130.59.94.205) has joined #ceph
[13:01] * sleinen1 (~Adium@2001:620:0:25:187b:f393:5a74:e319) has joined #ceph
[13:08] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[13:08] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) has joined #ceph
[13:08] * sleinen (~Adium@130.59.94.205) Quit (Ping timeout: 480 seconds)
[13:18] * jtangwk (~Adium@2001:770:10:500:8576:dd71:b404:785) Quit (Quit: Leaving.)
[13:19] * jtangwk (~Adium@2001:770:10:500:8576:dd71:b404:785) has joined #ceph
[13:24] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[13:24] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) has joined #ceph
[13:39] * markbby (~Adium@168.94.245.2) has joined #ceph
[13:40] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[13:40] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) has joined #ceph
[13:42] * sleinen1 (~Adium@2001:620:0:25:187b:f393:5a74:e319) Quit (Quit: Leaving.)
[13:42] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[13:43] * morse_ (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[13:50] * sleinen (~Adium@130.59.94.205) has joined #ceph
[13:51] * sleinen1 (~Adium@2001:620:0:25:e8cb:2e34:c03:8300) has joined #ceph
[13:57] * nhm_ (~nh@184-97-137-60.mpls.qwest.net) has joined #ceph
[13:57] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[13:58] * sleinen (~Adium@130.59.94.205) Quit (Ping timeout: 480 seconds)
[13:59] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[13:59] * nhm (~nh@184-97-130-55.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[14:00] * rtm (~rtm@14.140.216.194) Quit (Quit: leaving)
[14:05] <barryo> I've been experimenting with my crush map this morning in an attempt to set replication at 3, I'm pretty sure I've set it up correctly yet I'm seeing "HEALTH_WARN 1 pgs degraded; 94 pgs stuck unclean"
[14:08] <barryo> how can i go about diagnosing this?
[14:08] <absynth> looks like the replication was set correctly, but did not actually go through yet
[14:08] <absynth> not sure how to diagnose
[14:08] <barryo> it's been sitting like that for a while now
[14:08] <barryo> pgs: 1 active, 610 active+clean, 92 active+remapped, 1 active+degraded
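A few commands that usually narrow down stuck pgs like these (the pgid in the query example is illustrative):

```
ceph health detail              # lists the individual stuck/degraded pgs
ceph pg dump_stuck unclean      # pgid, state, and the osds each pg maps to
ceph pg 0.3f query              # full peering state for one pg (example pgid)
ceph osd tree                   # check that crush sees the osds you expect
```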
[14:12] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[14:13] <vipr> I just built a new cluster from scratch
[14:13] <absynth> did it rebalance before?
[14:13] <vipr> is it normal that it keeps creating new pgmaps?
[14:13] <vipr> i'm already at pgmap v464 and it keeps going
[14:14] <absynth> if you copy data on it, yeah, i guess that's normal :)
[14:14] <absynth> PGs are where the data sits
[14:14] <vipr> i'm not copying anything yet :D
[14:14] <vipr> but it seems to have stopped
[14:15] <vipr> I now have an ssd journal on one of the machines
[14:15] <vipr> and on that machine the await values have dropped substantially
[14:15] <vipr> from ~400 to ~40
[14:15] <absynth> no shit
[14:15] <absynth> :D
[14:15] <vipr> but there's still wait :p
[14:21] * paul_mezo (~pkilar@38.122.241.26) has joined #ceph
[14:26] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[14:37] * scuttlemonkey (~scuttlemo@HSI-KBW-46-237-220-11.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[14:37] * ChanServ sets mode +o scuttlemonkey
[14:39] <barryo> I just pushed a crush map into my cluster and seem to have broken it
[14:40] <barryo> nevermind, fixed it :)
[14:41] <absynth> that's why it's called crashmap
[14:41] <absynth> err, scratch that ;)
[14:41] <barryo> i just pushed the old one back in and restarted the cluster, I'm not sure if that's what fixed it or not
[14:48] * wer (~wer@84.sub-70-192-192.myvzw.com) has joined #ceph
[14:53] <stxShadow> barryo, -> have you tried to set the replication level with "ceph osd pool set xxx size 3" ?
[14:53] <stxShadow> should work without touching the crush map
[14:55] <barryo> I think I've figured it out now; I had only set size 3 on one of my pools
[14:58] <stxShadow> ok ;)
[15:00] <barryo> i hope that'll fix it anyway, the test cluster under my desk sounds like it's about to take off so it must be doing something ;)
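Spelled out, stxShadow's suggestion applied to every pool (the pool names here are the defaults; yours may differ):

```
ceph osd pool set data size 3
ceph osd pool set metadata size 3
ceph osd pool set rbd size 3
ceph osd dump | grep 'rep size'   # verify the replication level per pool
```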
[15:02] * markbby (~Adium@168.94.245.6) has joined #ceph
[15:04] * loicd (~loic@lvs-gateway1.teclib.net) Quit (Ping timeout: 480 seconds)
[15:08] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[15:09] * markbby1 (~Adium@168.94.245.6) has joined #ceph
[15:11] * paul_mezo (~pkilar@38.122.241.26) Quit (Remote host closed the connection)
[15:13] * gaveen (~gaveen@123.231.13.12) has joined #ceph
[15:15] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:15] * markbby (~Adium@168.94.245.6) Quit (Ping timeout: 480 seconds)
[15:17] * vata (~vata@2607:fad8:4:6:1a8:d976:3b01:35f5) has joined #ceph
[15:24] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Quit: Ex-Chat)
[15:25] * scuttlemonkey (~scuttlemo@HSI-KBW-46-237-220-11.hsi.kabel-badenwuerttemberg.de) Quit (Ping timeout: 480 seconds)
[15:33] * wer (~wer@84.sub-70-192-192.myvzw.com) Quit (Ping timeout: 480 seconds)
[15:36] * jlogan (~Thunderbi@2600:c00:3010:1:74e2:3ecb:40cd:3b85) has joined #ceph
[15:37] * scuttlemonkey (~scuttlemo@HSI-KBW-46-237-220-11.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[15:37] * ChanServ sets mode +o scuttlemonkey
[15:41] * gerard_dethier (~Thunderbi@85.234.217.115.static.edpnet.net) Quit (Ping timeout: 480 seconds)
[15:42] * Morg (b2f95a11@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[15:43] * eschenal (~eschnou@85.234.217.115.static.edpnet.net) Quit (Ping timeout: 480 seconds)
[15:45] * scuttlemonkey (~scuttlemo@HSI-KBW-46-237-220-11.hsi.kabel-badenwuerttemberg.de) Quit (Ping timeout: 480 seconds)
[15:51] * doubleg (~doubleg@69.167.130.11) Quit (Quit: Lost terminal)
[15:56] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[15:56] * aliguori (~anthony@32.97.110.51) has joined #ceph
[16:05] * doubleg (~doubleg@69.167.130.11) has joined #ceph
[16:08] * gerard_dethier (~Thunderbi@85.234.217.115.static.edpnet.net) has joined #ceph
[16:13] * serverascode (~serverasc@wwdyn182.cs.ualberta.ca) has joined #ceph
[16:22] * agh (~agh@www.nowhere-else.org) has joined #ceph
[16:22] <agh> Hello to all
[16:22] <agh> I have a big question about split brain.
[16:23] <agh> Let's imagine i have a cluster with 4 nodes
[16:23] <agh> And i install 3 monitors on 3 differents nodes
[16:24] <agh> OK. But what if one monitor fails ? I will have only 2 monitors left, and the doc says to avoid even number of mon.
[16:24] <agh> so ?
[16:24] <agh> will it be problematic ?
[16:25] <joao> says to avoid having even numbers of monitors on the monmap
[16:25] <joao> if you have 3 monitors on the monmap and one of them is down/lost connectivity/wtv, then you should be just fine
[16:28] <joao> the biggest availability problem is when you have, say, 4 monitors, each pair on a different failure domain; if one of those failure domains goes boom, the monitors won't work to prevent split brain, hence why you should keep the monitors in odd numbers
[16:28] <joao> if you had 5 instead of 4, then you'd end up with one of the failure domains being able to continue with business as usual
[16:29] <janos> depends on which failure domain went boom
[16:29] <joao> yeah, of course
[16:29] <janos> i haven't worked through the problems on that. seems hairy
[16:29] <janos> been wondering if i should keep an odd number of failure domains too
[16:29] <janos> but i haven't really thought it through
[16:29] <agh> mm. ok i understand.
[16:30] <agh> thanks for your explanation
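The arithmetic behind joao's point is plain majority voting: the monitors stay up while more than half of those in the monmap can reach each other, so 3 mons tolerate 1 failure and 4 mons still only tolerate 1. A tiny sketch:

```shell
# monitors needed for quorum out of N in the monmap (simple majority)
quorum_needed() { echo $(( $1 / 2 + 1 )); }
# monitor failures tolerated before quorum is lost
failures_tolerated() { echo $(( ($1 - 1) / 2 )); }

quorum_needed 3        # 2 mons must be reachable
failures_tolerated 4   # still only 1, same as with 3 mons
```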
[16:30] <janos> i've been wondering if there should be some sort of standby offline (but map-updated) monitor facility
[16:31] <joao> janos, thought about something of the likes a while back
[16:31] <janos> some sort of observer mon that doesn't take place in quorum, but keeps up to speed
[16:31] <joao> yeah
[16:31] <janos> unless quorum cant be reached, then it steps in
[16:31] <joao> but I believe that the only correct way to turn it into active is through user intervention
[16:31] <joao> we can't automate that process
[16:32] <janos> kinda like the Vice President in US structure. has a split vote in congress if 50/50 otherwise no
[16:32] <joao> we shouldn't automate that process
[16:32] <janos> yeah
[16:32] <joao> otherwise, we could incur a split brain
[16:32] <janos> i wonder if the observer mon part could be done, but actually pulling the trigger would be human intervention
[16:33] <joao> imo, for such thing to work, the toggle of standby -> active should have to be delegated to an external mechanism, some monitoring tool or an administrator
[16:33] <joao> yeah
[16:33] <joao> the observer mon part could be done, yes; there would have to be some effort involved in that, assessing requirements and whatnot
[16:34] <janos> nod
[16:34] <janos> this sort of thing is always much more complex in reality than it looks on the surface
[16:34] <joao> and I'm sure there's something major I'm completely missing that could invalidate the approach, so this would have to be discussed (a lot) :p
[16:34] <janos> haha yep
[16:34] <janos> i know that feeling well
[16:36] * Jimmywong (~jimmywong@ext.cscinfo.com) has joined #ceph
[16:39] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[16:50] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[16:52] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[16:56] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[16:57] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[16:57] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[16:58] * gerard_dethier (~Thunderbi@85.234.217.115.static.edpnet.net) Quit (Quit: gerard_dethier)
[17:02] * Jimmywong (~jimmywong@ext.cscinfo.com) Quit (Quit: Leaving.)
[17:03] <tnt> Is there an explanation somewhere of how RGW maps operations to RADOS?
[17:06] * gregaf (~Adium@2607:f298:a:607:555a:b774:64de:6e97) Quit (Quit: Leaving.)
[17:07] * alram (~alram@38.122.20.226) has joined #ceph
[17:09] * gregaf (~Adium@2607:f298:a:607:555a:b774:64de:6e97) has joined #ceph
[17:14] * serverascode (~serverasc@wwdyn182.cs.ualberta.ca) Quit (Quit: Computer has gone to sleep.)
[17:16] * Jimmywong (~jgesty@ext.cscinfo.com) has joined #ceph
[17:19] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[17:20] <infernix> 2013-03-19 12:19:13.605415 7f234efd1700 0 -- :/1016334 >> 10.248.0.19:6789/0 pipe(0x1546420 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
[17:20] <infernix> any idea what this cryptic "fault" is?
[17:20] <infernix> everything seems to just work
[17:22] * diegows (~diegows@200.68.116.185) has joined #ceph
[17:22] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[17:22] <infernix> ah, a mon isn't up
[17:22] <joao> infernix, means the pipe was closed or something very similar
[17:22] <joao> it's nothing to worry about
[17:25] <nhm_> huh, uninstalling ceph just ate my ceph.conf file and everything in /var/lib/ceph/mon.
[17:25] <infernix> i also seem to have 18 stale pgs
[17:26] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[17:26] * agh (~agh@www.nowhere-else.org) has joined #ceph
[17:27] <infernix> and half my osds are down
[17:27] <infernix> that explains
[17:27] <nhm_> let's see if photorec can recover it
[17:31] <infernix> why does 'service ceph restart' do nothing?
[17:32] <infernix> not a single logfile is made
[17:35] * vipr_ (~root@78-23-114-100.access.telenet.be) has joined #ceph
[17:36] <infernix> well, rebooting the servers worked
[17:37] <infernix> but it still bothers me that neither 'service ceph restart' nor 'service ceph-mon-all restart' does anything
[17:37] <infernix> is that not the right way to restart things?
[17:39] * noob2 (~cjh@173.252.71.3) has joined #ceph
[17:42] * vipr (~root@78-23-115-231.access.telenet.be) Quit (Ping timeout: 480 seconds)
[17:43] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:43] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[17:43] * BManojlovic (~steki@91.195.39.5) Quit ()
[17:44] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[17:44] * jtang1 (~jtang@2001:770:10:500:dc06:6579:933c:6bd3) has joined #ceph
[17:46] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:46] * The_Bishop (~bishop@e179012097.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[17:49] * jtang2 (~jtang@2001:770:10:500:8559:7d0d:bc75:466d) Quit (Ping timeout: 480 seconds)
[17:49] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[17:56] * stxShadow (~jens@p4FD07FFD.dip.t-dialin.net) Quit (Quit: Ex-Chat)
[17:59] * capri (~capri@212.218.127.222) Quit (Quit: Verlassend)
[18:02] * Jimmywong (~jgesty@ext.cscinfo.com) has left #ceph
[18:03] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[18:03] <gregaf> infernix: sounds like you're on Ubuntu (or at least using Upstart), in which case I think you were after "ceph-all" or "ceph-osd-all"
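On an Upstart-based install, the jobs gregaf mentions are driven with start/stop/restart rather than the sysvinit script; a sketch, assuming the standard Ubuntu package layout:

```
# upstart (ubuntu): act on everything, or on one daemon class
sudo restart ceph-all
sudo restart ceph-osd-all
sudo restart ceph-mon id=a        # a single monitor, by id

# sysvinit-style clusters built with mkcephfs:
sudo service ceph -a restart      # -a = all hosts listed in ceph.conf
```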
[18:03] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Ping timeout: 480 seconds)
[18:03] <infernix> tried that too to no avail
[18:05] <joelio> infernix: did the user you ran the script as have permissions to the keyring?
[18:05] <joelio> and to restart service obv..
[18:06] <infernix> root? yes
[18:07] <joelio> and stop and start on their own work as intended?
[18:08] <infernix> you mean ceph stop and ceph start?
[18:08] <joelio> yea sure
[18:08] <infernix> there's no such command
[18:08] <joelio> service ceph stop?
[18:08] <infernix> ah
[18:08] <infernix> that did nothing
[18:09] <joelio> and your osds are running?
[18:09] <infernix> and right now with all the osd up, it does nothing
[18:10] <joelio> installed from package (i.e. perms all correct on /var/run and pids etc.)
[18:10] * tnt (~tnt@54.211-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:11] <infernix> yeah
[18:11] <infernix> just through the apt ubuntu repository
[18:11] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Read error: Operation timed out)
[18:14] <joelio> what does 'service ceph status' give you?
[18:16] <joelio> infernix: does your ceph.conf have the same host= section in the osd entries as what `hostname -s` gives?
[18:20] * l0nk (~alex@83.167.43.235) Quit (Quit: Leaving.)
[18:24] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:26] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[18:26] * themgt (~themgt@97-95-235-55.dhcp.sffl.va.charter.com) has joined #ceph
[18:26] <dmick> joelio: good question
[18:29] * dosaboy (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[18:30] * markbby1 (~Adium@168.94.245.6) Quit (Quit: Leaving.)
[18:31] * markbby (~Adium@168.94.245.3) has joined #ceph
[18:37] * Kioob (~kioob@2a01:e35:2432:58a0:21a:92ff:fe90:42c5) has joined #ceph
[18:38] * The_Bishop (~bishop@2001:470:50b6:0:25a0:cc49:4f3d:68df) has joined #ceph
[19:06] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[19:07] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) Quit (Quit: Bye)
[19:07] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) has joined #ceph
[19:07] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has joined #ceph
[19:09] * themgt (~themgt@97-95-235-55.dhcp.sffl.va.charter.com) Quit (Quit: themgt)
[19:09] * sleinen1 (~Adium@2001:620:0:25:e8cb:2e34:c03:8300) Quit (Quit: Leaving.)
[19:10] * sleinen (~Adium@130.59.94.205) has joined #ceph
[19:16] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[19:17] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[19:18] * sleinen (~Adium@130.59.94.205) Quit (Ping timeout: 480 seconds)
[19:18] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[19:22] * jtang1 (~jtang@2001:770:10:500:dc06:6579:933c:6bd3) Quit (Quit: Leaving.)
[19:38] * eschnou (~eschnou@223.86-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[19:41] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[19:43] * The_Bishop (~bishop@2001:470:50b6:0:25a0:cc49:4f3d:68df) Quit (Ping timeout: 480 seconds)
[19:44] * allsystemsarego (~allsystem@188.27.165.172) has joined #ceph
[19:52] * The_Bishop (~bishop@2001:470:50b6:0:658b:f0ee:70f9:7308) has joined #ceph
[19:52] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) has joined #ceph
[19:55] * Cube1 (~Cube@cpe-76-95-217-215.socal.res.rr.com) Quit (Quit: Leaving.)
[19:56] * eschnou (~eschnou@223.86-201-80.adsl-dyn.isp.belgacom.be) Quit (Read error: Operation timed out)
[20:03] * dosaboy1 (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[20:06] * dosaboy (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) Quit (Ping timeout: 480 seconds)
[20:07] <sagewk> jamespage: quick question about .deb manners: should purge remove user data?
[20:07] <sagewk> jamespage: specifically, i'm wondering if it should remove /var/lib/ceph/* (e.g., monitor data)
[20:08] * LeaChim (~LeaChim@b0faff75.bb.sky.com) Quit (Ping timeout: 480 seconds)
[20:10] * gaveen (~gaveen@123.231.13.12) Quit (Ping timeout: 480 seconds)
[20:11] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) has left #ceph
[20:11] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[20:11] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) Quit (Quit: Leaving.)
[20:12] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) has joined #ceph
[20:14] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:15] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[20:15] * ScOut3R (~ScOut3R@c83-249-233-227.bredband.comhem.se) Quit (Remote host closed the connection)
[20:15] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[20:16] * ScOut3R (~ScOut3R@c83-249-233-227.bredband.comhem.se) has joined #ceph
[20:18] * LeaChim (~LeaChim@5ad4a53c.bb.sky.com) has joined #ceph
[20:20] * gaveen (~gaveen@175.157.2.75) has joined #ceph
[20:20] * sleinen (~Adium@2001:620:0:26:35dc:e2a4:245b:3e96) has joined #ceph
[20:22] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[20:22] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[20:23] * agh (~agh@www.nowhere-else.org) has joined #ceph
[20:26] * allsystemsarego (~allsystem@188.27.165.172) Quit (Quit: Leaving)
[20:28] * dosaboy1 (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) Quit (Ping timeout: 480 seconds)
[20:30] * eschnou (~eschnou@223.86-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[20:31] * gaveen (~gaveen@175.157.2.75) Quit (Remote host closed the connection)
[20:34] * danieagle (~Daniel@186.214.76.205) has joined #ceph
[20:37] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[20:49] * mcclurmc (~mcclurmc@firewall.ctxuk.citrix.com) Quit (Ping timeout: 480 seconds)
[20:50] * DJF5 (~dennisdeg@backend0.link0.net) Quit (Read error: Connection reset by peer)
[20:50] * DJF5 (~dennisdeg@backend0.link0.net) has joined #ceph
[21:07] * diegows (~diegows@200.68.116.185) Quit (Ping timeout: 480 seconds)
[21:15] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[21:18] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[21:21] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[21:28] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:29] * mcclurmc (~mcclurmc@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) has joined #ceph
[21:31] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[21:55] * danieagle (~Daniel@186.214.76.205) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[21:56] <sagewk> sjust: care to review wip-4079 when you have a minute?
[21:56] <sjust> sure
[21:59] <sjust> can we read aio_num without the aio_lock?
[22:00] <sjust> I think you need to if (aio) { Mutex::Locker l(aio_lock); ... } around the while loop
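A sketch of the locking pattern sjust is describing: `aio_num` is only read while `aio_lock` is held, and the whole wait loop is skipped when aio is disabled. `std::mutex`/`std::condition_variable` stand in for Ceph's own `Mutex`/`Mutex::Locker`/`Cond` here, and the member names follow the discussion rather than the actual source:

```cpp
#include <cassert>
#include <mutex>
#include <condition_variable>

struct FileJournal {
  bool aio = true;
  int aio_num = 0;                    // in-flight aio writes, guarded by aio_lock
  std::mutex aio_lock;
  std::condition_variable aio_cond;   // signalled when an aio write completes

  void wait_for_aio() {
    if (aio) {                        // only wait when aio is in use
      std::unique_lock<std::mutex> l(aio_lock);
      while (aio_num > 0)             // aio_num read under the lock
        aio_cond.wait(l);
    }
  }

  void aio_write_finished() {         // called from the aio completion thread
    std::lock_guard<std::mutex> l(aio_lock);
    --aio_num;
    aio_cond.notify_all();
  }
};
```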
[22:01] <sjust> does wip_sam_bobtail look ok?
[22:01] <sagewk> oh right
[22:06] <sagewk> sjust: looks right, but we should probably use the torture test i was using before to make sure we've caught everything
[22:06] <sjust> want to do that before we merge? those patches likely need to go in anyway
[22:07] * dosaboy (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[22:07] <sagewk> i'll kick it off now
[22:07] <sjust> ok
[22:07] <sagewk> can merge tho
[22:07] <sjust> k
[22:07] <sagewk> repushed wip-4079
[22:10] * drokita (~drokita@199.255.228.128) has joined #ceph
[22:16] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[22:18] <drokita> You guys ever have a problem with an OSD that continually registers as 'down' even though the start of the OSD runs successfully and there are no errors of merit in the OSD log?
[22:20] <janos> drokita: if you watch ceph -w, do you see any complaints from the other OSD's that they can't find that problem OSD?
[22:21] * dosaboy (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) Quit (Ping timeout: 480 seconds)
[22:21] <drokita> No, just the general Health Warning for all of the degraded pgs
[22:22] <drokita> I will add that this cluster is relatively unused. That might make the communication errors less apparent
[22:24] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[22:25] <dmick> drokita: last time that happened to me it was that the caps were screwed up
[22:25] <sjust> sagewk: nvm, looks good
[22:25] <dmick> suspect auth problems
[22:25] <sagewk> k thanks
[22:26] <drokita> dmick: caps?
[22:26] <dmick> see ceph auth list
[22:27] <dmick> all the daemons were fairly noncommunicative about the OSD's inability to authenticate/have the right caps for operations with the monitor. I had to crank up debugging to see the failure clearly
[22:27] <drokita> I see.
[22:28] <dmick> (this was with buggy code I was modifying; it might not be so bad with released code, but I'd still doublecheck auth)
[22:29] * ScOut3R (~ScOut3R@c83-249-233-227.bredband.comhem.se) Quit (Remote host closed the connection)
[22:29] <drokita> Thanks for the direction dmick!
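dmick's suggestion in command form, for checking that the osd's key exists and carries sane caps (entity name is illustrative):

```
ceph auth list                     # all entities and their caps
ceph auth get osd.0                # one osd's key and caps
# typical osd caps look roughly like (older syntax shown):
#   caps mon = "allow rwx"
#   caps osd = "allow *"
```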
[22:29] * ScOut3R (~ScOut3R@c83-249-233-227.bredband.comhem.se) has joined #ceph
[22:32] * dosaboy (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) has joined #ceph
[22:39] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[22:40] <drokita> love it... osd went into uninterruptible sleep. Rebootan Por Favor.
[22:40] <dmick> hmm
[22:41] <dmick> filesystem problems?
[22:41] * aliguori (~anthony@32.97.110.51) Quit (Remote host closed the connection)
[22:43] <drokita> Could be. I will have to check it out when I can get in front of the box.
[22:45] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:50] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[22:51] * cephalobot (~ceph@ds2390.dreamservers.com) Quit (Ping timeout: 480 seconds)
[22:51] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (Ping timeout: 480 seconds)
[22:52] * bstaz (~bstaz@ext-itdev.tech-corps.com) Quit (Ping timeout: 480 seconds)
[22:52] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) has joined #ceph
[22:55] * houkouonchi-home (~linux@pool-108-38-63-48.lsanca.fios.verizon.net) has joined #ceph
[22:59] * sleinen (~Adium@2001:620:0:26:35dc:e2a4:245b:3e96) Quit (Quit: Leaving.)
[23:01] * al (d@niel.cx) Quit (Remote host closed the connection)
[23:07] * drokita (~drokita@199.255.228.128) Quit (Ping timeout: 480 seconds)
[23:14] * eschnou (~eschnou@223.86-201-80.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[23:14] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[23:15] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[23:17] * jjgalvez (~jjgalvez@12.248.40.138) has joined #ceph
[23:19] * dosaboy (~gizmo@HSI-KBW-46-237-220-2.hsi.kabel-badenwuerttemberg.de) Quit (Ping timeout: 480 seconds)
[23:23] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) has joined #ceph
[23:25] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[23:25] * jtang1 (~jtang@79.97.135.214) has joined #ceph
[23:26] <sagewk> joshd: 'pythonic?' :)
[23:27] <dmick> it's an industry-standard term...
[23:29] * BillK (~BillK@124-148-197-216.dyn.iinet.net.au) has joined #ceph
[23:30] <joshd> what dmick said :)
[23:31] * leseb (~leseb@37.161.227.242) has joined #ceph
[23:41] * leseb (~leseb@37.161.227.242) Quit (Ping timeout: 480 seconds)
[23:49] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:55] * PerlStalker (~PerlStalk@72.166.192.70) Quit (Quit: ...)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.