#ceph IRC Log


IRC Log for 2014-06-25

Timestamps are in GMT/BST.

[0:00] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) Quit (Ping timeout: 480 seconds)
[0:00] * sherry_ (~sherry@mike-alien.esc.auckland.ac.nz) Quit ()
[0:00] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:00] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) has joined #ceph
[0:01] <sherry> Hey guys, any idea related to this bug> http://tracker.ceph.com/issues/8641
[0:01] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[0:02] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[0:02] * fireD_ (~fireD@93-142-199-92.adsl.net.t-com.hr) has left #ceph
[0:06] <dmick> sherry: bad time to ask: https://wiki.ceph.com/Planning/CDS/CDS_Giant_and_Hammer_%28Jun_2014%29
[0:07] <scuttlemonkey> japuzzo: thanks :)
[0:07] <scuttlemonkey> sherry: yeah, most ceph devs are involved in our developer summit
[0:07] <scuttlemonkey> probably another hour or so
[0:08] <sherry> ah okay, sorry I was not aware of that. thanks
[0:08] * clayb|2 (~kvirc@proxy-ny1.bloomberg.com) has joined #ceph
[0:09] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 30.0/20140605174243])
[0:09] * rturk is now known as rturk|afk
[0:10] * sputnik13 (~sputnik13@207.8.121.241) Quit (Ping timeout: 480 seconds)
[0:12] * clayb (~kvirc@199.172.169.97) Quit (Read error: Connection reset by peer)
[0:14] * rturk|afk is now known as rturk
[0:17] * JuanEpstein (~rweeks@192.169.20.75.static.etheric.net) has joined #ceph
[0:20] * rweeks is now known as Guest250
[0:20] * JuanEpstein is now known as rweeks
[0:20] * dmsimard is now known as dmsimard_away
[0:22] * Guest250 (~rweeks@192.169.20.75.static.etheric.net) Quit (Read error: Operation timed out)
[0:23] * simulx (~simulx@vpn.expressionanalysis.com) Quit (Read error: Operation timed out)
[0:24] * simulx (~simulx@vpn.expressionanalysis.com) has joined #ceph
[0:24] * sjustlaptop (~sam@66-214-251-229.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[0:24] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[0:26] * rturk is now known as rturk|afk
[0:26] * paul_mezo (~pkilar@38.122.241.27) has joined #ceph
[0:30] * paul_mezo (~pkilar@38.122.241.27) Quit ()
[0:35] * ircolle (~Adium@2601:1:8380:2d9:905:f4e6:d94b:cb01) Quit (Quit: Leaving.)
[0:37] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[0:37] * Hell_Fire_ (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[0:38] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:42] * Cube (~Cube@66-87-65-237.pools.spcsdns.net) has joined #ceph
[0:53] * lpabon (~quassel@66-189-8-115.dhcp.oxfr.ma.charter.com) Quit (Read error: Operation timed out)
[1:03] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[1:04] * rturk|afk is now known as rturk
[1:04] * rturk is now known as rturk|afk
[1:07] * markbby (~Adium@168.94.245.3) has joined #ceph
[1:11] * scuttlemonkey is now known as scuttle|afk
[1:11] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) Quit (Quit: Coyote finally caught me)
[1:13] <sherry> I guess developer summit is finished now, I got initial reply related to this bug but nothing more, appreciate any confirmation or help to solve that> http://tracker.ceph.com/issues/8641
[1:17] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[1:17] * Pedras (~Adium@50.185.218.255) has joined #ceph
[1:20] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[1:24] * sjm (~sjm@cloudgate.cs.utsa.edu) Quit (Read error: Operation timed out)
[1:24] * clayb|2 (~kvirc@proxy-ny1.bloomberg.com) Quit (Read error: Connection reset by peer)
[1:24] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) Quit (Read error: Connection reset by peer)
[1:24] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) has joined #ceph
[1:27] * thomnico (~thomnico@92.54.161.199) has joined #ceph
[1:36] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:37] * garphyx`aw (~garphy@frank.zone84.net) Quit (Quit: leaving)
[1:38] * garphy`aw (~garphy@frank.zone84.net) has joined #ceph
[1:45] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[1:48] * sjm (~sjm@cloudgate.cs.utsa.edu) has left #ceph
[1:51] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[1:54] * rweeks (~rweeks@192.169.20.75.static.etheric.net) Quit (Quit: Leaving)
[1:55] * jcsp1 (~jcsp@2607:f298:a:607:2872:cb28:9f87:7093) Quit (Ping timeout: 480 seconds)
[2:05] * hasues (~hazuez@adsl-74-178-236-65.jax.bellsouth.net) has joined #ceph
[2:07] * madkiss (~madkiss@217.194.72.154) Quit (Quit: Leaving.)
[2:08] * thomnico (~thomnico@92.54.161.199) Quit (Ping timeout: 480 seconds)
[2:08] * yguang11 (~yguang11@2406:2000:ef96:e:8527:6f3f:98fe:1866) Quit ()
[2:09] * yguang11 (~yguang11@2406:2000:ef96:e:8527:6f3f:98fe:1866) has joined #ceph
[2:09] * KaZeR (~kazer@64.201.252.132) Quit (Remote host closed the connection)
[2:09] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[2:14] * huangjun (~kvirc@111.173.81.152) has joined #ceph
[2:14] * hasues (~hazuez@adsl-74-178-236-65.jax.bellsouth.net) Quit (Quit: Leaving.)
[2:15] * wrencsok1 (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[2:16] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Read error: Operation timed out)
[2:18] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[2:20] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[2:20] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[2:28] * alram (~alram@38.122.20.226) Quit (Read error: Operation timed out)
[2:30] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:45] * mfa298 (~mfa298@gateway.yapd.net) Quit (Ping timeout: 480 seconds)
[2:49] * mfa298 (~mfa298@gateway.yapd.net) has joined #ceph
[2:50] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:50] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[2:51] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[2:54] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[2:54] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) has joined #ceph
[2:57] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[2:58] * sarob_ (~sarob@2001:4998:effd:600:5956:bd8f:4881:9370) Quit (Remote host closed the connection)
[2:58] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[3:01] * fdmanana_ (~fdmanana@bl10-142-244.dsl.telepac.pt) has joined #ceph
[3:06] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Read error: Operation timed out)
[3:07] * sjustlaptop (~sam@66-214-251-229.dhcp.gldl.ca.charter.com) has joined #ceph
[3:08] * fdmanana (~fdmanana@bl5-3-159.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[3:09] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:11] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[3:14] * Cube (~Cube@66-87-65-237.pools.spcsdns.net) Quit (Quit: Leaving.)
[3:14] * koleosfuscus (~koleosfus@adsl-84-226-68-69.adslplus.ch) has left #ceph
[3:15] * sjustlaptop (~sam@66-214-251-229.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[3:17] * sjustlaptop (~sam@66-214-251-229.dhcp.gldl.ca.charter.com) has joined #ceph
[3:22] * lupu (~lupu@86.107.101.214) has joined #ceph
[3:25] * diegows (~diegows@host-216-57-132-113.customer.veroxity.net) Quit (Ping timeout: 480 seconds)
[3:28] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[3:33] <huangjun> how to get the specific osd state? like active or booting or stopping?
[3:34] <huangjun> ceph osd stat or ceph osd dump only tells the osd summary info
[3:36] <Pedras> ceph osd tree
[3:36] <Pedras> a way
[3:37] <huangjun> ok, use "ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok status "
[3:38] <huangjun> and it works in 0.80.1 but not in 0.77
[3:46] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[3:47] * bandrus (~oddo@162.223.167.195) Quit (Quit: Leaving.)
[3:53] <dmick> huangjun: osd dump shows it
[3:53] <dmick> use -f json (or json-pretty) to see it in a more parseable way
[3:54] <dmick> it only shows up or in; there's no such state as booting or stopping that I'm aware of
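A minimal sketch of the commands discussed above, assuming OSD id 0 and the default admin socket path; the cluster-wide commands show each OSD's up/in flags, while the admin socket reports the daemon's own status:

    # cluster-wide view of OSD state, machine-readable
    ceph osd dump -f json-pretty
    ceph osd tree

    # per-daemon view via the admin socket (path assumed; works on 0.80.x per the discussion above)
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok status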
[4:09] * dpippenger (~Adium@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[4:10] <sherry> dmick: boot and stop would be shown if you keep track of ceph -w
[4:13] <dmick> oh, there are events, but not 'states'
[4:16] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[4:20] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[4:25] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:27] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[4:32] <sherry> guys I am still struggling with this bug > http://tracker.ceph.com/issues/8641, because I'm still not sure whether it is on Ceph's side or I'm doing something really wrong that keeps objects from being flushed into cold storage!
[4:33] * sjustlaptop (~sam@66-214-251-229.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[4:33] <dmick> I don't know, and everyone's exhausted from today, but it's also only a day old
[4:34] <sherry> I understand and tomorrow will be another summit day...
[4:36] <dmick> yes
[4:36] <dmick> oh and you actually got a response from David
[4:37] <dmick> I understand it's not yet answered
[4:37] <dmick> but at least you're getting quite fast responses
[4:40] <sherry> yeah, then I stop asking here and wait for another day or two after tomorrow's summit. thanks btw :)
[4:41] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[4:44] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[4:56] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[4:56] * showtime1135 (~kim@1.204.63.146) has joined #ceph
[4:56] * showtime1135 (~kim@1.204.63.146) has left #ceph
[4:59] * hasues (~hazuez@adsl-74-178-236-65.jax.bellsouth.net) has joined #ceph
[5:00] <huangjun> does ceph have any plans on sparse files?
[5:07] * sjm (~sjm@174.47.140.2) has joined #ceph
[5:12] * Vacum (~vovo@88.130.221.161) has joined #ceph
[5:19] * Vacum_ (~vovo@i59F7A7B3.versanet.de) Quit (Ping timeout: 480 seconds)
[5:21] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[5:48] * KevinPerks (~Adium@CPE602ad091f0ad-CM602ad091f0aa.cpe.net.cable.rogers.com) Quit (Quit: Leaving.)
[5:57] * Anticimex (anticimex@95.80.32.80) Quit (Quit: facility power maintenance)
[6:10] * Nats (~Nats@telstr575.lnk.telstra.net) Quit (Read error: Connection reset by peer)
[6:11] * Nats (~Nats@2001:8000:200c:0:f007:38b3:d5f8:4d9c) has joined #ceph
[6:27] * hasues (~hazuez@adsl-74-178-236-65.jax.bellsouth.net) Quit (Quit: Leaving.)
[6:30] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[6:36] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[6:44] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[6:46] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[6:47] * vbellur (~vijay@122.167.247.240) Quit (Ping timeout: 480 seconds)
[6:50] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[6:52] * lucas1 (~Thunderbi@222.240.148.130) Quit (Ping timeout: 480 seconds)
[6:56] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[6:56] * bkopilov (~bkopilov@213.57.16.185) has joined #ceph
[7:00] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[7:03] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit ()
[7:14] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:17] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[7:17] * v2 (~venky@ov42.x.rootbsd.net) Quit (Read error: Connection reset by peer)
[7:20] * wedge (lordsilenc@bigfoot.xh.se) has joined #ceph
[7:21] * v2 (~venky@ov42.x.rootbsd.net) has joined #ceph
[7:28] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[7:29] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[7:30] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[7:35] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[7:35] * ScOut3R (~ScOut3R@5401C4E7.dsl.pool.telekom.hu) has joined #ceph
[7:38] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[7:39] * vbellur (~vijay@121.244.87.117) Quit (Read error: Operation timed out)
[7:39] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[7:39] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[7:45] * ScOut3R (~ScOut3R@5401C4E7.dsl.pool.telekom.hu) Quit (Read error: Operation timed out)
[7:55] * vbellur (~vijay@209.132.188.8) has joined #ceph
[8:03] * madkiss (~madkiss@217.194.72.154) has joined #ceph
[8:07] * ikrstic (~ikrstic@178-222-94-242.dynamic.isp.telekom.rs) has joined #ceph
[8:07] * Guest12597 (~coyo@thinks.outside.theb0x.org) Quit (Ping timeout: 480 seconds)
[8:11] * beardo_ (~sma310@208-58-255-215.c3-0.drf-ubr1.atw-drf.pa.cable.rcn.com) has joined #ceph
[8:11] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:12] * sjm (~sjm@174.47.140.2) Quit (Ping timeout: 480 seconds)
[8:16] * madkiss (~madkiss@217.194.72.154) Quit (Quit: Leaving.)
[8:19] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[8:19] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[8:24] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[8:26] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Read error: Operation timed out)
[8:27] * hflai (~hflai@alumni.cs.nctu.edu.tw) Quit (Read error: Connection reset by peer)
[8:27] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[8:27] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[8:28] * vbellur (~vijay@209.132.188.8) Quit (Read error: Operation timed out)
[8:30] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[8:40] * Sysadmin88 (~IceChat77@94.4.20.0) Quit (Quit: Oops. My brain just hit a bad sector)
[8:40] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[8:42] * rdas_ (~rdas@121.244.87.115) has joined #ceph
[8:42] * sm1ly (~sm1ly@ppp109-252-170-49.pppoe.spdop.ru) has joined #ceph
[8:42] * vbellur (~vijay@121.244.87.117) has joined #ceph
[8:42] * sm1ly (~sm1ly@ppp109-252-170-49.pppoe.spdop.ru) Quit ()
[8:45] * aldavud (~aldavud@213.55.176.163) has joined #ceph
[8:45] * rdas__ (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[8:47] * Hell_Fire (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[8:47] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[8:47] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Read error: Operation timed out)
[8:50] * v2 (~venky@ov42.x.rootbsd.net) Quit (Ping timeout: 480 seconds)
[8:51] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[8:51] * madkiss (~madkiss@nat.nue.novell.com) has joined #ceph
[8:52] * rdas_ (~rdas@121.244.87.115) Quit (Ping timeout: 480 seconds)
[8:53] * rdas (~rdas@121.244.87.115) has joined #ceph
[8:53] * rendar (~I@87.19.182.180) has joined #ceph
[8:53] * v2 (~venky@ov42.x.rootbsd.net) has joined #ceph
[8:56] * garphy`aw is now known as garphy
[8:58] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[8:59] * rdas__ (~rdas@nat-pool-pnq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[8:59] * hflai (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[9:00] * ingard_ (~cake@tu.rd.vc) has joined #ceph
[9:01] * ingard (~cake@tu.rd.vc) Quit (Read error: Connection reset by peer)
[9:03] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:05] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:05] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:07] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (Read error: Operation timed out)
[9:07] * hflai (~hflai@alumni.cs.nctu.edu.tw) Quit (Remote host closed the connection)
[9:10] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[9:12] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[9:13] * analbeard (~shw@support.memset.com) has joined #ceph
[9:13] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[9:14] * jordanP (~jordan@185.23.92.11) has joined #ceph
[9:18] * hflai (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[9:18] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[9:22] * vbellur (~vijay@209.132.188.8) has joined #ceph
[9:27] * michalefty (~micha@p20030071CF4C6A00EC3A6E8BBF44EB28.dip0.t-ipconnect.de) has joined #ceph
[9:28] * hflai (~hflai@alumni.cs.nctu.edu.tw) Quit (Read error: Connection reset by peer)
[9:28] * madkiss (~madkiss@nat.nue.novell.com) Quit (Quit: Leaving.)
[9:30] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:32] * fsimonce (~simon@host220-56-dynamic.116-80-r.retail.telecomitalia.it) has joined #ceph
[9:34] * aldavud (~aldavud@213.55.176.163) Quit (Ping timeout: 480 seconds)
[9:37] * ade (~abradshaw@193.202.255.218) has joined #ceph
[9:39] * joerocklin (~joe@cpe-75-186-9-154.cinci.res.rr.com) Quit (Ping timeout: 480 seconds)
[9:47] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[9:47] * hflai (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[9:49] * analbeard (~shw@support.memset.com) has joined #ceph
[9:52] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[10:01] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[10:01] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[10:11] * vbellur (~vijay@121.244.87.117) has joined #ceph
[10:15] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[10:20] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Read error: Operation timed out)
[10:23] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[10:25] * thomnico (~thomnico@92.54.161.199) has joined #ceph
[10:26] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[10:29] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[10:33] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[10:35] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:41] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[10:42] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[10:46] * vbellur (~vijay@209.132.188.8) has joined #ceph
[10:50] * thomnico (~thomnico@92.54.161.199) Quit (Ping timeout: 480 seconds)
[10:50] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[10:51] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[10:51] * madkiss (~madkiss@charybdis-ext.suse.de) has joined #ceph
[10:58] * jianingy (~jianingy@211.151.238.51) has joined #ceph
[10:58] <jianingy> hi, anyone have some experience on dealing with incomplete pg ?
[10:59] <jianingy> i got this of my cluster,
[10:59] <jianingy> cluster 0b366ab6-06d1-48e5-a479-3ec4f638bb43
[10:59] <jianingy> health HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs stuck unclean; 5 requests are blocked > 32 sec
[10:59] <jianingy> monmap e1: 3 mons at {a=192.168.36.29:6789/0,b=192.168.36.30:6789/0,c=192.168.36.31:6789/0}, election epoch 28, quorum 0,1,2 a,b,c
[10:59] <jianingy> osdmap e39701: 90 osds: 89 up, 89 in
[10:59] <jianingy> pgmap v5394986: 32896 pgs, 3 pools, 28509 GB data, 7127 kobjects
[10:59] <jianingy> 29040 GB used, 20660 GB / 49701 GB avail
[10:59] <jianingy> 32895 active+clean
[10:59] <jianingy> 1 incomplete
[10:59] <jianingy> i already removed the problem osds
[10:59] <jianingy> and i tried to use force_create_pg, but it seems it didn't work
[11:00] <jianingy> it happened after i increased the pg_num from 4096 to 32768
[11:01] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[11:02] <sherry> how many replicas?
[11:03] * fdmanana_ is now known as fmanana
[11:04] <sherry> jianingy: have a look at that http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#stuck-placement-groups
[11:04] <jianingy> two
[11:05] <jianingy> yep, tried everything in the doc
[11:05] <sherry> btw, Ceph suggests (number of OSDs x 100) / replicas placement groups, so in your case 4096 was fine
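The rule of thumb sherry is quoting, worked through for this cluster (90 OSDs, 2 replicas); rounding to a power of two is the commonly recommended final step, not a hard requirement:

    total PGs ~= (OSDs x 100) / replicas = (90 x 100) / 2 = 4500
    rounded down to the nearest power of two -> 4096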
[11:06] * dpippenger (~Adium@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[11:06] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[11:06] <sherry> why don't you add a new OSD?
[11:08] * fmanana (~fdmanana@bl10-142-244.dsl.telepac.pt) Quit (Quit: Leaving)
[11:08] <jianingy> actually, i encountered a problem where data was not evenly distributed across all osds, and someone suggested that increasing pg_num would make it better
[11:09] <sherry> did you consider the weight of OSDs?
[11:09] <jianingy> I removed an OSD and added it back
[11:09] <jianingy> tried auto reweight but it did not work well
[11:10] <sherry> but it seems it is not back yet
[11:10] <sherry> 90 osds: 89 up, 89 in
[11:10] <jianingy> i run 'ceph pg force_create_pg 2.b0d' and it now seems stuck in creating state for a long time
[11:11] * fdmanana (~fdmanana@bl10-142-244.dsl.telepac.pt) has joined #ceph
[11:11] <jianingy> 89 because one osd contains the incomplete pg cannot start
[11:11] <jianingy> so i osd out it
[11:11] <sherry> hmm why don't you change the weights in the CRUSH map manually?
[11:11] <jianingy> no
[11:12] <jianingy> if a pg is stuck in the creating state, how can it get created normally?
[11:13] <jianingy> and does '5 requests are blocked > 32 sec' block the 'creating' process?
[11:14] * dr_whax (drwhax@devio.us) has joined #ceph
[11:16] <sherry> well OSD which is down caused the PG to become incomplete
[11:16] <sherry> try this > ceph pg repair 2.b0d
[11:17] <sherry> u need to bring OSD up first
[11:18] <jianingy> that downed OSD (osd.33) was already marked out. now 2.b0d has acting set [78,106]
[11:19] <jianingy> will starting osd.33 work?
[11:19] <jianingy> and should i make it in the cluster?
[11:20] <sherry> hmm then try to repair first
[11:20] <sherry> after that u may scrub pg
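The commands sherry is suggesting, spelled out for the PG in question (2.b0d is the PG id from this conversation); pg query is useful before and after to see the acting set and recovery state:

    ceph pg 2.b0d query        # inspect acting set and recovery state
    ceph pg repair 2.b0d       # ask the primary OSD to repair the PG
    ceph pg scrub 2.b0d        # regular scrub (compares object metadata)
    ceph pg deep-scrub 2.b0d   # deep scrub, reads all object data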
[11:21] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) has joined #ceph
[11:21] <jianingy> i've started osd.33, and osd.78 and 106 went down after that
[11:21] <jianingy> now status becomes,
[11:21] <jianingy> cluster 0b366ab6-06d1-48e5-a479-3ec4f638bb43
[11:21] <jianingy> health HEALTH_WARN 1458 pgs degraded; 10 pgs stale; 709 pgs stuck unclean; 5 requests are blocked > 32 sec; recovery 618602/14174998 objects degraded (4.364%); 1/89 in osds are down
[11:22] <jianingy> monmap e1: 3 mons at {a=192.168.36.29:6789/0,b=192.168.36.30:6789/0,c=192.168.36.31:6789/0}, election epoch 42, quorum 0,1,2 a,b,c
[11:22] <jianingy> osdmap e39730: 90 osds: 88 up, 89 in
[11:22] <jianingy> pgmap v5395196: 32896 pgs, 3 pools, 27685 GB data, 6921 kobjects
[11:22] <jianingy> 29040 GB used, 20660 GB / 49701 GB avail
[11:22] <jianingy> 618602/14174998 objects degraded (4.364%)
[11:22] <jianingy> 31427 active+clean
[11:22] <jianingy> 10 stale+active+clean
[11:22] <jianingy> 1457 active+degraded
[11:22] <jianingy> 1 active+clean+scrubbing
[11:22] <jianingy> 1 active+degraded+remapped
[11:22] <jianingy> seems ok
[11:26] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[11:28] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[11:28] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[11:32] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Connection reset by peer)
[11:32] <dr_whax> Hi, I have created a ceph cluster which will function as a backup machine, 1 monitor, 3 OSDs. however, it seems that even after a day, the status is still HEALTH_WARN: "HEALTH_WARN 125 pgs degraded; 67 pgs incomplete; 67 pgs stuck inactive; 192 pgs stuck unclean; 1 requests are blocked > 32 sec" http://sprunge.us/cGaF
[11:32] <dr_whax> anyone has pointers for how to solve this?
[11:32] <dr_whax> I tried to restart the cluster and osd's one by one but no go
[11:32] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[11:33] <sherry> what does ceph osd dump | grep pool say?
[11:34] <dr_whax> http://sprunge.us/SGMT
[11:34] <sherry> and ur CRUSH map?
[11:35] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[11:35] <sherry> u may use: sudo ceph osd getcrushmap -o crush.running.map, sudo crushtool -d crush.running.map -o crush.map, sudo vim crush.map
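For completeness, the round trip sherry is describing also needs a compile and an inject step once the decompiled map has been edited (file names are just the ones used above):

    ceph osd getcrushmap -o crush.running.map
    crushtool -d crush.running.map -o crush.map
    # edit crush.map in a text editor, then recompile and inject it:
    crushtool -c crush.map -o crush.new.map
    ceph osd setcrushmap -i crush.new.map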
[11:37] <dr_whax> http://sprunge.us/QXWM
[11:37] <sherry> u want to have 3 replicas?
[11:37] <dr_whax> Actually, I want to change that to 2.
[11:38] <dr_whax> After I have gotten the cluster running :-)
[11:38] <sherry> ah then try to change the size of ur pools to 2
[11:38] <sherry> as u can see in here: http://sprunge.us/SGMT
[11:38] <sherry> the size of all of ur three pools are three
[11:38] <sherry> and this is the default in firefly
[11:39] <dr_whax> I see
[11:39] <sherry> ceph osd pool set pool_name size 2 should work
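Applied to all three pools, assuming they are the firefly defaults (data, metadata, rbd); lowering min_size as well is optional and shown only as a possibility:

    ceph osd pool set data size 2
    ceph osd pool set metadata size 2
    ceph osd pool set rbd size 2
    # optionally allow I/O with a single surviving replica
    ceph osd pool set rbd min_size 1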
[11:40] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[11:41] * BranchPr1dictor (branch@predictor.org.pl) has joined #ceph
[11:41] * hybrid5121 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[11:42] * johnfoo (~johnfoo@ip-133.net-89-3-152.rev.numericable.fr) has joined #ceph
[11:42] <johnfoo> hi guys
[11:42] <dr_whax> ok, that did something, but still health_warn --> http://sprunge.us/UCPi
[11:42] <Vacum> dr_whax: you have all 3 OSDs in the same host. but your ruleset says each leaf should reside on a different *host*
[11:42] <dr_whax> aha!
[11:42] <dr_whax> *doh*
[11:42] <Vacum> dr_whax: step chooseleaf firstn 0 type host
[11:43] * BranchPredictor (branch@predictor.org.pl) Quit (Read error: Connection reset by peer)
[11:43] <Vacum> dr_whax: if you change that to "type osd", compile that crushmap and inject it, it should become healthy. but where is the sense in doing that? if that host dies, all data is gone. you could use a raid on that single host ;)
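What Vacum's suggestion looks like in the decompiled crushmap, as a sketch only; the rule and root bucket names below are the stock defaults, not necessarily dr_whax's, and only the failure-domain type changes. The edited map is recompiled and injected as shown earlier:

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type osd    # was: type host
            step emit
    }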
[11:43] <johnfoo> i recently upgraded our ceph cluster to 0.80.1 and added a second LACP interface to the backend, but for some reason osds timeout on the heartbeats all the time so ceph is stuck repairing
[11:44] <dr_whax> Vacum: it's for snapshots/backups from one cluster with osd's on multiple machines, to the backup machine.
[11:44] <Vacum> ok
[11:45] <dr_whax> Not the best solution.. but the machines will be redundant at least.
[11:46] <dr_whax> Vacum: thanks, this obviously worked! :-)
[11:47] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[11:47] * thomnico (~thomnico@faun.canonical.com) has joined #ceph
[11:50] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[11:54] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Read error: Connection reset by peer)
[11:58] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[12:02] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[12:02] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Read error: Connection reset by peer)
[12:04] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[12:06] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[12:06] <Svedrin> I configured libvirt to use an rbd image as outlined in http://ceph.com/docs/master/rbd/libvirt/, but when I try to start the vm, I get:
[12:06] <Svedrin> qemu-system-x86_64: -drive file=rbd:rbd/testvm2:id=admin:key=AQA[...]: could not open disk image rbd:r[...]: Unknown protocol
[12:06] <johnfoo> Svedrin: your qemu doesn't know how to speak rbd
[12:07] <Svedrin> johnfoo, how do I teach it to? did I miss some packages or something?
[12:07] <johnfoo> depending on your distro you may have to recompile it
[12:07] <johnfoo> it's a compile time option
[12:07] <Svedrin> I'm on debian wheezy, and I installed the qemu-packages from wheezy-backports
[12:07] <Svedrin> on another wheezy box, this works :|
[12:07] <Svedrin> odd.
[12:08] <Svedrin> aww man. they added it in the version I'm using on that other box, and removed it subsequently
[12:08] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[12:08] <johnfoo> :debian:
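Two quick ways to check whether a given qemu build has rbd support, along the lines of what johnfoo describes; both are heuristics rather than definitive (the second only applies where the rbd block driver is linked in rather than shipped as a loadable module):

    # 'rbd' should appear in the list of supported formats
    qemu-img --help | grep -i 'supported formats'
    # older builds link librbd directly, so it shows up in the linker output
    ldd "$(which qemu-system-x86_64)" | grep librbd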
[12:10] * vbellur (~vijay@209.132.188.8) has joined #ceph
[12:11] * kalleeh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[12:11] <Svedrin> Domain testvm2 started
[12:11] <Svedrin> *sigh*
[12:13] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Read error: Operation timed out)
[12:13] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:14] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[12:14] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[12:16] <johnfoo> Vacum: do you have any idea what could cause OSDs to lose that many heartbeats ? clocks are 500ms apart, network latency is around 0.1ms, bandwidth is largely enough and neither the switches nor the interfaces show drops
[12:17] <johnfoo> yet OSDs from one host to another manage to miss heartbeats longer than the grace time
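For reference, the OSD heartbeat knobs involved here, with their documented 0.80.x defaults; raising the grace period only papers over whatever is eating the heartbeats, so treat this as a stopgap rather than a fix:

    [osd]
    # seconds between heartbeat pings to peer OSDs (default)
    osd heartbeat interval = 6
    # seconds without a reply before a peer is reported down (default)
    osd heartbeat grace = 20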
[12:32] * dpippenger (~Adium@cpe-198-72-157-189.socal.res.rr.com) Quit (Quit: Leaving.)
[12:36] * coreping (~xuser@hugin.coreping.org) Quit (Ping timeout: 480 seconds)
[12:51] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[12:53] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) has joined #ceph
[12:54] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[12:55] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:58] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[13:01] * stephan (~stephan@62.217.45.26) has joined #ceph
[13:02] <stephan> Hi *
[13:06] <jianingy> sherry: thank you very much! everything's ok now. you save the world :)
[13:07] * stephan (~stephan@62.217.45.26) Quit (Quit: Ex-Chat)
[13:07] * coreping (~xuser@hugin.coreping.org) has joined #ceph
[13:07] * stephan (~stephan@62.217.45.26) has joined #ceph
[13:08] <stephan> Does anyone have any experience with elasticsearch and rbd?
[13:09] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[13:09] * tdasilva_ (~quassel@nat-pool-bos-u.redhat.com) has joined #ceph
[13:09] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[13:10] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[13:12] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[13:16] * huangjun (~kvirc@111.173.81.152) Quit (Ping timeout: 480 seconds)
[13:18] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[13:22] * kalleeh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[13:25] * thomnico (~thomnico@faun.canonical.com) Quit (Ping timeout: 480 seconds)
[13:26] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[13:26] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[13:30] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[13:34] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (Ping timeout: 480 seconds)
[13:34] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) Quit (Ping timeout: 480 seconds)
[13:36] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[13:37] * Anticimex (anticimex@95.80.32.80) has joined #ceph
[13:37] * simulx (~simulx@vpn.expressionanalysis.com) Quit (Ping timeout: 480 seconds)
[13:40] * simulx (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[13:41] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[13:45] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:47] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[13:49] * dmsimard_away is now known as dmsimard
[13:50] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[13:51] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[13:52] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[13:58] * tdasilva_ (~quassel@nat-pool-bos-u.redhat.com) Quit (Read error: Operation timed out)
[13:59] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[14:00] * thomnico (~thomnico@faun.canonical.com) has joined #ceph
[14:02] * KevinPerks (~Adium@CPE602ad091f0ad-CM602ad091f0aa.cpe.net.cable.rogers.com) has joined #ceph
[14:03] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[14:04] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[14:08] * KevinPerks (~Adium@CPE602ad091f0ad-CM602ad091f0aa.cpe.net.cable.rogers.com) Quit (Quit: Leaving.)
[14:08] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[14:09] * KevinPerks (~Adium@CPE602ad091f0ad-CM602ad091f0aa.cpe.net.cable.rogers.com) has joined #ceph
[14:11] * thomnico (~thomnico@faun.canonical.com) Quit (Ping timeout: 480 seconds)
[14:11] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[14:12] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[14:17] * KevinPerks (~Adium@CPE602ad091f0ad-CM602ad091f0aa.cpe.net.cable.rogers.com) Quit (Ping timeout: 480 seconds)
[14:18] <Vacum> johnfoo: you said this started when you added new network interfaces? did you change any IP addresses?
[14:26] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[14:27] <Serbitar> or add a default route on a different interface
[14:28] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[14:30] <johnfoo> Vacum: i didn't add a new interface
[14:30] <johnfoo> i added a slave interface to the LACP bond
[14:31] <johnfoo> originally the backend interface was a single interface
[14:31] <Vacum> so neither ip addresses nor routing changed?
[14:31] <johnfoo> no
[14:31] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[14:31] <johnfoo> there is no routing involved, everything is switched at the top of the rack
[14:32] <johnfoo> the only routing involved is from the hypervisors to the OSDs, and it works fine
[14:32] <johnfoo> (other than the atrocious IO perf due to ceph constantly repairing)
[14:32] <Vacum> when you say "that many heartbeats" are lost: are *all* lost? could it be an LACP misconfiguration?
[14:33] * vbellur (~vijay@209.132.188.8) Quit (Quit: Leaving.)
[14:34] <johnfoo> that's the point, they're not all lost, it's seemingly random. and immediately or soon after being marked down, OSDs will get back up and claim the last map marked them down wrongly
[14:35] <Vacum> johnfoo: how much CPU does the mon leader take at the moment?
[14:36] <Vacum> (or rather at the moment osds are wrongly marked down)
[14:36] <johnfoo> roughly 10%
[14:36] <johnfoo> doesn't get much higher than that
[14:38] <johnfoo> osd can eat a lot of cpu. but it's pmuch all iowait
[14:39] <Vacum> mh, sorry, no idea then :/
[14:39] <johnfoo> yeah i'm pretty lost too
[14:40] <johnfoo> i secretly hoped somebody had a similar problem
[14:40] <Vacum> could you do some low level network tests to check if you have high packet loss?
[14:40] <Vacum> which lacp driver mode btw?
[14:40] <johnfoo> 802.3ad
[14:40] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[14:40] <johnfoo> over MLAG
[14:41] <johnfoo> one pair on each switch
[14:41] <Vacum> so 4 total on each host?
[14:43] <Vacum> round robin, xor, ? I'm wondering if perhaps only one of the slave interfaces has a problem and 1/4 of all packets are lost?
[14:45] <johnfoo> interesting
[14:45] <johnfoo> i get a lot of packet loss when the BW consumption gets too high
[14:46] <Vacum> johnfoo: check with ie htop if you saturate CPU core 0 with NIC interrupts
[14:46] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[14:46] <absynth> uhm, are you sure you're not saturating your osd interfaces?
[14:47] <johnfoo> absynth: the switch reports 3.2Mb/s
[14:47] <johnfoo> interfaces are 2x10Gb aggregated, per host
[14:47] <absynth> ok, so unless it's a 10mbps interfaces, that can hardly be an issue
[14:47] <Vacum> you have packet loss with 3.2 Mb/s :)
[14:47] <absynth> neither can interrupt saturation be an issue on halfway modern hardware
[14:47] <johnfoo> Vacum: yes
[14:47] <absynth> do you see errors in the switch port counters?
[14:48] <johnfoo> absynth: nope
[14:48] <johnfoo> and neither on the host interface
[14:48] <johnfoo> that's what puzzles me
[14:48] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[14:49] * rdas (~rdas@121.244.87.115) has joined #ceph
[14:52] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[14:53] <Vacum> johnfoo: what bandwidth do you have between the two 10G switches that build one MC-LAG? also "only" 10G?
[14:53] <johnfoo> 4x10G
[14:53] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:53] <Vacum> ok :)
[14:54] * boichev (~boichev@213.169.56.130) Quit (Read error: Operation timed out)
[14:56] * boichev (~boichev@213.169.56.130) has joined #ceph
[14:59] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[15:00] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[15:00] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Quit: sync && halt)
[15:00] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[15:01] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Remote host closed the connection)
[15:01] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:01] * michalefty (~micha@p20030071CF4C6A00EC3A6E8BBF44EB28.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[15:02] * huangjun (~kvirc@117.151.41.243) has joined #ceph
[15:03] * lpabon (~quassel@66-189-8-115.dhcp.oxfr.ma.charter.com) has joined #ceph
[15:07] <magicrobotmonkey> hey how do i find out if a certain commit is in a release?
[15:07] <magicrobotmonkey> specifically http://tracker.ceph.com/projects/ceph/repository/revisions/7989cbd418ed8d51348851a39ffa84ac2224f4fe in 0.80.1
[15:08] <johnfoo> magicrobotmonkey: you can use diff
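A more direct way than diffing, assuming a local clone of the ceph git repository; the commit hash is the one from the tracker link above:

    git fetch --tags
    git tag --contains 7989cbd418ed8d51348851a39ffa84ac2224f4fe
    # if v0.80.1 appears in the output, the commit is in that release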
[15:12] * michalefty (~micha@p20030071CF530400EC3A6E8BBF44EB28.dip0.t-ipconnect.de) has joined #ceph
[15:13] <bkopilov> joao|lap, Hi
[15:13] * JayJ (~jayj@157.130.21.226) has joined #ceph
[15:13] <bkopilov> I have a question about ceph install on client side
[15:14] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) has joined #ceph
[15:22] * KevinPerks (~Adium@CPE602ad091f0ad-CM602ad091f0aa.cpe.net.cable.rogers.com) has joined #ceph
[15:24] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[15:29] <b0e> Hi, i have a question about choose and chooseleaf in the crushmap. I have the following structure root->datacenter->switch->host and a pool with replica 3. I have two datacenters. At the end there should be a datacenter with 1 replica and a datacenter with 2 replicas. In the datacenter with 2 replicas, these should be on different switches. The following doesn't work for me :/
[15:29] <b0e> step take root
[15:29] <b0e> step choose firstn 2 type datacenter
[15:29] <b0e> step chooseleaf firstn 0 type switch
[15:29] <b0e> step emit
[15:30] <b0e> does the result of `step choose firstn 2 type datacenter` get passed to `step chooseleaf firstn 0 type switch`?
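One commonly suggested variant of b0e's ruleset for this layout, as a sketch only: pick 2 datacenters, pick 2 switch-leaves in each, and let the pool size of 3 truncate the result to 2 replicas in the first datacenter and 1 in the second. Whether the nested `firstn 0` in the original behaves as intended depends on the CRUSH semantics of the release, so any variant should be checked with crushtool before injecting:

    step take root
    step choose firstn 2 type datacenter
    step chooseleaf firstn 2 type switch
    step emit

    # verify the mappings the rule produces (rule id assumed to be 0 here):
    crushtool -i crush.new.map --test --rule 0 --num-rep 3 --show-mappings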
[15:30] <johnfoo> Vacum: you won't believe me but i *may* have found the origin of my problem
[15:31] <johnfoo> net.core.wmem_max and net.core.rmem_max were set far too low
[15:31] <johnfoo> with 54MB buffers, the cluster has stabilized for the moment
[15:31] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Read error: Operation timed out)
[15:32] <johnfoo> it's scrubbing, we'll see how it goes
[15:32] * primechuck (~primechuc@69.170.148.179) has joined #ceph
[15:33] <Vacum> johnfoo: right, we increased those too :)
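The sysctls johnfoo is referring to; the 54MB figure is just the value mentioned above, not a recommendation:

    # raise the maximum socket buffer sizes (values in bytes, 54MB here)
    sysctl -w net.core.rmem_max=56623104
    sysctl -w net.core.wmem_max=56623104
    # add the same keys to /etc/sysctl.conf to persist across reboots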
[15:35] <Vacum> johnfoo: how many osds do you have in one host?
[15:35] <johnfoo> 8
[15:35] <Vacum> ok
[15:35] <johnfoo> 12 max
[15:35] <johnfoo> 4 are inactive
[15:35] <johnfoo> i'd like to reserve them for ssd caching
[15:35] <johnfoo> anybody got to play with that ?
[15:35] <magicrobotmonkey> is anyone familiar with using boto to access radosgw
[15:36] <magicrobotmonkey> im trying to figure out how to create a bucket in a different "zone" in rados terminology
[15:36] * thomnico (~thomnico@37.205.61.203) has joined #ceph
[15:36] <magicrobotmonkey> I think that's "location" to boto
[15:36] <magicrobotmonkey> but I'm not sure
[15:38] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[15:39] * dr_whax (drwhax@devio.us) Quit (Quit: leaving)
[15:40] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[15:41] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[15:44] * fsimonce` (~simon@host21-29-dynamic.53-82-r.retail.telecomitalia.it) has joined #ceph
[15:46] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[15:47] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:48] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[15:49] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[15:50] * fsimonce (~simon@host220-56-dynamic.116-80-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[15:50] * lpabon (~quassel@66-189-8-115.dhcp.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[15:51] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[15:52] * fsimonce (~simon@host107-51-dynamic.116-80-r.retail.telecomitalia.it) has joined #ceph
[15:53] * Kedsta (Ked@cpc6-pool14-2-0-cust202.15-1.cable.virginm.net) has joined #ceph
[15:54] * fsimonce` (~simon@host21-29-dynamic.53-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[15:55] <brad_mssw> is it possible to remove the 'data' and 'metadata' pools that exist by default?
[15:55] <brad_mssw> it gives me an error stating it is in use by cephfs
[15:56] <brad_mssw> but I don't even have any mds servers nor plan on using cephfs
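The deletion brad_mssw is presumably attempting looks like this; the error he quotes means the MDS map still references the pools, so on firefly they cannot be dropped until that reference is cleared (the exact procedure for that varies by release):

    ceph osd pool delete data data --yes-i-really-really-mean-it
    ceph osd pool delete metadata metadata --yes-i-really-really-mean-it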
[15:56] * ade (~abradshaw@193.202.255.218) Quit (Remote host closed the connection)
[15:56] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[15:57] * keds (Ked@cpc6-pool14-2-0-cust202.15-1.cable.virginm.net) Quit (Read error: Operation timed out)
[16:01] * The_Bishop_ (~bishop@e180039238.adsl.alicedsl.de) has joined #ceph
[16:03] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[16:04] <magicrobotmonkey> hey if you can't append to an object in an ecpool with librados, how is radosgw doing multipart uploads?
[16:08] * The_Bishop (~bishop@f055144195.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[16:10] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[16:15] * bandrus (~oddo@162.223.167.195) has joined #ceph
[16:15] <Vacum> magicrobotmonkey: if i understood correctly, it is possible, but you have to make sure the previous uploaded chunk is correctly aligned
[16:16] <Vacum> (never used it myself though)
[16:16] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[16:16] * vbellur (~vijay@122.167.109.216) has joined #ceph
[16:17] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[16:21] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[16:22] * thomnico (~thomnico@37.205.61.203) Quit (Ping timeout: 480 seconds)
[16:26] * michalefty (~micha@p20030071CF530400EC3A6E8BBF44EB28.dip0.t-ipconnect.de) has left #ceph
[16:28] * hufman (~hufman@cpe-184-58-235-28.wi.res.rr.com) has joined #ceph
[16:29] * markbby (~Adium@168.94.245.2) has joined #ceph
[16:38] * sz0 (~sz0@94.54.193.66) has joined #ceph
[16:43] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[16:47] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[16:48] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[16:50] * fsimonce` (~simon@host29-76-dynamic.9-79-r.retail.telecomitalia.it) has joined #ceph
[16:51] * fsimonce (~simon@host107-51-dynamic.116-80-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[16:57] * fsimonce` is now known as fsimonce
[16:57] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[17:06] * madkiss (~madkiss@charybdis-ext.suse.de) Quit (Ping timeout: 480 seconds)
[17:11] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) has joined #ceph
[17:12] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:13] * Infitialis (~infitiali@194.30.182.18) Quit ()
[17:14] * terje (~joey@75-171-245-206.hlrn.qwest.net) Quit (Read error: Operation timed out)
[17:14] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[17:16] * lpabon (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[17:18] * terje (~joey@184-96-155-130.hlrn.qwest.net) has joined #ceph
[17:20] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[17:20] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[17:21] * terje (~joey@184-96-155-130.hlrn.qwest.net) Quit (Read error: Operation timed out)
[17:23] * terje (~joey@184-96-155-130.hlrn.qwest.net) has joined #ceph
[17:24] * hasues (~hazuez@adsl-74-178-236-65.jax.bellsouth.net) has joined #ceph
[17:24] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[17:25] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[17:27] * sage (~quassel@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:27] * sage (~quassel@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[17:27] * ChanServ sets mode +o sage
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:35] * Pedras (~Adium@216.207.42.140) has joined #ceph
[17:44] * scuttle|afk is now known as scuttlemonkey
[17:45] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:50] * joerocklin_ (~joe@cpe-65-185-149-56.woh.res.rr.com) has joined #ceph
[17:54] * ircolle (~Adium@2601:1:8380:2d9:14fe:84b:6e9d:df3d) has joined #ceph
[17:55] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:58] * Pedras (~Adium@216.207.42.140) Quit (Ping timeout: 480 seconds)
[18:02] * tws_ (~traviss@rrcs-24-123-86-154.central.biz.rr.com) has joined #ceph
[18:02] * Pedras (~Adium@216.207.42.140) has joined #ceph
[18:03] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:04] * lalatenduM (~lalatendu@122.167.6.181) has joined #ceph
[18:05] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[18:11] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:16] * Pedras (~Adium@216.207.42.140) Quit (Quit: Leaving.)
[18:23] * fsimonce (~simon@host29-76-dynamic.9-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[18:26] * fsimonce (~simon@host29-76-dynamic.9-79-r.retail.telecomitalia.it) has joined #ceph
[18:31] * Pedras (~Adium@216.207.42.140) has joined #ceph
[18:34] * fsimonce (~simon@host29-76-dynamic.9-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[18:35] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:36] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:41] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) has joined #ceph
[18:42] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[18:45] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[18:52] * garphy is now known as garphy`aw
[18:52] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[18:55] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[18:56] * sjm (~sjm@cloudgate.cs.utsa.edu) Quit (Read error: Operation timed out)
[18:58] <blSnoopy> are there tunables to get better rbd / iscsi reexport performance? i'm seeing very slow writes and poor reads (~100 MB/s read, 25 MB/s write)
[18:59] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[18:59] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[19:00] * DavidThunder (~Thunderbi@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[19:00] * DavidThunder (~Thunderbi@ip68-4-173-198.oc.oc.cox.net) Quit ()
[19:00] * DavidThunder (~Thunderbi@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[19:04] * sarob (~sarob@2001:4998:effd:600:90e:4294:3c21:5a1b) has joined #ceph
[19:04] <ponyofdeath> hi, is it possible to reduce the pg number for a pool?
[19:06] * markbby (~Adium@168.94.245.2) Quit (Ping timeout: 480 seconds)
[19:07] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[19:08] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[19:09] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Remote host closed the connection)
[19:10] * Cube (~Cube@12.248.40.138) has joined #ceph
[19:11] * Cube (~Cube@12.248.40.138) Quit (Read error: Connection reset by peer)
[19:11] * Cube (~Cube@12.248.40.138) has joined #ceph
[19:11] <ponyofdeath> i guess not, per the docs
[19:13] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[19:15] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:20] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) has joined #ceph
[19:20] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[19:22] * huangjun (~kvirc@117.151.41.243) Quit (Ping timeout: 480 seconds)
[19:28] * sjm (~sjm@cloudgate.cs.utsa.edu) Quit (Read error: Operation timed out)
[19:30] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Operation timed out)
[19:41] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) Quit (Quit: Leaving.)
[19:47] * markbby (~Adium@168.94.245.2) has joined #ceph
[19:48] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[19:49] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[19:54] * Guest136 is now known as Azrael
[19:54] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[20:02] * sjm (~sjm@cloudgate.cs.utsa.edu) Quit (Ping timeout: 480 seconds)
[20:03] <magicrobotmonkey> i may have done something stupid
[20:03] <magicrobotmonkey> i had three monitors
[20:03] <magicrobotmonkey> i added three more
[20:03] <magicrobotmonkey> checked that they were all in
[20:03] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[20:03] <magicrobotmonkey> then stopped the daemons for the original 3
[20:03] <magicrobotmonkey> now they seem to be stuck in an election race condition
[20:04] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[20:07] * bandrus (~oddo@162.223.167.195) Quit (Ping timeout: 480 seconds)
[20:09] <Anticimex> blSnoopy: how do you mount rbd? is it the kernel rbd driver?
[20:09] * bandrus (~oddo@162.223.167.195) has joined #ceph
[20:09] <Anticimex> qemu-rbd apparently performs better from what i've read
[20:09] * The_Bishop_ (~bishop@e180039238.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[20:09] <brad_mssw> magicrobotmonkey: did you remove the monitors you don't plan on using from /etc/ceph/ceph.conf?
[20:10] <brad_mssw> magicrobotmonkey: because if you had 6 up at one time, 3/6 is not greater than 50% of the known monitors, thus no quorum
[20:11] * aldavud (~aldavud@213.55.184.242) has joined #ceph
[20:11] <brad_mssw> magicrobotmonkey: however, I'm not sure if editing /etc/ceph/ceph.conf is the right way to do it ... the removing a monitor manual says to use ceph mon remove {mon-id}
[20:12] <Anticimex> speculation: maybe removal needs to be one at a time (allow time for the system to learn new normal state)
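If the three stopped monitors still exist, simply starting them again should restore quorum. Otherwise the documented escape hatch is monmap surgery on a surviving monitor, roughly along these lines (mon id 'a' and the removed name are placeholders, and the init syntax varies by distro):

    service ceph stop mon.a                      # stop the surviving monitor first
    ceph-mon -i a --extract-monmap /tmp/monmap
    monmaptool /tmp/monmap --rm b                # repeat for each monitor to drop
    ceph-mon -i a --inject-monmap /tmp/monmap
    service ceph start mon.a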
[20:20] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[20:22] * Pedras (~Adium@216.207.42.140) Quit (Ping timeout: 480 seconds)
[20:22] * Pedras (~Adium@216.207.42.140) has joined #ceph
[20:24] * lalatenduM (~lalatendu@122.167.6.181) Quit (Quit: Leaving)
[20:27] * madkiss (~madkiss@217.194.72.154) has joined #ceph
[20:29] <ponyofdeath> can anyone help me figure out why, with this crush map and a cache tier of 7 SSDs, my max write with rados bench is around 50MB/s? http://bpaste.net/show/AIZD1Bds3Wu6J46MmcEg/
[20:29] * sjm (~sjm@cloudgate.cs.utsa.edu) Quit (Ping timeout: 480 seconds)
[20:34] * dpippenger (~Adium@66-192-9-78.static.twtelecom.net) has joined #ceph
[20:37] <brad_mssw> ponyofdeath: what is your network interface? what is your replica count for the pool you are benchmarking?
[20:37] <ponyofdeath> brad_mssw: interface is 10Gbit
[20:37] <ponyofdeath> brad_mssw: replica is 2
[20:37] <ponyofdeath> pool 17 'tier1-cache' replicated size 2 min_size 1 crush_ruleset 5 object_hash rjenkins pg_num 256 pgp_num 256 last_change 230051 owner 0 flags hashpspool tier_of 2 cache_mode writeback target_bytes 322122547200 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 1800s x1 stripe_width 0
[20:39] <brad_mssw> sorry, I was just checking for the low-hanging fruit ... 50MB/s would have been about normal for 1Gbps and replica 2
[20:40] <brad_mssw> i'm a newbie to ceph though (about a week), so I'm afraid my knowledge is fairly limited at this point ... and my 10G hardware hasn't come in yet for testing
[20:40] <ponyofdeath> brad_mssw: thanks anyway :)
[20:41] <iggy> that name sounds familiar
[20:43] <cookednoodles> whats the benchmark say ?
[20:43] <cookednoodles> oh wow
[20:43] <cookednoodles> thats the bench speed :/
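For reference, the kind of benchmark ponyofdeath's numbers come from, pointed at the cache pool named in the dump above; the 16-thread count is just an example:

    # 60-second write benchmark, 16 concurrent ops, keep the objects for a read pass
    rados -p tier1-cache bench 60 write -t 16 --no-cleanup
    # sequential read pass over the objects written above
    rados -p tier1-cache bench 60 seq -t 16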
[20:44] * rendar (~I@87.19.182.180) Quit (Ping timeout: 480 seconds)
[20:44] * sjusthm (~sam@66-214-251-229.dhcp.gldl.ca.charter.com) has joined #ceph
[20:47] * rendar (~I@87.19.182.180) has joined #ceph
[20:50] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[20:50] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[20:55] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[20:59] <magicrobotmonkey> Yes, I think I took them out too fast, Anticimex
[21:00] <magicrobotmonkey> now i can't connect to the cluster at all
[21:00] <magicrobotmonkey> and they're all calling for their own elections
[21:00] <magicrobotmonkey> but it seems like they're not communicating with one another at all
[21:01] * sjm (~sjm@cloudgate.cs.utsa.edu) Quit (Read error: Operation timed out)
[21:04] * aldavud (~aldavud@213.55.184.242) Quit (Ping timeout: 480 seconds)
[21:08] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:09] * The_Bishop (~bishop@e180039238.adsl.alicedsl.de) has joined #ceph
[21:10] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[21:10] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) has joined #ceph
[21:11] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[21:11] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[21:12] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[21:12] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[21:14] * sarob (~sarob@2001:4998:effd:600:90e:4294:3c21:5a1b) Quit (Remote host closed the connection)
[21:19] * sjm (~sjm@cloudgate.cs.utsa.edu) Quit (Read error: Operation timed out)
[21:21] <Anticimex> how able are ceph osd's to fill up write queues for standard sata drives?
[21:22] <Anticimex> when reading on performance, it seems spinning drives also benefit from having a bit of a write queue, to achieve better random iops
[21:22] * Pedras (~Adium@216.207.42.140) Quit (Quit: Leaving.)
[21:22] <Anticimex> (guess more food for NCQ to place writes as the arm goes back and forth)
[21:22] <Anticimex> or are writes more sustained typically to osds, from ssd journal?
[21:23] * PureNZ (~paul@122-62-45-132.jetstream.xtra.co.nz) has joined #ceph
[21:26] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[21:28] * lpabon_ (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[21:30] * garphy`aw is now known as garphy
[21:31] * lpabon_ (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[21:38] * The_Bishop (~bishop@e180039238.adsl.alicedsl.de) Quit (Remote host closed the connection)
[21:39] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:41] * garphy is now known as garphy`aw
[21:41] * The_Bishop (~bishop@e180039238.adsl.alicedsl.de) has joined #ceph
[21:43] * Hell_Fire_ (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[21:44] * Hell_Fire (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[21:44] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[21:44] * bandrus (~oddo@162.223.167.195) Quit (Ping timeout: 480 seconds)
[21:45] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit (Ping timeout: 480 seconds)
[21:50] * tdasilva (~quassel@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) has joined #ceph
[21:52] * sjm (~sjm@cloudgate.cs.utsa.edu) Quit (Read error: Operation timed out)
[21:53] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[21:54] * koleosfuscus (~koleosfus@adsl-84-226-68-69.adslplus.ch) has joined #ceph
[21:55] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[21:55] * JayJ (~jayj@157.130.21.226) has joined #ceph
[21:59] * bandrus (~oddo@162.223.167.195) has joined #ceph
[21:59] * hasues (~hazuez@adsl-74-178-236-65.jax.bellsouth.net) Quit (Quit: Leaving.)
[22:03] * Sysadmin88 (~IceChat77@94.4.20.0) has joined #ceph
[22:05] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:06] <madkiss> what is the easiest way to see all currently available placement rules in the crush map?
[22:15] <Pauline_> madkiss: How about "ceph osd crush rule dump"?
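Pauline_'s suggestion plus the matching list command; dump also accepts a single rule name (the name below is the stock default, used only as an example):

    ceph osd crush rule ls
    ceph osd crush rule dump                       # all rules, as JSON
    ceph osd crush rule dump replicated_ruleset    # one rule by name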
[22:15] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[22:15] <absynth> madkiss: nice first article - glad it finally went through
[22:16] <madkiss> absynth: thank you, and I certainly do owe you a lot :)
[22:17] * Pauline_ is now known as Pauline
[22:17] <madkiss> Pauline: thanks
[22:17] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) Quit (Quit: Leaving.)
[22:19] <absynth> inktankers, in case you didn't see it yet, http://www.heise.de/ix/inhalt/2014/7/104/ (German, paywall)
[22:20] * The_Bishop (~bishop@e180039238.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[22:20] <ircolle> absynth - danke schoen! :-)
[22:22] * sjm (~sjm@cloudgate.cs.utsa.edu) Quit (Read error: Operation timed out)
[22:23] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[22:25] * lpabon (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[22:27] * dneary (~dneary@87-231-145-225.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[22:27] * tracphil (~tracphil@130.14.71.217) Quit (Quit: leaving)
[22:33] * primechuck (~primechuc@69.170.148.179) Quit (Remote host closed the connection)
[22:37] * sjm (~sjm@cloudgate.cs.utsa.edu) Quit (Read error: No route to host)
[22:38] * primechuck (~primechuc@69.170.148.179) has joined #ceph
[22:40] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[22:40] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[22:43] * yabalu007 (~yabalu007@gonzo.yabalu.ch) has joined #ceph
[22:54] * sz0_ (~sz0@94.54.193.66) has joined #ceph
[22:56] * ikrstic (~ikrstic@178-222-94-242.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[22:57] * sz0_ (~sz0@94.54.193.66) Quit ()
[22:57] * Pedras (~Adium@50.185.218.255) has joined #ceph
[22:59] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) has joined #ceph
[23:02] * The_Bishop (~bishop@e180039238.adsl.alicedsl.de) has joined #ceph
[23:03] * tdasilva (~quassel@dhcp-0-26-5a-b5-f8-68.cpe.townisp.com) Quit (Ping timeout: 480 seconds)
[23:06] * The_Bishop (~bishop@e180039238.adsl.alicedsl.de) Quit ()
[23:07] * rweeks (~rweeks@228.sub-70-197-13.myvzw.com) has joined #ceph
[23:07] * athrift (~nz_monkey@203.86.205.13) has joined #ceph
[23:10] * sjm (~sjm@cloudgate.cs.utsa.edu) has joined #ceph
[23:11] * rweeks (~rweeks@228.sub-70-197-13.myvzw.com) Quit (Read error: Connection reset by peer)
[23:11] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:14] * rweeks (~rweeks@50.141.85.7) has joined #ceph
[23:15] <athrift> Hello, we are working on a script to manually deep scrub PGs based on a number of conditions. what we cannot figure out is how to determine if a PG is currently being deep scrubbed; we can see when it has finished by looking at the timestamp, but doing a pg query does not indicate whether it is being scrubbed or not. How can we determine this?
[23:18] <singler> ceph pg dump shows current status of pg
[23:19] <singler> maybe there is better way, but I do not know it
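A sketch of the approach singler describes: while a scrub runs, the PG's state string contains 'scrubbing' (plus 'deep' for deep scrubs), so filtering the dump answers athrift's question:

    # PGs whose state currently includes scrubbing (deep or light)
    ceph pg dump pgs_brief 2>/dev/null | grep scrubbing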
[23:20] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[23:24] * dpippenger (~Adium@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[23:25] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:26] <vilobhmm> jdurgin : ping
[23:39] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[23:44] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[23:49] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[23:49] * Hell_Fire_ (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[23:59] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.