#ceph IRC Log


IRC Log for 2014-03-21

Timestamps are in GMT/BST.

[0:04] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[0:06] * pieterl (~pieterl@194.134.32.8) Quit (Read error: Operation timed out)
[0:07] * pieterl (~pieterl@194.134.32.8) has joined #ceph
[0:08] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:08] * diegows (~diegows@190.49.168.202) Quit (Ping timeout: 480 seconds)
[0:11] * wrale_ (~wrale@cpe-107-9-20-3.woh.res.rr.com) has joined #ceph
[0:15] * diegows (~diegows@190.49.168.202) has joined #ceph
[0:16] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[0:18] * wrale (~wrale@cpe-107-9-20-3.woh.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:21] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[0:21] * ChanServ sets mode +v andreask
[0:25] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[0:26] * sarob_ (~sarob@2001:4998:effd:600:c1bc:7ec8:8e0f:3a80) Quit (Remote host closed the connection)
[0:26] * sarob (~sarob@2001:4998:effd:600:c1bc:7ec8:8e0f:3a80) has joined #ceph
[0:32] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[0:34] * sarob (~sarob@2001:4998:effd:600:c1bc:7ec8:8e0f:3a80) Quit (Ping timeout: 480 seconds)
[0:35] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[0:35] * oblu- (~o@62.109.134.112) has joined #ceph
[0:36] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[0:36] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) has joined #ceph
[0:38] * diegows (~diegows@190.49.168.202) Quit (Ping timeout: 480 seconds)
[0:38] * Nats (~Nats@telstr575.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[0:39] * Nats (~Nats@telstr575.lnk.telstra.net) has joined #ceph
[0:39] * nhm (~nhm@65-128-159-155.mpls.qwest.net) Quit (Remote host closed the connection)
[0:42] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[0:46] * linuxkidd_ (~linuxkidd@rtp-isp-nat1.cisco.com) has joined #ceph
[0:48] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:51] * JRGruher (~JoeGruher@134.134.139.76) has joined #ceph
[0:54] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) Quit (Ping timeout: 480 seconds)
[0:55] * kaizh (~kaizh@128-107-239-235.cisco.com) Quit (Remote host closed the connection)
[0:56] * JoeGruher (~JoeGruher@134.134.139.76) Quit (Ping timeout: 480 seconds)
[0:56] * lluis (~oftc-webi@pat.hitachigst.com) Quit (Quit: Page closed)
[0:58] * b0e (~aledermue@rgnb-5d87890c.pool.mediaWays.net) Quit (Remote host closed the connection)
[1:12] * kaizh (~kaizh@128-107-239-234.cisco.com) has joined #ceph
[1:14] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[1:18] * sjustwork (~sam@2607:f298:a:607:74f9:232d:3326:b73b) Quit (Quit: Leaving.)
[1:19] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[1:21] * Steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:21] * kaizh (~kaizh@128-107-239-234.cisco.com) Quit (Ping timeout: 480 seconds)
[1:22] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:27] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:28] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:31] * Guest3952 (~me@port-19899.pppoe.wtnet.de) Quit (Ping timeout: 480 seconds)
[1:31] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[1:35] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[1:37] * bitblt (~don@128-107-239-233.cisco.com) Quit (Read error: Connection reset by peer)
[1:38] <taras> looks like a bunch of inktank guys are in pdx today
[1:38] <taras> anyone still here on friday?
[1:38] <taras> mozilla is looking to stand up a few ceph clusters, would love to chat irl
[1:40] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:49] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[1:50] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[1:55] <fedgoat> anyone know why this is giving me an error when trying to delete a bucket? radosgw-admin --bucket=xyz bucket rm returns: ERROR: could not remove bucket xyz
[1:55] <Guest3584> Hi guys, wonder if any of the senior ceph users can answer a question. I am trying to deploy a 3 node ceph cluster (for testing, but possibly would start with 3-5 in production once I get to know the system).
[1:55] <Guest3584> What would happen if I have 3 nodes with 8 hdds (osds) each, and 1 of the nodes went completely down ... so 24 total osds ... 8 disappear... could the cluster survive that?
[1:56] * cofol1986 (~xwrj@110.90.119.113) Quit (Read error: Connection reset by peer)
[1:56] * rmoe (~quassel@12.164.168.117) Quit (Read error: Operation timed out)
[1:56] <Guest3584> and second part of the question, if I brought that node back up (say because of a failed power supply that I replaced), what would it take for the cluster to resume operation?
[1:56] * Guest3584 is now known as pasha_ceph
[2:00] <fedgoat> should be fine as long as you have replicas on the other host or hosts.
[2:02] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[2:02] * sroy (~sroy@96.127.230.203) has joined #ceph
[2:05] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[2:06] * LeaChim (~LeaChim@host86-159-235-225.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:06] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) Quit (Quit: Leaving.)
[2:10] * JeffK (~Jeff@38.99.52.10) Quit (Quit: JeffK)
[2:10] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[2:12] <pasha_ceph> fedgoat: thanks for the response... so if I understand correctly.... cluster would be 24 OSDs, I would configure say 2 copies of the data to exist at all times, which means that the algorithm should allow for up to 12 OSDs to fail before the cluster would become degraded ... AND the algorithm is smart enough to distribute that data in such a way that you can lose 8 OSDs on a single host?
[2:12] <pasha_ceph> and cluster should still be ok because there are still 16 OSDs
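Whether a cluster like that survives a whole node going down comes down to the pool's replica count and to CRUSH using the host as the failure domain, so that copies never land on the same node. A minimal sketch of the two knobs involved, assuming a pool named rbd and the stock decompiled-crushmap syntax (names here are illustrative, not taken from the discussion above):

    # keep 2 copies of every object in the pool
    ceph osd pool set rbd size 2

    # default replicated rule: one replica per host
    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

With a host-level failure domain, losing one 8-OSD node still leaves a copy of each object on the surviving nodes (the cluster runs degraded until it re-replicates); when the node comes back, its OSDs rejoin and recovery/backfill happens automatically.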
[2:17] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:25] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[2:26] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit ()
[2:31] * bandrus (~Adium@adsl-75-5-246-77.dsl.scrm01.sbcglobal.net) Quit (Quit: Leaving.)
[2:42] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[2:42] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[2:43] * sarob (~sarob@2001:4998:effd:600:f825:9eab:f552:2beb) has joined #ceph
[2:47] * JRGruher (~JoeGruher@134.134.139.76) Quit (Remote host closed the connection)
[2:50] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:51] * sarob (~sarob@2001:4998:effd:600:f825:9eab:f552:2beb) Quit (Ping timeout: 480 seconds)
[2:52] * sarob (~sarob@2001:4998:effd:600:159e:87ba:2599:41d) has joined #ceph
[2:53] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[2:53] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[2:55] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:57] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[2:57] * Boltsky (~textual@office.deviantart.net) Quit (Ping timeout: 480 seconds)
[3:00] * sarob (~sarob@2001:4998:effd:600:159e:87ba:2599:41d) Quit (Ping timeout: 480 seconds)
[3:01] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:02] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[3:02] * haomaiwang (~haomaiwan@106.38.255.122) Quit (Ping timeout: 480 seconds)
[3:05] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[3:05] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:16] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:19] * oblu (~o@62.109.134.112) has joined #ceph
[3:20] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[3:21] * oblu- (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[3:23] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:24] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[3:26] * lianghaoshen (~slhhust@119.39.124.239) has joined #ceph
[3:27] * lianghaoshen (~slhhust@119.39.124.239) Quit ()
[3:29] * talonisx (~kvirc@pool-108-18-97-131.washdc.fios.verizon.net) has joined #ceph
[3:30] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit (Quit: mtanski)
[3:31] * sroy (~sroy@96.127.230.203) Quit (Quit: Quitte)
[3:37] * shang (~ShangWu@114-32-21-24.HINET-IP.hinet.net) has joined #ceph
[3:42] * talonisx (~kvirc@pool-108-18-97-131.washdc.fios.verizon.net) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[3:49] * Siva (~sivat@50-76-52-229-ip-static.hfc.comcastbusiness.net) has joined #ceph
[3:57] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[4:00] * Nats (~Nats@telstr575.lnk.telstra.net) Quit (Read error: Connection reset by peer)
[4:05] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[4:06] * Nats (~Nats@telstr575.lnk.telstra.net) has joined #ceph
[4:06] * hasues (~hazuez@12.216.44.38) has joined #ceph
[4:07] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[4:09] * oblu- (~o@62.109.134.112) has joined #ceph
[4:10] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[4:14] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) has joined #ceph
[4:18] * jeff-YF_ (~jeffyf@67.23.123.228) has joined #ceph
[4:19] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[4:23] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:23] * jeff-YF_ is now known as jeff-YF
[4:25] * davidzlap (~Adium@ip68-5-239-214.oc.oc.cox.net) Quit (Quit: Leaving.)
[4:30] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[4:38] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:38] * Siva (~sivat@50-76-52-229-ip-static.hfc.comcastbusiness.net) Quit (Quit: Siva)
[4:53] * shang_ (~ShangWu@175.41.48.77) has joined #ceph
[4:55] * shang_ (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[5:00] * shang (~ShangWu@114-32-21-24.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[5:05] * shang (~ShangWu@175.41.48.77) has joined #ceph
[5:10] * Vacum (~vovo@i59F7A07F.versanet.de) has joined #ceph
[5:14] * wrale_ (~wrale@cpe-107-9-20-3.woh.res.rr.com) Quit (Quit: Leaving...)
[5:17] * Vacum_ (~vovo@88.130.203.71) Quit (Ping timeout: 480 seconds)
[5:20] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) has joined #ceph
[5:32] * haomaiwang (~haomaiwan@117.79.232.154) has joined #ceph
[5:35] * hasues (~hazuez@12.216.44.38) Quit (Remote host closed the connection)
[5:40] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[5:42] * hasues (~hazuez@12.216.44.38) has joined #ceph
[5:44] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[5:54] * Siva (~sivat@50-76-52-231-ip-static.hfc.comcastbusiness.net) has joined #ceph
[6:05] * Siva_ (~sivat@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[6:11] * haomaiwang (~haomaiwan@117.79.232.154) Quit (Remote host closed the connection)
[6:11] * Siva (~sivat@50-76-52-231-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[6:11] * Siva_ is now known as Siva
[6:11] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[6:14] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:22] * haomaiwa_ (~haomaiwan@117.79.232.254) has joined #ceph
[6:22] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[6:29] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[6:31] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:34] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[6:41] * hasues (~hazuez@12.216.44.38) Quit (Quit: Leaving.)
[6:47] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[6:47] * jeff-YF (~jeffyf@67.23.123.228) Quit (Quit: jeff-YF)
[6:49] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[6:50] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[7:05] * athrift (~nz_monkey@203.86.205.13) Quit (Quit: No Ping reply in 180 seconds.)
[7:06] * athrift (~nz_monkey@203.86.205.13) has joined #ceph
[7:35] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[7:39] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[7:43] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Ping timeout: 480 seconds)
[7:53] * bboris (~boris@78.90.142.146) Quit (Ping timeout: 480 seconds)
[7:58] * Cube (~Cube@66-87-67-34.pools.spcsdns.net) Quit (Quit: Leaving.)
[8:01] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[8:01] * fghaas1 (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[8:01] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Read error: Connection reset by peer)
[8:15] * isodude_ (~isodude@kungsbacka.oderland.com) Quit (Remote host closed the connection)
[8:16] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[8:28] * Siva (~sivat@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[8:41] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:47] * joef (~Adium@2620:79:0:131:f19d:3ffd:a4cf:2fbb) Quit (Read error: Connection reset by peer)
[8:48] * joef (~Adium@2620:79:0:131:6976:4a68:298:44d) has joined #ceph
[8:48] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: Why is the alphabet in that order? Is it because of that song?)
[8:51] * ksingh (~Adium@teeri.csc.fi) has joined #ceph
[8:51] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:53] * analbeard (~shw@host31-53-108-38.range31-53.btcentralplus.com) has joined #ceph
[9:00] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[9:02] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[9:02] * ChanServ sets mode +v andreask
[9:03] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[9:04] * analbeard (~shw@host31-53-108-38.range31-53.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[9:05] * fred` (fred@2001:4dd0:ff00:8ea1:2010:abec:24d:2500) Quit (Ping timeout: 480 seconds)
[9:10] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:15] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[9:16] * analbeard (~shw@141.0.32.124) has joined #ceph
[9:21] * oblu (~o@62.109.134.112) has joined #ceph
[9:23] * oblu- (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[9:23] * oblu (~o@62.109.134.112) Quit ()
[9:24] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:30] * oblu (~o@62.109.134.112) has joined #ceph
[9:33] * oblu (~o@62.109.134.112) Quit ()
[9:42] * oblu (~o@62.109.134.112) has joined #ceph
[9:49] * thb (~me@port-14241.pppoe.wtnet.de) has joined #ceph
[9:50] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[9:52] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[9:55] * TMM (~hp@c97185.upc-c.chello.nl) Quit (Ping timeout: 480 seconds)
[9:59] * oblu (~o@62.109.134.112) has joined #ceph
[10:13] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[10:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:20] * sleinen (~Adium@2001:620:0:46:3555:599a:3e5f:1209) has joined #ceph
[10:21] * princeholla (~princehol@p5DE9440B.dip0.t-ipconnect.de) has joined #ceph
[10:23] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[10:26] * allsystemsarego (~allsystem@5-12-37-194.residential.rdsnet.ro) has joined #ceph
[10:29] * bboris (~boris@router14.mail.bg) has joined #ceph
[10:33] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:36] * sleinen (~Adium@2001:620:0:46:3555:599a:3e5f:1209) has left #ceph
[10:37] * LeaChim (~LeaChim@host86-159-235-225.range86-159.btcentralplus.com) has joined #ceph
[10:45] * princeholla (~princehol@p5DE9440B.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[10:49] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[10:56] * jtangwk1 (~Adium@gateway.tchpc.tcd.ie) Quit (Quit: Leaving.)
[10:59] * glambert (~glambert@37.157.50.80) has joined #ceph
[11:13] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[11:25] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[11:28] * jtangwk (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[11:29] * ghartz (~ghartz@91.207.208.9) has joined #ceph
[11:29] <ghartz> hi
[11:31] <ghartz> in the slides of sebastian han during the frankfurt summit (http://fr.slideshare.net/Inktank_Ceph/ceph-performance) he said there will be no more journal for OSDs
[11:31] <ghartz> where can I find information about it ?
[11:37] * mattt (~textual@CPE68b6fcfafe43-CM68b6fcfafe40.cpe.net.cable.rogers.com) has joined #ceph
[11:39] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[11:48] * mattt (~textual@CPE68b6fcfafe43-CM68b6fcfafe40.cpe.net.cable.rogers.com) Quit (Quit: Computer has gone to sleep.)
[12:11] * rudolfsteiner (~federicon@201-246-70-75.baf.movistar.cl) has joined #ceph
[12:20] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[12:23] * rudolfsteiner (~federicon@201-246-70-75.baf.movistar.cl) Quit (Quit: rudolfsteiner)
[12:24] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[12:28] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[12:31] * diegows (~diegows@190.190.5.238) has joined #ceph
[12:36] <glambert> any news on 0.78?
[12:36] * allsystemsarego_ (~allsystem@5-12-37-194.residential.rdsnet.ro) has joined #ceph
[12:37] * allsystemsarego_ (~allsystem@50c25c2.test.dnsbl.oftc.net) Quit ()
[12:37] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (Remote host closed the connection)
[12:37] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[12:37] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[12:50] * garphy`aw is now known as garphy
[12:51] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[12:59] <jerker> ghartz: cool
[12:59] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[13:00] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[13:00] * b0e (~aledermue@juniper1.netways.de) Quit (Remote host closed the connection)
[13:01] * The_Bishop (~bishop@f055126182.adsl.alicedsl.de) has joined #ceph
[13:09] * ksingh (~Adium@teeri.csc.fi) Quit (Quit: Leaving.)
[13:10] * dis is now known as Guest4012
[13:10] * dis (~dis@109.110.66.239) has joined #ceph
[13:11] <fedgoat> some reason i cannot rm 2 specific buckets with rados..the data seems to have been removed but the buckets are still there...anyone have any ideas?
[13:12] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[13:12] * Guest4012 (~dis@109.110.66.238) Quit (Ping timeout: 480 seconds)
[13:12] <alphe> hello all !
[13:14] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[13:16] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[13:17] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[13:21] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) Quit (Quit: quit)
[13:25] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[13:25] * leseb (~leseb@185.21.172.77) has joined #ceph
[13:26] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Read error: Operation timed out)
[13:28] * mattt_ (~textual@92.52.76.140) has joined #ceph
[13:44] <fedgoat> this is driving me crazy..i have 2 stale buckets, i cant get stats on, and simply won't die...osd's got full yesterday, i attempted to rm the buckets..somehow the data got purged which took all night, but the buckets are still there...anyone know how I can remove them..
[13:49] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[13:49] * mattt_ (~textual@92.52.76.140) Quit (Read error: Connection reset by peer)
[13:50] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:00] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[14:05] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[14:09] * ksingh (~Adium@teeri.csc.fi) has joined #ceph
[14:11] * thomnico (~thomnico@2001:920:7000:101:bdc6:d180:2e17:1710) has joined #ceph
[14:11] * BillK (~BillK-OFT@106-69-6-106.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[14:13] * `jpg (~josephgla@ppp121-44-151-43.lns20.syd7.internode.on.net) has joined #ceph
[14:15] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[14:15] * ksingh (~Adium@teeri.csc.fi) Quit (Read error: Connection reset by peer)
[14:24] <alphe> fedgoat hum are the disks 100% full ?
[14:24] <alphe> because linux can't wipe data from 100% full disks
[14:25] <fedgoat> no, one osd was "full" 5 others were near full. ratios are set at .95 by default
[14:25] <fedgoat> i was able to rebalance the cluster after adding 3 additional osds
[14:25] <alphe> ok 3 new disks ?
[14:25] <fedgoat> i was able to remove a few buckets, but radosgw-admin bucket rm on the other 2 buckets which stored all the data isn't working. The data though ended up purging somehow, but the buckets are still there
[14:26] <fedgoat> yea 3 new disks..and i added 3 additional so 6 new osds
[14:26] <fedgoat> the cluster is healthy now..
[14:26] <fedgoat> i even tried to remove the .rgw.buckets pool i recreated it
[14:26] <fedgoat> im able to create new buckets and objects
[14:27] <alphe> fedgoat hum crazy solution: zap the full disk and recreate the osd
[14:27] <fedgoat> the disks arent full
[14:27] <fedgoat> the datas gone now
[14:27] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[14:27] <fedgoat> somehow the bucket rm purged the data, which took all night, about 8 TB of data
[14:27] <alphe> ok and what is your ceph cluster doing ?
[14:28] <alphe> health ok ?
[14:28] <fedgoat> radosgw-admin bucket list it shows the 2 buckets i cant remove them
[14:28] <fedgoat> health is OK
[14:28] <fedgoat> the 2 buckets which had all the data are left..they're stale or something
[14:28] <alphe> strange but not surprising, I had problems with s3 amazon users you couldn't delete and that had access to nothing
[14:29] <alphe> but the data was there, the user simply couldn't log in to access it ...
[14:29] <alphe> s3 amazon style radosgw
[14:29] <fedgoat> right, that's what ive got setup
[14:29] <fedgoat> we needed s3 compat gateway for a client, this is a POC environment
[14:29] <alphe> hum fedgoat there is a trick to do with the map ...
[14:30] <fedgoat> what worries me is there isn't any real documentation on recovering from FULL OSDs other than "adding more OSDs" which seems unacceptable
[14:30] <kraken> http://i.imgur.com/BwdP2xl.gif
[14:30] <alphe> some commands allow you to rebuild the map
[14:30] <fedgoat> even after adding more OSDs i was still not able to remove the buckets, then all of a sudden the data started being removed.. now the buckets are still there stuck
[14:31] <alphe> fedgoat ... don't say there is no documentation, that will sadden the people in charge of maintaining it
[14:31] <fedgoat> i attempted to do a bucket check and --fix
[14:32] <fedgoat> well i have to support this in production, so when things break..sitting in IRC waiting for a week for help from devs makes it hard to explain to customers while their data is locked out
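On the full-OSD complaint above: besides adding OSDs, the usual emergency valve in this generation of Ceph is to raise the full ratio just long enough for deletes to go through, then put it back (a sketch; 0.97 is an arbitrary example, and it only buys headroom, not space):

    ceph pg set_full_ratio 0.97      # temporarily lift the cluster-wide full threshold
    # delete / purge data, watch 'ceph df', then restore the default:
    ceph pg set_full_ratio 0.95

A hard-full cluster blocks writes, and RGW deletes still need to write bucket-index updates, which may be why the bucket removal only made real progress after the extra OSDs went in.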
[14:32] <alphe> but I see your point and I agree, most of my path along the radosgw S3 implementation was a lonely path made of mistakes and solving those mistakes, then I grew tired of it and went the cephfs way which is a bug house, so now I'm on RBD
[14:33] <alphe> fedgoat the idea is more to force a rebuild of the map tables after the bucket rm
[14:33] <fedgoat> right how
[14:33] <fedgoat> i tried the --fix option
[14:33] <fedgoat> for bucket check
[14:33] <alphe> http://tracker.ceph.com/issues/5197
[14:34] <alphe> fedgoat read this, maybe you will find some useful insight there
[14:35] <alphe> see, the rm bucket doesn't work because there is no more omap related to it ...
[14:35] <fedgoat> right
[14:35] <fedgoat> that seems like it could be the issue
[14:35] <fedgoat> but what the fix may be, im not sure
[14:36] <fedgoat> this was put here 10 months ago with no resolution :(
[14:36] <alphe> fedgoat and unfortunately it is an unsolved issue so ... act as if those buckets don't exist and go on with the next buckets
[14:36] <fedgoat> haha
[14:36] <alphe> ... oh but the problem is the slowness of the system ...
[14:37] <fedgoat> yea, it takes forever to remove a bucket with a lot of data
[14:37] <alphe> fedgoat maybe ring an alarm bell on the mailing list exposing your problem, how exactly you got there and the exact consequences
[14:37] <fedgoat> granted this is a POC environment not on 10G links and with sata drives...
[14:37] <alphe> the mailing list is read by far more active people than this chat
[14:38] * ircolle (~Adium@207-109-47-67.dia.static.qwest.net) has joined #ceph
[14:38] <alphe> hi ircolle !
[14:38] <fedgoat> thanks at least i can reference the bug report and see if that gets me anywhere. I can try to keep tracking this down as well, but don't want to spend too much time on this
[14:39] <alphe> maybe you could be able to help fedgoat with his bucket problems
[14:39] <fedgoat> yea, ill see if i can track it down and come up with a solution to provide
[14:40] <alphe> anyways passing through the mailing list is always a good thing, it is more read and exposed for a longer time
[14:40] <alphe> so cases there often lead to new documentation being written to summarize the problem/solution exposed
[14:41] <alphe> there are 287 people here but few are really reading/active
[14:41] <fedgoat> thanks for the suggestion, ill run this by the mailing list
[14:41] <alphe> thanks to you for taking the time to do so
[14:42] * fatih (~fatih@78.186.36.182) has joined #ceph
[14:42] <fedgoat> how am i able to edit the omap entries?
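One way to at least look at the bucket-index omap that the tracker issue above talks about is the plain rados tool; a rough sketch, assuming the default .rgw.buckets.index pool name and a placeholder bucket name/marker:

    # find the bucket's id / marker
    radosgw-admin metadata get bucket:stuckbucket

    # index objects are named .dir.<marker>
    rados -p .rgw.buckets.index ls | grep '^\.dir\.'

    # dump the omap keys of one index object
    rados -p .rgw.buckets.index listomapkeys .dir.<marker>

Hand-editing those keys (rados rmomapkey / setomapval) is possible but risky on a production-bound setup, so the mailing-list route suggested above is probably the safer move.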
[14:44] * ksingh (~Adium@2001:708:10:10:c848:ad15:3f:f6b1) has joined #ceph
[14:45] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[14:49] * shang (~ShangWu@175.41.48.77) has joined #ceph
[14:52] * ksingh (~Adium@2001:708:10:10:c848:ad15:3f:f6b1) Quit (Ping timeout: 480 seconds)
[14:53] * sroy (~sroy@207.96.182.162) has joined #ceph
[14:53] * hasues (~hazuez@12.216.44.38) has joined #ceph
[14:54] <alphe> fedgoat did you do the bucket unlink ?
[14:54] <fedgoat> yea that doesnt do anything
[14:54] <fedgoat> bucket still exists
[14:55] <fedgoat> do you know how i can view the users omap index
[14:55] <alphe> and the object rm ?
[14:55] <fedgoat> tried that also
[14:55] <alphe> --purge-objects ?
[14:55] <fedgoat> tried --purge-objects yes
[14:55] <fedgoat> still no go
[14:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[14:56] <fedgoat> i can create and remove new buckets..its just these 2 that are stuck, probably in the users index
[14:56] <fedgoat> im tempted to remove the user if that would resolve the issue, but im not sure it would
[14:58] <alphe> what killed my s3 radosgw was the passage of the region to zone concept
[14:59] <alphe> radosgw-admin zone list what does it shows to you ?
[14:59] * thomnico (~thomnico@2001:920:7000:101:bdc6:d180:2e17:1710) Quit (Quit: Ex-Chat)
[14:59] <fedgoat> just default zone
[15:02] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[15:02] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[15:02] <glambert> is there a guestimated release date for 0.78? waiting on the rbd-fuse fix by Ilya
[15:05] <bboris> how do i create a crushmap bucket with a non-working monitor?
[15:06] <bboris> or import an osdmap that i modified with the bucket created back to the monitor store
[15:06] <alphe> hum ... bboris don t know
[15:07] <alphe> how can you create a bucket on a specific osd without having it active in the ceph cluster
[15:07] <alphe> bboris the idea of ceph is to share resources, not to focus resources on a specific node ...
[15:08] <bboris> sure, but when your monitor fails to start because it tries to apply a rule to a non-existing bucket
[15:08] <bboris> what do you do?
[15:08] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Ping timeout: 480 seconds)
[15:08] <alphe> you can limit a bucket size, but that means the bucket will be confined to a single point on your ceph cluster
[15:09] <alphe> bboris remove the bucket ?
[15:09] <alphe> or create the bucket ...
[15:09] <fedgoat> is there a way to see which buckets are linked to which user?
[15:09] <alphe> so the nonexistent then exists and the world is happy
[15:09] <bboris> how? i need the monitor running to create it.. or... ?
[15:09] <alphe> bboris since the monitor doesn't start you can't wipe and restart
[15:10] <glambert> sage, do you know when the release will be?
[15:10] <alphe> bboris since the monitor doesn't start you can't do anything on the ceph cluster other than wipe and restart
[15:10] <bboris> wipe what? the whole cluster with the data?
[15:11] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[15:11] <alphe> glambert some moment to another this week said sage on the mailing list
[15:12] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Operation timed out)
[15:12] <alphe> but glambert seems like they are giving the last polishing touch can be next week too
[15:12] <sage> hopefully today
[15:12] <alphe> sage :)
[15:12] <sage> for 0.78; but firefly will be 79 or 80
[15:12] <alphe> sage will there be a saucy oriented package this time ?
[15:12] <sage> 78 will have all fo the core functionality for proper testing tho
[15:12] <sage> yes
[15:12] <sage> 90% sure
[15:13] <alphe> sage if you take 2 or 3 extra days to finish the testing process this is not a drama ...
[15:13] <alphe> I mean i prefer a fully tested release to an oooops release
[15:13] * fghaas1 (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[15:13] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:14] <alphe> sage fedgoat had an interesting behavior to submit
[15:14] <bboris> sage, yesterday you helped me with exporting the osdmap using ceph-monstore-tool. i modified the map to my needs, but can't figure out how to import it back to the monitor store. ceph-monstore-tool doesnt have setosdmap command
[15:14] <bboris> sorry to interrupt the conversation
[15:15] <glambert> sage, that would be bloody brilliant if it is released today
[15:15] <alphe> sage fedgoat's radosgw oriented storage got to a near full status that reached 95% of use, he removed the data of 2 buckets but now he fails to remove the buckets themselves
[15:15] <glambert> really could do with getting the work I need to do with rbd-fuse done over the weekend
[15:15] <sage> bboris: you can do 'ceph osd set' to import a fresh osdmap.. but that is usually not a good idea. if your monitors aren't running it's the monmap that needs to get fixed, not hte osdmap
[15:15] <glambert> but obviously that's no concern of yours :)
[15:16] <alphe> it seems like this bug http://tracker.ceph.com/issues/5197
[15:16] <fedgoat> yea these buckets will not die..but yet the data got purged from the buckets
[15:16] <sage> glambert: remember 78 != firefly, but it's something you can go play with :)
[15:16] <fedgoat> when i go through the api its listing the buckets for the user, so im assuming it may be related to the bug alphe posted...
[15:16] <glambert> sage, I know, but Ilya confirmed the bug fix for rbd-fuse is in 78
[15:16] <sage> sorry have to wake up the kids for school, back online in a bit
[15:16] <sage> glambert: ah cool
[15:16] <glambert> :)
[15:17] <glambert> and it's in the release notes online
[15:19] <bboris> sage: ceph-mon crashes on a rule defined in the crush map
[15:20] <bboris> sage: what can i do about it with the monmap
[15:20] * ircolle (~Adium@207-109-47-67.dia.static.qwest.net) Quit (Quit: Leaving.)
[15:21] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) has joined #ceph
[15:22] <bboris> sage: ceph osd set is not an option because the monitor itself is not working and the ceph command times out
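For reference, the usual way to add a missing crush bucket or fix a rule is to round-trip the crushmap through a running monitor; a minimal sketch with placeholder file names:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt     # decompile to editable text
    # add the missing bucket / adjust the rule in crush.txt, then:
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new

That path obviously needs the monitor up, which is exactly what is broken here, so bboris is stuck with the offline monitor-store route (ceph-monstore-tool / monmaptool) that the rest of this thread is about.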
[15:23] * joao (~joao@a95-92-33-54.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[15:25] * ksingh (~Adium@2001:708:10:10:2961:1882:8523:d216) has joined #ceph
[15:25] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[15:25] * thomnico (~thomnico@2001:920:7000:101:bdc6:d180:2e17:1710) has joined #ceph
[15:30] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[15:30] <alphe> bboris and you have a single monitor ?
[15:31] <alphe> bboris without a working monitor you are game over
[15:31] * bitblt (~don@128-107-239-233.cisco.com) has joined #ceph
[15:31] <alphe> starting anew your ceph cluster is the only alternative I see ...
[15:32] <alphe> other manipulation will not make your monitor able to comeback
[15:32] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:32] * joao (~joao@a95-92-33-54.cpe.netcabo.pt) has joined #ceph
[15:32] * ChanServ sets mode +o joao
[15:32] * thomnico (~thomnico@2001:920:7000:101:bdc6:d180:2e17:1710) Quit (Read error: Connection reset by peer)
[15:33] * rahat (~rmahbub@128.224.252.2) has joined #ceph
[15:33] * thomnico (~thomnico@2001:920:7000:101:bdc6:d180:2e17:1710) has joined #ceph
[15:33] <bboris> yea, currently i have one monitor. yesterday i had 2 and couldn't save the data also. the docs recommend an odd number, so i'll be trying 3 in a few minutes
[15:33] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Ping timeout: 480 seconds)
[15:33] <alphe> fedgoat and if you remove the .bucket pool and recreate it ?
[15:36] <alphe> The index pool for default placement is .rgw.buckets.index and for the data pool for default placement is .rgw.buckets.
[15:36] * thomnico (~thomnico@2001:920:7000:101:bdc6:d180:2e17:1710) Quit ()
[15:36] <alphe> bboris a single monitor doesn't work
[15:36] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[15:37] <alphe> that is why it is recommended to have at least 3 monitors, in order to always have 2 running
[15:38] <bboris> alphe: interesting thing was that it seems that 2/2 monitors broke yesterday
[15:38] <bboris> not just the one
[15:38] <bboris> also, my next question is:
[15:38] <bboris> as i have only 2 hosts, can i have two monitors on a single host
[15:38] <bboris> and one on the other
[15:38] <alphe> bboris nope
[15:39] <bboris> bad
[15:39] <alphe> bboris each monitor needs its own ip/ports
[15:39] <bboris> yeah, i was hoping i can specify different ports for two monitors
[15:39] <alphe> bboris you can't do ceph on a single machine, it wasn't designed for that use
[15:39] <alphe> use virtualmachines
[15:40] * markbby (~Adium@168.94.245.3) has joined #ceph
[15:40] <alphe> then you can put all the ceph nodes you need on a single hardware resource, which will be laggy for everything
[15:41] * zirpu (~zirpu@2600:3c02::f03c:91ff:fe96:bae7) has joined #ceph
[15:42] <alphe> bboris for the purpose of putting monitors on a single hardware resource you can use virtual machines
[15:43] <bboris> got it
[15:43] <bboris> thanks
[15:43] <alphe> you don't need extra fancy stuff, give them some ram, at least 2 cores per monitor and 10gb of disk space
[15:43] <alphe> then you can manipulate them, replicate them etc as you see fit
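For what it's worth, the config format itself does let two monitors share a box as long as each one gets its own ip:port; a sketch of the relevant ceph.conf sections (hosts and addresses are placeholders):

    [mon.a]
        host = node1
        mon addr = 192.168.0.10:6789
    [mon.b]
        host = node1
        mon addr = 192.168.0.10:6790
    [mon.c]
        host = node2
        mon addr = 192.168.0.11:6789

The catch is the one alphe is getting at: if node1 dies you lose two of the three monitors at once and quorum is gone anyway, which is why separate machines (or at least separate VMs) are the usual advice.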
[15:44] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[15:46] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Operation timed out)
[15:47] * ksingh (~Adium@2001:708:10:10:2961:1882:8523:d216) Quit (Ping timeout: 480 seconds)
[15:57] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[15:58] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:59] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[16:02] * vata (~vata@2607:fad8:4:6:611d:c3:feef:ee64) Quit (Quit: Leaving.)
[16:03] * bandrus (~Adium@c-98-238-176-251.hsd1.ca.comcast.net) has joined #ceph
[16:04] * rahat (~rmahbub@128.224.252.2) has left #ceph
[16:07] * glambert (~glambert@37.157.50.80) Quit (Quit: <?php exit(); ?>)
[16:09] * b0e (~aledermue@juniper1.netways.de) Quit (Remote host closed the connection)
[16:15] * newbie|2 (~kvirc@pool-108-18-97-131.washdc.fios.verizon.net) has joined #ceph
[16:16] * newbie|2 is now known as talonisx
[16:30] * JeffK (~Jeff@38.99.52.10) has joined #ceph
[16:33] * bandrus1 (~Adium@c-98-238-176-251.hsd1.ca.comcast.net) has joined #ceph
[16:33] * bandrus (~Adium@c-98-238-176-251.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[16:34] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Quit: Leaving.)
[16:35] * vata (~vata@2607:fad8:4:6:69e5:96f1:b720:1283) has joined #ceph
[16:36] * bandrus (~Adium@adsl-75-5-254-73.dsl.scrm01.sbcglobal.net) has joined #ceph
[16:36] * stewiem2000 (~stewiem20@195.10.250.233) has joined #ceph
[16:39] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[16:40] * jtangwk (~Adium@gateway.tchpc.tcd.ie) Quit (Quit: Leaving.)
[16:43] * bandrus1 (~Adium@c-98-238-176-251.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:51] * diegows (~diegows@190.49.168.202) has joined #ceph
[16:52] * `jpg (~josephgla@ppp121-44-151-43.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[16:52] * jtangwk (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[16:53] * rmoe (~quassel@12.164.168.117) has joined #ceph
[16:55] * dunswill (~wnda@gw2.maxtelecom.bg) has joined #ceph
[16:56] <dunswill> hi guys, I'm wondering about the OSD enumeration
[16:56] <dunswill> i understand OSD has UUID and a #
[16:57] <dunswill> i'm wondering if i have two storages in a cluster, on storage1 i got osd.0 with uuid=deadbeefdeadbeef, on storage2 i got osd.0 with uuid=
[16:58] <dunswill> on storage2 i got osd.0 with uuid=addictedtocaffe
[16:58] <dunswill> will there be conflict with osd.0 having the same # on both storages?
[16:59] <dunswill> do i have to enumerate OSDs on the second storage starting with something bigger, like osd.10 and so on ?
[16:59] <dwm> dunswill: I think you need to explain how you'd get two OSDs with the same numeric id.
[16:59] <dwm> dunswill: Ah, yes.
[17:00] <dunswill> dwm: so i have to have different osd.# ?
[17:00] <dunswill> in fact ceph-deploy should manage to do that
[17:00] <dunswill> if i get it right
[17:00] <dwm> Generally, you don't assign OSD ids yourself; minting a new OSD will cause this to happen for you.
[17:01] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[17:01] <dwm> Note that you might have a situation with several different OSDs with the same IDs -- if you have multiple independent Ceph clusters.
[17:01] * Shmouel1 (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[17:01] <dwm> (i.e. osd.0 in cluster 'ceph-A', osd.0 in cluster 'ceph-B', etc.)
[17:01] <dunswill> i see
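To make the allocation concrete: the numeric id is handed out by the cluster when the OSD is created, so within one cluster two hosts never end up with the same osd.N. A short sketch (the uuid is just an example, echoing the one above):

    # returns the next free id: 0, 1, 2, ...
    ceph osd create deadbeef-dead-beef-dead-beefdeadbeef

    # ceph-deploy / ceph-disk run this step for you under the hood

So the first OSD prepared on storage2 simply comes back as the next free number (osd.8 if storage1 already holds osd.0-7), and there is no need to reserve a range like osd.10+ by hand.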
[17:03] <loicd> is it cheaper to store on tape than on disk, assuming the I/O load is comparable ?
[17:04] <dunswill> dwm: is http://ceph.com/docs/master/rados/ fresh enough or is there some pdf on the site that's more updated and precise
[17:04] <dunswill> cause i have seen at least several confusing things about variables in the guide
[17:06] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Read error: Operation timed out)
[17:07] <dunswill> i wanted to try to build a ceph cluster from 2 machines, with the most simple configuration, experiment with it, break it and get to know it, just configure the most basic stuff, but it appears
[17:08] <dunswill> i'll need to read everything and stick to ceph-deploy
[17:08] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[17:12] * analbeard (~shw@141.0.32.124) Quit (Quit: Leaving.)
[17:17] <joef> loicd is that a ceph question?
[17:18] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[17:18] * linuxkidd_ (~linuxkidd@rtp-isp-nat1.cisco.com) Quit (Remote host closed the connection)
[17:19] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[17:21] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[17:22] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[17:22] * kaizh (~kaizh@128-107-239-233.cisco.com) has joined #ceph
[17:23] <loicd> joef: kind of. I never considered the fact that ceph may be cheaper if using tapes instead of disks... totally ignorant
[17:24] * ghartz (~ghartz@91.207.208.9) Quit (Remote host closed the connection)
[17:24] <joef> tape is definitely cheaper
[17:24] <jtangwk> lto !
[17:25] <jtangwk> tiering to tapes would be nice
[17:25] <joef> like ltfs?
[17:25] <jtangwk> though i have never seen anyone use ltfs
[17:25] <jtangwk> at least not in academia
[17:26] <jtangwk> ibm tried to sell us some ltfs stuff a while ago
[17:26] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[17:26] <jtangwk> along with tivoli
[17:26] <joef> tivoli
[17:26] <joef> damn
[17:26] <jtangwk> we got tivoli and a big tape robot
[17:26] <jtangwk> but not the ltfs stuff
[17:27] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[17:27] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:28] <jtangwk> speaking of which, is the erasure coding work going to be production ready/alpha/beta for the firefly release?
[17:32] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[17:37] * linuxkidd (~linuxkidd@cpe-066-057-019-145.nc.res.rr.com) has joined #ceph
[17:37] <ksingh> need advice , can I use erasure coding for RBD
[17:38] * dunswill (~wnda@gw2.maxtelecom.bg) Quit (Quit: thanks)
[17:39] * fred` (fred@2001:4dd0:ff00:8ea1:2010:abec:24d:2500) has joined #ceph
[17:44] * Underbyte (~jerrad@pat-global.macpractice.net) has joined #ceph
[17:45] * linuxkidd (~linuxkidd@cpe-066-057-019-145.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:54] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) has joined #ceph
[17:56] * garphy is now known as garphy`aw
[17:58] * danieagle (~Daniel@179.176.50.252.dynamic.adsl.gvt.net.br) has joined #ceph
[18:03] <eightyeight> so, i thought i could use pre-formatted devices with ceph? if i've put four drives in a btrfs raid0, can i use that metadata, or does ceph want access to the raw devices?
[18:04] * JoeGruher (~JoeGruher@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[18:05] <eightyeight> reading: http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
[18:13] * sjustwork (~sam@2607:f298:a:607:293b:8951:4063:71f8) has joined #ceph
[18:14] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) Quit (Quit: Leaving.)
[18:15] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:16] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[18:18] <jcsp> with unusual drive configurations, there are things ceph can do that ceph-deploy doesn't support
[18:20] <eightyeight> fair enough
[18:21] <eightyeight> just trying to understand the abstract steps. with mdadm(8), i could just give ceph-deploy(1) a /dev/md/* device. because block devices are not exported with Btrfs, just trying to figure out how that fits into the picture with Ceph
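If the four-drive btrfs volume is already assembled and mounted, one route the quick-start docs of this era describe is to hand ceph-deploy a directory on that filesystem instead of a raw device; a sketch with placeholder host and mount point (not something tested here):

    mount /dev/sdb /var/lib/ceph/osd/btrfs-osd0        # any member of the btrfs raid0
    ceph-deploy osd prepare myhost:/var/lib/ceph/osd/btrfs-osd0
    ceph-deploy osd activate myhost:/var/lib/ceph/osd/btrfs-osd0

(ceph-disk prepare accepts a directory path too, for the fully manual route.) The more common layout is still one OSD per raw disk, letting Ceph do the striping rather than btrfs raid0 underneath.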
[18:26] * Cube (~Cube@12.248.40.138) has joined #ceph
[18:26] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[18:26] <JoeGruher> anyone know if we'll see 0.78 today? :)
[18:31] * hasues (~hazuez@12.216.44.38) Quit (Ping timeout: 480 seconds)
[18:37] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[18:38] * Haksoldier (~islamatta@88.234.101.67) has joined #ceph
[18:38] <Haksoldier> EUZUBILLAHIMINEŞŞEYTANIRRACIM BISMILLAHIRRAHMANIRRAHIM
[18:38] <Haksoldier> ALLAHU EKBERRRRR! LA İLAHE İLLALLAH MUHAMMEDEN RESULULLAH!
[18:38] <Haksoldier> I did the obligatory prayers five times a day to the nation. And I promised myself that, who (beside me) taking care not to make the five daily prayers comes ahead of time, I'll put it to heaven. Who says prayer does not show attention to me I do not have a word for it.! Prophet Muhammad (s.a.v.)
[18:38] <Haksoldier> hell if you did until the needle tip could not remove your head from prostration Prophet Muhammad pbuh
[18:39] * Haksoldier (~islamatta@88.234.101.67) has left #ceph
[18:41] * hasues (~hazuez@12.216.44.38) has joined #ceph
[18:42] * bandrus (~Adium@adsl-75-5-254-73.dsl.scrm01.sbcglobal.net) Quit (Quit: Leaving.)
[18:46] <alphe> JoeGruher sage said earlier in this chat that it will be out anytime today and that 0.78 was not firefly
[18:47] <alphe> [11:12] <sage> hopefully today
[18:47] <alphe> sage is 90% sure
[18:48] <alphe> it is in the test round though so ... who will live will see !
[18:48] <eightyeight> bugfix release, or new features?
[18:49] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[18:50] <eightyeight> looks like a good set of proposed features
[18:50] <eightyeight> nice
[18:54] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[18:55] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[18:55] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[18:56] * Pedras (~Adium@216.207.42.132) has joined #ceph
[18:59] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:59] * Pedras1 (~Adium@216.207.42.134) has joined #ceph
[19:04] <ksingh> need advice , can I use erasure coding for RBD ??
[19:04] * Pedras (~Adium@216.207.42.132) Quit (Ping timeout: 480 seconds)
[19:05] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[19:05] * \ask (~ask@oz.develooper.com) Quit (Ping timeout: 480 seconds)
[19:05] * mattt (~textual@CPE68b6fcfafe43-CM68b6fcfafe40.cpe.net.cable.rogers.com) has joined #ceph
[19:06] * \ask (~ask@oz.develooper.com) has joined #ceph
[19:10] <JoeGruher> thx alphe
[19:10] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[19:12] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) has joined #ceph
[19:12] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[19:13] * mattt (~textual@CPE68b6fcfafe43-CM68b6fcfafe40.cpe.net.cable.rogers.com) Quit (Ping timeout: 480 seconds)
[19:13] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[19:13] * leseb (~leseb@185.21.172.77) has joined #ceph
[19:17] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[19:18] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[19:20] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Read error: Operation timed out)
[19:23] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[19:25] * analbeard (~shw@host31-53-108-38.range31-53.btcentralplus.com) has joined #ceph
[19:25] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:28] * xarses (~andreww@173-12-165-153-oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[19:32] * markbby (~Adium@168.94.245.3) has joined #ceph
[19:36] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[19:38] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[19:38] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[19:41] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) Quit (Read error: Connection reset by peer)
[19:41] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[19:41] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:41] * ksingh (~Adium@a-v6-0002.vpn.csc.fi) has joined #ceph
[19:41] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[19:42] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[19:42] * stepheno (~oskars@70.96.128.243) has joined #ceph
[19:43] <stepheno> Hey guys, I'm trying to follow the manual deploy documentation, and i'm a bit confused about how/when the bootstrap-osd key is created
[19:44] <stepheno> I'm attempting to use the ceph-disk prepare/activate utility, but i'm not sure how to properly setup auth between the osd and the mon for this operation
[19:44] * Pedras (~Adium@216.207.42.132) has joined #ceph
[19:46] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:47] <stepheno> which file is ceph-mon checking for authentication of the client request?(/var/lib/ceph/mon/mon.hostname/keyring?)
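On the bootstrap-osd question: in a manual deploy that key has to be created explicitly and dropped where ceph-disk expects it; a rough sketch (default paths, and treat the exact caps string as an assumption to double-check against the manual-deployment docs):

    # on a monitor host
    ceph auth get-or-create client.bootstrap-osd mon 'allow profile bootstrap-osd' \
        -o /var/lib/ceph/bootstrap-osd/ceph.keyring

    # copy that keyring to the same path on the OSD host, then:
    ceph-disk prepare /dev/sdb
    ceph-disk activate /dev/sdb1

The keyring in /var/lib/ceph/mon/<cluster>-<host>/keyring holds the monitor's own key; client keys such as bootstrap-osd are validated against the cluster's auth database rather than that file.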
[19:47] * kaizh (~kaizh@128-107-239-233.cisco.com) Quit (Remote host closed the connection)
[19:47] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[19:49] * Pedras1 (~Adium@216.207.42.134) Quit (Ping timeout: 480 seconds)
[19:50] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[19:50] <Anticimex> so, swiftstack say ceph sucks for object storage.. :)
[19:51] <Anticimex> well, sales pitching or whatever
[19:53] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:55] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[19:56] * jharley (~jharley@76-10-151-146.dsl.teksavvy.com) has joined #ceph
[19:56] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[19:58] * kaizh (~kaizh@128-107-239-233.cisco.com) has joined #ceph
[19:58] * thb (~me@2a02:2028:37:50b0:6267:20ff:fec9:4e40) has joined #ceph
[19:59] * thb is now known as Guest4040
[20:02] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[20:02] * ksingh (~Adium@a-v6-0002.vpn.csc.fi) Quit (Ping timeout: 480 seconds)
[20:04] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[20:06] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[20:06] * Guest4040 (~me@2a02:2028:37:50b0:6267:20ff:fec9:4e40) Quit (Ping timeout: 480 seconds)
[20:07] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[20:07] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) has joined #ceph
[20:08] <nrs_> is anyone here using Ceph with aby DCB features on Ethernet?
[20:08] <nrs_> DCB = Data Center Bridging
[20:10] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[20:16] * mattt_ (~textual@92.52.76.140) has joined #ceph
[20:17] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[20:23] <alphe> 0] <Anticimex> hum and who cares ?
[20:24] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[20:25] <alphe> 0] <Anticimex> hum I'm eager to know what the alternatives to ceph are for swiftstack that would be better, raid10 / iscsi ?
[20:25] <alphe> laugh out loud :P
[20:26] * bboris (~boris@router14.mail.bg) Quit (Ping timeout: 480 seconds)
[20:26] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[20:27] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[20:27] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[20:27] <ponyofdeath> hi, i am using ceph rbd and am wondering how i can enable cache in the definition of the kvm guest?
[20:34] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[20:35] * danieagle (~Daniel@179.176.50.252.dynamic.adsl.gvt.net.br) Quit (Quit: Muito Obrigado por Tudo! :-))
[20:36] * pasha_ceph (~quassel@S0106c8fb267c0b17.ok.shawcable.net) Quit (Remote host closed the connection)
[20:36] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:39] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[20:39] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[20:40] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) Quit ()
[20:43] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[20:45] * Steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Read error: Operation timed out)
[20:48] * mattt_ (~textual@92.52.76.140) Quit (Read error: Connection reset by peer)
[20:49] <jharley> ponyofdeath: hopefully this helps? http://ceph.com/docs/master/rbd/qemu-rbd/
[20:50] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[20:51] <jharley> ponyofdeath: I believe RBD observes the QEMU cache option (the default being writethrough)
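If the guest is defined through libvirt, the cache mode sits on the disk's driver element; a minimal sketch with placeholder pool, image, monitor and secret values:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='libvirt-pool/my-image'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='REPLACE-WITH-SECRET-UUID'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>

With reasonably recent qemu, cache='writeback' turns the RBD cache on and cache='none' turns it off; on older qemu you also needed rbd cache = true in the [client] section of ceph.conf (or :rbd_cache=true in the disk string).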
[20:52] * jharley (~jharley@76-10-151-146.dsl.teksavvy.com) Quit (Quit: jharley)
[20:53] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[20:53] * hasues (~hazuez@12.216.44.38) Quit (Quit: Leaving.)
[20:53] * hasues (~hazuez@12.216.44.38) has joined #ceph
[20:55] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[21:00] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:01] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[21:04] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[21:06] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:09] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[21:16] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:19] * bboris (~boris@78.90.142.146) has joined #ceph
[21:19] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has left #ceph
[21:22] * fatih (~fatih@78.186.36.182) Quit (Ping timeout: 480 seconds)
[21:29] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[21:32] * sputnik13 (~sputnik13@207.8.121.241) Quit (Ping timeout: 480 seconds)
[21:34] * Steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[21:34] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[21:35] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Quit: Siva)
[21:39] * bandrus (~Adium@adsl-75-5-254-73.dsl.scrm01.sbcglobal.net) has joined #ceph
[21:40] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Ping timeout: 480 seconds)
[21:42] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[21:42] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[21:42] * hasues (~hazuez@12.216.44.38) Quit (Read error: No route to host)
[21:43] * hasues (~hazuez@12.216.44.38) has joined #ceph
[21:47] * sroy (~sroy@207.96.182.162) Quit (Quit: Quitte)
[21:50] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[21:50] * Sommarnatt (~Sommarnat@c83-251-204-51.bredband.comhem.se) has joined #ceph
[21:55] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) Quit (Quit: Leaving)
[21:55] * rmoe_ (~quassel@12.164.168.117) has joined #ceph
[21:56] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[21:59] * talonisx (~kvirc@pool-108-18-97-131.washdc.fios.verizon.net) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[21:59] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[22:01] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[22:01] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[22:04] * Cube (~Cube@12.248.40.138) Quit (Read error: Operation timed out)
[22:04] * bitblt (~don@128-107-239-233.cisco.com) Quit (Quit: Leaving)
[22:09] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:12] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[22:14] * Siva (~sivat@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:16] * Steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[22:17] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[22:26] * sputnik13 (~sputnik13@client64-171.sdsc.edu) has joined #ceph
[22:26] * sputnik13 (~sputnik13@client64-171.sdsc.edu) Quit ()
[22:26] * sputnik13 (~sputnik13@client64-171.sdsc.edu) has joined #ceph
[22:30] * vata (~vata@2607:fad8:4:6:69e5:96f1:b720:1283) Quit (Quit: Leaving.)
[22:32] * Cube (~Cube@12.248.40.138) has joined #ceph
[22:33] * Cube1 (~Cube@12.248.40.138) Quit (Read error: Operation timed out)
[22:36] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[22:38] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[22:38] * ChanServ sets mode +v andreask
[22:56] * Steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[23:03] * nhm (~nhm@65-128-159-155.mpls.qwest.net) has joined #ceph
[23:03] * ChanServ sets mode +o nhm
[23:06] * scuttlemonkey (~scuttlemo@99-6-62-94.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[23:06] * ChanServ sets mode +o scuttlemonkey
[23:10] * allsystemsarego (~allsystem@5-12-37-194.residential.rdsnet.ro) Quit (Quit: Leaving)
[23:11] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[23:21] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Remote host closed the connection)
[23:25] * ircolle (~Adium@12.69.234.201) has joined #ceph
[23:28] * jhujhiti (~jhujhiti@00012a8b.user.oftc.net) Quit (Quit: leaving)
[23:28] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Quit: Leaving)
[23:33] * sputnik13 (~sputnik13@client64-171.sdsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:34] * sputnik13 (~sputnik13@client64-171.sdsc.edu) has joined #ceph
[23:34] * Sommarnatt (~Sommarnat@c83-251-204-51.bredband.comhem.se) Quit (Remote host closed the connection)
[23:36] * JoeGruher (~JoeGruher@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[23:36] * analbeard (~shw@host31-53-108-38.range31-53.btcentralplus.com) Quit (Quit: Leaving.)
[23:38] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[23:44] * scuttlemonkey (~scuttlemo@99-6-62-94.lightspeed.livnmi.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[23:57] * ircolle (~Adium@12.69.234.201) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.