#ceph IRC Log


IRC Log for 2014-01-06

Timestamps are in GMT/BST.

[0:02] * BillK (~BillK-OFT@220-253-180-210.dyn.iinet.net.au) has joined #ceph
[0:05] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[0:06] * sagelap (~sage@172.56.12.104) has joined #ceph
[0:07] * sagelap (~sage@172.56.12.104) Quit ()
[0:21] * AfC (~andrew@182.255.121.59) has joined #ceph
[0:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[0:29] * DarkAce-Z is now known as DarkAceZ
[0:34] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[0:39] * ScOut3R (~scout3r@540205B4.dsl.pool.telekom.hu) Quit ()
[1:06] <cbob> so, we are trying to set up a cloudstack with ceph for both primary and secondary storage, and we got rbd to work, but when we get to the s3/swift portion for radosgw we run into some authentication problems. 1. is there a really good tutorial somewhere that we can use? the ceph documentation wasn't very helpful. and 2. do you install radosgw on all your hosts or just one? if just one, what happens if that node fails?
[1:22] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[1:23] * Discovery (~Discovery@192.162.100.197) has joined #ceph
[1:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[1:30] * AfC (~andrew@182.255.121.59) Quit (Quit: Leaving.)
[1:33] * geraintjones (~geraint@208.72.139.54) has joined #ceph
[1:34] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:56] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:00] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[2:03] * mkoo (~mkoo@chello213047069005.1.13.vie.surfer.at) Quit (Quit: mkoo)
[2:05] * bjornar (~bjornar@ti0099a340-dhcp0395.bb.online.no) Quit (Ping timeout: 480 seconds)
[2:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[2:05] * yanzheng (~zhyan@134.134.139.76) has joined #ceph
[2:12] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:13] * mschiff (~mschiff@port-13119.pppoe.wtnet.de) has joined #ceph
[2:19] * mschiff_ (~mschiff@port-34493.pppoe.wtnet.de) has joined #ceph
[2:26] * sarob (~sarob@2601:9:7080:13a:64e9:8210:d027:368a) has joined #ceph
[2:26] * mschiff (~mschiff@port-13119.pppoe.wtnet.de) Quit (Ping timeout: 480 seconds)
[2:34] * sarob (~sarob@2601:9:7080:13a:64e9:8210:d027:368a) Quit (Ping timeout: 480 seconds)
[2:39] * nerdtron (~oftc-webi@202.60.8.250) has joined #ceph
[2:39] <nerdtron> hi all
[2:39] <nerdtron> again, i have this, HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
[2:39] <nerdtron> i have no idea how to proceed
[2:40] * LeaChim (~LeaChim@host86-161-89-52.range86-161.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:44] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[2:45] * wschulze1 (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[2:52] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:53] * sarob (~sarob@2601:9:7080:13a:9588:dbf7:28e7:9453) has joined #ceph
[2:57] * wschulze1 (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[2:57] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[3:01] * sarob (~sarob@2601:9:7080:13a:9588:dbf7:28e7:9453) Quit (Ping timeout: 480 seconds)
[3:06] * geraintjones (~geraint@208.72.139.54) Quit (Remote host closed the connection)
[3:07] * geraintjones (~geraint@208.72.139.54) has joined #ceph
[3:07] <geraintjones> whats the output from ceph pg dump|grep inconsis
[3:08] * pingu (~christian@203-219-79-122.static.tpgi.com.au) has joined #ceph
[3:15] * haomaiwa_ (~haomaiwan@117.79.232.187) Quit (Remote host closed the connection)
[3:16] * haomaiwang (~haomaiwan@118.187.35.10) has joined #ceph
[3:16] * Discovery (~Discovery@192.162.100.197) Quit (Read error: Connection reset by peer)
[3:19] * ScOut3R (~ScOut3R@540205B4.dsl.pool.telekom.hu) has joined #ceph
[3:20] * ScOut3R (~ScOut3R@540205B4.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[3:21] * ScOut3R (~ScOut3R@540205B4.dsl.pool.telekom.hu) has joined #ceph
[3:25] * shang (~ShangWu@175.41.48.77) has joined #ceph
[3:26] * sarob (~sarob@2601:9:7080:13a:cc29:2da5:8bbc:568a) has joined #ceph
[3:28] * mattbenjamin1 (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[3:29] * ScOut3R (~ScOut3R@540205B4.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[3:33] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[3:34] * sarob (~sarob@2601:9:7080:13a:cc29:2da5:8bbc:568a) Quit (Ping timeout: 480 seconds)
[3:35] * AfC (~andrew@182.255.122.166) has joined #ceph
[3:43] * i_m (~ivan.miro@2001:470:1f0b:1a08:3e97:eff:febd:a80c) has joined #ceph
[4:03] * BillK (~BillK-OFT@220-253-180-210.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[4:04] * BillK (~BillK-OFT@124-171-169-226.dyn.iinet.net.au) has joined #ceph
[4:07] <nerdtron> geraintjones: I already tried to repair the PG
[4:07] <nerdtron> repair did not work initially; after reading the mailing list, they suggested restarting the osd and then trying repair again
[4:07] <nerdtron> this time it worked
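For reference, the usual sequence for an inconsistent placement group looks roughly like the sketch below; the pg id (2.5) and osd id (osd.3) are placeholders, and the exact restart command depends on your init system:

    ceph health detail                 # shows which pg is inconsistent, e.g. "pg 2.5 ... inconsistent"
    ceph pg dump | grep inconsistent   # same information, as geraintjones suggested
    ceph pg repair 2.5                 # ask the primary osd to repair that pg
    # if the repair never starts, restart the pg's primary osd first, then repair again
    sudo service ceph restart osd.3
    ceph pg repair 2.5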
[4:08] * mschiff_ (~mschiff@port-34493.pppoe.wtnet.de) Quit (Remote host closed the connection)
[4:12] * BillK (~BillK-OFT@124-171-169-226.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[4:14] * Freeaqingme (~dolf@5350F4AD.cm-6-1d.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[4:15] * Freeaqingme (~dolf@5350F4AD.cm-6-1d.dynamic.ziggo.nl) has joined #ceph
[4:15] * Steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[4:17] * BillK (~BillK-OFT@124-148-103-108.dyn.iinet.net.au) has joined #ceph
[4:26] * sarob (~sarob@2601:9:7080:13a:4bd:63d4:b752:b415) has joined #ceph
[4:31] * elmo (~james@faun.canonical.com) Quit (Ping timeout: 480 seconds)
[4:31] * ScOut3R (~ScOut3R@540205B4.dsl.pool.telekom.hu) has joined #ceph
[4:34] * sarob (~sarob@2601:9:7080:13a:4bd:63d4:b752:b415) Quit (Ping timeout: 480 seconds)
[4:39] * ScOut3R (~ScOut3R@540205B4.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[4:48] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[4:52] * zeb (~quassel@cortex.hznet.us) Quit (Remote host closed the connection)
[4:59] * elmo (~james@faun.canonical.com) has joined #ceph
[5:01] <pingu> does anyone have any experience with the rados_lock* family of c librados functions?
[5:01] <pingu> I'm wondering what the role of tags are
[5:01] <pingu> and if the cookie is just supposed to be a uuid of some sort
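For reference, a minimal librados C sketch of the calls pingu is asking about: the cookie is just an opaque string naming the lock holder (a uuid is a common choice), and the tag only exists for shared locks, where holders using the same tag may share the lock. Pool, object and lock names below are made up, and error checking is omitted.

    #include <rados/librados.h>
    #include <stdio.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;

        rados_create(&cluster, "admin");          /* connect as client.admin */
        rados_conf_read_file(cluster, NULL);      /* default ceph.conf locations */
        rados_connect(cluster);
        rados_ioctx_create(cluster, "rbd", &io);  /* hypothetical pool name */

        /* exclusive lock: "mylock" names the lock, the cookie identifies this holder */
        int r = rados_lock_exclusive(io, "some-object", "mylock", "cookie-uuid-1",
                                     "demo lock", NULL /* no expiry */, 0);
        printf("lock_exclusive: %d\n", r);

        /* shared variant takes an extra tag argument after the cookie:
         * rados_lock_shared(io, "some-object", "mylock", "cookie-uuid-2",
         *                   "mytag", "demo lock", NULL, 0);
         */

        rados_unlock(io, "some-object", "mylock", "cookie-uuid-1");
        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }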
[5:03] * julian (~julianwa@125.70.133.219) has joined #ceph
[5:06] * fireD (~fireD@93-139-129-73.adsl.net.t-com.hr) has joined #ceph
[5:08] * fireD_ (~fireD@93-139-169-245.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:26] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[5:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[5:34] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:34] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[5:45] * Vacum_ (~vovo@88.130.207.24) has joined #ceph
[5:52] * Vacum (~vovo@88.130.199.28) Quit (Ping timeout: 480 seconds)
[5:53] * nerdtron (~oftc-webi@202.60.8.250) Quit (Remote host closed the connection)
[6:03] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:08] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[6:08] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[6:21] * nwf (~nwf@67.62.51.95) Quit (Ping timeout: 480 seconds)
[6:24] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[6:24] * nwf (~nwf@67.62.51.95) has joined #ceph
[6:25] * paradon (~thomas@60.234.66.253) Quit (Remote host closed the connection)
[6:25] * paradon (~thomas@60.234.66.253) has joined #ceph
[6:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[6:32] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[6:33] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:33] * sarob (~sarob@2001:4998:effd:7801::1038) has joined #ceph
[6:35] * i_m (~ivan.miro@2001:470:1f0b:1a08:3e97:eff:febd:a80c) Quit (Ping timeout: 480 seconds)
[6:37] * sarob (~sarob@2001:4998:effd:7801::1038) Quit (Remote host closed the connection)
[6:37] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[6:38] * yanzheng (~zhyan@134.134.139.76) Quit (Remote host closed the connection)
[6:39] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[6:40] * sarob (~sarob@2001:4998:effd:7801::1038) has joined #ceph
[6:41] * sarob_ (~sarob@2601:9:7080:13a:c819:b019:7a36:3e65) has joined #ceph
[6:41] * nwf (~nwf@67.62.51.95) Quit (Ping timeout: 480 seconds)
[6:41] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[6:42] * sarob (~sarob@2001:4998:effd:7801::1038) Quit (Read error: Connection reset by peer)
[6:55] * pingu (~christian@203-219-79-122.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[6:56] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[7:01] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[7:05] * sagelap (~sage@182.255.123.109) has joined #ceph
[7:05] * AfC (~andrew@182.255.122.166) has joined #ceph
[7:05] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Ping timeout: 480 seconds)
[7:09] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[7:21] * haomaiwang (~haomaiwan@118.187.35.10) Quit (Remote host closed the connection)
[7:22] * haomaiwang (~haomaiwan@216.157.85.168) has joined #ceph
[7:28] * nwf_ (~nwf@67.62.51.95) has joined #ceph
[7:29] * haomaiwa_ (~haomaiwan@211.155.113.187) has joined #ceph
[7:30] * haomaiw__ (~haomaiwan@117.79.232.187) has joined #ceph
[7:30] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[7:36] * haomaiwang (~haomaiwan@216.157.85.168) Quit (Ping timeout: 480 seconds)
[7:37] * haomaiwa_ (~haomaiwan@211.155.113.187) Quit (Ping timeout: 480 seconds)
[7:37] * sagelap (~sage@182.255.123.109) Quit (Ping timeout: 480 seconds)
[7:40] * AfC (~andrew@182.255.122.166) has joined #ceph
[7:40] * nwf_ (~nwf@67.62.51.95) Quit (Ping timeout: 480 seconds)
[7:42] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Ping timeout: 480 seconds)
[7:42] * haomaiw__ (~haomaiwan@117.79.232.187) Quit (Remote host closed the connection)
[7:43] * haomaiwang (~haomaiwan@118.187.35.10) has joined #ceph
[7:52] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[7:55] <aarontc> hey guys, does anyone know how to increase the timeout on ATA commands? sometimes Ceph stalls, and the VMs with RBD images attached as IDE disks all do controller resets, etc, which loses the write cache
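For what it's worth, the usual answer, assuming the guest kernel drives the disk through the SCSI layer (i.e. it shows up as /dev/sdX rather than the legacy /dev/hdX), is to raise the per-device command timeout from inside the guest:

    # inside the guest: raise the block-layer command timeout to 180 seconds for /dev/sda
    echo 180 > /sys/block/sda/device/timeout

That setting does not survive a reboot, so it is typically reapplied from rc.local or a udev rule; switching the disks to virtio also tends to behave better with RBD than IDE emulation does.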
[7:59] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[8:01] * mattbenjamin1 (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[8:16] * sarob_ (~sarob@2601:9:7080:13a:c819:b019:7a36:3e65) Quit (Remote host closed the connection)
[8:17] * sarob (~sarob@2001:4998:effd:7801::116c) has joined #ceph
[8:22] * sarob (~sarob@2001:4998:effd:7801::116c) Quit (Remote host closed the connection)
[8:22] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[8:28] * sarob_ (~sarob@2001:4998:effd:7801::116c) has joined #ceph
[8:30] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[8:31] * sarob_ (~sarob@2001:4998:effd:7801::116c) Quit (Remote host closed the connection)
[8:32] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:35] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[8:35] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Ping timeout: 480 seconds)
[8:35] * sarob (~sarob@2001:4998:effd:7801::1083) has joined #ceph
[8:37] * sarob (~sarob@2001:4998:effd:7801::1083) Quit (Remote host closed the connection)
[8:37] * i_m (~ivan.miro@2001:470:1f0b:1a08:3e97:eff:febd:a80c) has joined #ceph
[8:37] * sarob (~sarob@2001:4998:effd:7801::1083) has joined #ceph
[8:40] * AfC (~andrew@182.255.122.166) has joined #ceph
[8:42] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[8:43] * tobru (~quassel@2a02:41a:3999::94) Quit (Remote host closed the connection)
[8:44] * Sysadmin88 (~IceChat77@2.218.8.40) Quit (Read error: Connection reset by peer)
[8:45] * sarob (~sarob@2001:4998:effd:7801::1083) Quit (Ping timeout: 480 seconds)
[8:46] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[9:05] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[9:07] * mkoo (~mkoo@chello213047069005.1.13.vie.surfer.at) has joined #ceph
[9:08] * AfC (~andrew@182.255.122.166) has joined #ceph
[9:11] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[9:14] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[9:19] * mattt_ (~textual@94.236.7.190) has joined #ceph
[9:24] * gregorg (~Greg@78.155.152.6) Quit (Remote host closed the connection)
[9:30] * hjjg (~hg@p3EE315B6.dip0.t-ipconnect.de) has joined #ceph
[9:32] * cbob (~cbob@host-63-232-9-69.midco.net) Quit (Ping timeout: 480 seconds)
[9:33] * nwf_ (~nwf@67.62.51.95) has joined #ceph
[9:38] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Ping timeout: 480 seconds)
[9:40] * mattt_ (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[9:40] * mattt_ (~textual@94.236.7.190) has joined #ceph
[9:45] * nwf_ (~nwf@67.62.51.95) Quit (Remote host closed the connection)
[9:46] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[9:47] * Cube1 (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[9:47] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[9:52] * glambert (~glambert@37.157.50.80) Quit (Read error: Connection reset by peer)
[9:55] * nwf_ (~nwf@67.62.51.95) has joined #ceph
[9:55] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) has joined #ceph
[9:56] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[9:58] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[10:02] * Cube2 (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[10:02] * _matt_ is now known as matt_
[10:03] * Cube1 (~Cube@66-87-77-220.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[10:03] * Cube2 (~Cube@66-87-77-220.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[10:04] * nwf_ (~nwf@67.62.51.95) Quit (Remote host closed the connection)
[10:04] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[10:05] * nwf_ (~nwf@67.62.51.95) has joined #ceph
[10:08] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[10:08] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[10:08] * ChanServ sets mode +v andreask
[10:08] <sherry> is there any rule in the CRUSH map that I can define to migrate data from first pool to second pool when the first pool is full?
[10:09] <andreask> no, there isn't
[10:09] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[10:09] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:10] <sherry> andreask: thanks, so I have to change the weight when the pool is full?
[10:10] <andreask> sherry: you would typically add more osds
[10:11] * LeaChim (~LeaChim@host86-161-89-52.range86-161.btcentralplus.com) has joined #ceph
[10:13] <sherry> andreask: right, one more thing > in http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds the ruleset number is not unique, is it on purpose?
[10:15] <andreask> sherry: no, that is a mistake
[10:15] <sherry> thank you
[10:17] <andreask> yw
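For anyone reading along, a rough sketch of what that doc page intends, with each rule given its own ruleset number (rule and bucket names here are only illustrative):

    rule ssd {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take ssd
            step chooseleaf firstn 0 type host
            step emit
    }
    rule platter {
            ruleset 2
            type replicated
            min_size 1
            max_size 10
            step take platter
            step chooseleaf firstn 0 type host
            step emit
    }

A pool is then pointed at one of them with something like "ceph osd pool set <pool> crush_ruleset 1".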
[10:18] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:25] * i_m (~ivan.miro@2001:470:1f0b:1a08:3e97:eff:febd:a80c) Quit (Ping timeout: 480 seconds)
[10:25] * Cube1 (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[10:26] * AfC (~andrew@182.255.122.166) Quit (Quit: Leaving.)
[10:26] * Cube1 (~Cube@66-87-77-220.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[10:26] * Cube1 (~Cube@66-87-77-220.pools.spcsdns.net) has joined #ceph
[10:33] * Cube (~Cube@66-87-77-220.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[10:35] * garphyx is now known as garphyx`aw
[10:35] <ccourtaut> morning
[10:41] * nwf_ (~nwf@67.62.51.95) Quit (Ping timeout: 480 seconds)
[10:45] * nwf_ (~nwf@67.62.51.95) has joined #ceph
[10:45] * grifferz (~andy@bitfolk.com) Quit (Remote host closed the connection)
[10:46] * jbd_ (~jbd_@2001:41d0:52:a00::77) Quit (Ping timeout: 480 seconds)
[10:47] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) Quit (Quit: Ex-Chat)
[10:47] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:49] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) has joined #ceph
[10:56] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[11:06] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[11:18] * mancdaz_away is now known as mancdaz
[11:22] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[11:29] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[11:30] <loicd> ccourtaut: will you be at http://fosdem.org/ at the end of january ?
[11:31] <ccourtaut> loicd: of course
[11:31] <loicd> cool :-)
[11:31] <ccourtaut> loicd: from friday till sunday
[11:32] <loicd> and the ceph meetup ? http://www.meetup.com/Ceph-Brussels/events/152786002/ ?
[11:32] <loicd> joao: good morning
[11:32] <joao> morning loicd
[11:33] <loicd> will you be at FOSDEM ? Maybe you told me already and I forgot, I'm not good at remembering that kind of things ;-)
[11:33] <loicd> joao: ^
[11:33] <ccourtaut> loicd: unfortunately no, didn't know about it, i land in brussels early eve
[11:33] <joao> not sure yet
[11:33] <joao> totally forgot about fosdem
[11:34] <joao> let me try figuring that out today
[11:34] <loicd> ok
[11:34] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[11:34] * joao is eagerly waiting for dhl to arrive with his cubietruck
[11:34] <loicd> ccourtaut: you fly to brussels ? ?
[11:35] * loicd amazed, knowing it's 1h train from Paris ;-)
[11:35] <ccourtaut> loicd: nop, by train, my bad
[11:35] <ccourtaut> i know trains do not land :)
[11:35] <loicd> ah !
[11:35] <loicd> ccourtaut: most of them don't indeed ;-)
[11:35] <ccourtaut> XD
[11:35] <joao> ccourtaut, the right kind of trains do
[11:36] <loicd> ahah
[11:38] <loicd> wido: will you be @ fosdem ? It would be great to have you with us at the meetup too http://www.meetup.com/Ceph-Brussels/events/152786002/ :-)
[11:39] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[11:40] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:40] * KindTwo (KindOne@h126.1.40.162.dynamic.ip.windstream.net) has joined #ceph
[11:41] * KindTwo is now known as KindOne
[11:42] <joao> sweet
[11:42] <joao> my cubietruck just arrived
[11:43] <ccourtaut> joao: looks nice
[11:43] <ccourtaut> enjoy
[11:43] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Ping timeout: 480 seconds)
[11:48] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[11:54] * garphyx`aw is now known as garphy
[11:54] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Read error: Connection reset by peer)
[11:56] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[11:58] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:01] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Ping timeout: 480 seconds)
[12:02] * matt_ (~matt@ccpc-mwr.bath.ac.uk) Quit (Ping timeout: 480 seconds)
[12:11] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[12:31] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[12:33] * yolo1604 (~yolo1604@14.160.47.142) has joined #ceph
[12:33] * yolo1604 (~yolo1604@14.160.47.142) Quit ()
[12:34] * yolo1604 (~yolo1604@118.70.67.142) has joined #ceph
[12:35] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) Quit (Read error: Operation timed out)
[12:45] * sagelap (~sage@2001:388:a098:120:9932:ad65:6a5d:6d02) has joined #ceph
[12:49] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[12:52] * tobru (~quassel@2a02:41a:3999::94) Quit (Remote host closed the connection)
[13:07] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Read error: Operation timed out)
[13:23] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[13:23] * ChanServ sets mode +v andreask
[13:25] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[13:33] * allsystemsarego (~allsystem@5-12-241-225.residential.rdsnet.ro) has joined #ceph
[13:33] * erice (~erice@50.240.86.181) has joined #ceph
[13:41] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) has joined #ceph
[13:54] * madkiss (~madkiss@089144197147.atnat0006.highway.a1.net) has joined #ceph
[13:54] * mschiff (~mschiff@port-34493.pppoe.wtnet.de) has joined #ceph
[13:56] * garphy is now known as garphy`aw
[13:56] <loicd> The next Ceph meetup in Paris will be http://www.meetup.com/Ceph-in-Paris/events/158942372/
[13:58] <madkiss> uh.
[13:58] <madkiss> on a saturday?
[13:58] <madkiss> uh no, wait
[13:58] <madkiss> my bad.
[13:58] * mattt_ (~textual@94.236.7.190) Quit (Quit: Textual IRC Client: www.textualapp.com)
[14:05] * t0rn (~ssullivan@c-24-11-198-35.hsd1.mi.comcast.net) has joined #ceph
[14:07] * t0rn (~ssullivan@c-24-11-198-35.hsd1.mi.comcast.net) has left #ceph
[14:09] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[14:09] <loicd> madkiss: :-)
[14:10] <madkiss> i'd love to go there *sigh*
[14:10] <loicd> madkiss: what's the next Ceph event you will attend ?
[14:11] <madkiss> dunno yet
[14:12] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:15] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[14:18] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) Quit (Quit: Ex-Chat)
[14:19] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[14:22] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[14:23] * madkiss (~madkiss@089144197147.atnat0006.highway.a1.net) Quit (Ping timeout: 480 seconds)
[14:25] <cfreak200> is there some documentation about the 'perf dump' per osd metrics? I'd like to know which of those are gauges/counters/... Working on some proper munin plugins...
[14:27] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[14:28] * getup (~getup@gw.office.cyso.net) has joined #ceph
[14:30] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:31] <cfreak200> nvm found the perf schema...
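For anyone else writing collectors: the schema and the counters both come from the osd admin socket, roughly like this (the socket path is the default, adjust the osd id and path to your cluster):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf schema   # describes each counter (type etc.)
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump     # current values as JSON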
[14:34] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[14:34] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[14:40] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[14:44] * sroy (~sroy@208.88.110.46) has joined #ceph
[14:45] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) has joined #ceph
[14:50] * BillK (~BillK-OFT@124-148-103-108.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:00] * linuxkidd (~linuxkidd@cpe-066-057-019-145.nc.res.rr.com) has joined #ceph
[15:11] * sroy (~sroy@208.88.110.46) Quit (Ping timeout: 480 seconds)
[15:13] * blahnana (~bman@blahnana.com) Quit (Ping timeout: 480 seconds)
[15:14] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[15:30] * blahnana (~bman@blahnana.com) has joined #ceph
[15:37] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[15:40] * mattt_ (~textual@94.236.7.190) has joined #ceph
[15:45] * markbby (~Adium@c-75-73-240-50.hsd1.mn.comcast.net) has joined #ceph
[15:45] * bdonnahue (~tschneide@ool-18bda2d8.dyn.optonline.net) has joined #ceph
[15:46] * bjornar (~bjornar@ns3.uniweb.no) has joined #ceph
[15:46] * bdonnahue (~tschneide@ool-18bda2d8.dyn.optonline.net) has left #ceph
[15:48] * AfC (~andrew@2001:388:a098:120:2ad2:44ff:fe08:a4c) has joined #ceph
[15:48] * markbby1 (~Adium@168.94.245.3) has joined #ceph
[15:50] * markbby1 (~Adium@168.94.245.3) Quit ()
[15:50] * markbby1 (~Adium@168.94.245.3) has joined #ceph
[15:53] * markbby (~Adium@c-75-73-240-50.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[15:56] <cfreak200> ceph blog broken ? :/
[15:57] * sagelap (~sage@2001:388:a098:120:9932:ad65:6a5d:6d02) Quit (Ping timeout: 480 seconds)
[16:02] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[16:02] <alphe> hello everyone !
[16:02] * AfC (~andrew@2001:388:a098:120:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[16:03] <alphe> my problem today is that I have the same RBD image mounted on two separate proxy servers
[16:03] <alphe> if I add a new folder to the image mounted on node 1 it is not seen on node 2 ...
[16:03] <alphe> and vice versa
[16:04] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[16:04] <alphe> I have to umount unmap then remount remap to have the new data taken into consideration ...
[16:08] * thomnico (~thomnico@host.234.54.23.62.rev.coltfrance.com) Quit (Quit: Ex-Chat)
[16:09] * bjornar (~bjornar@ns3.uniweb.no) Quit (Ping timeout: 480 seconds)
[16:10] <alphe> I have a problem: I mapped the same rbd image on two different proxy servers, and changes to that image from one proxy are not reflected on the other until I unmount/unmap and remap the rbd image. anyone know how to fix that ?
[16:10] * vata (~vata@2607:fad8:4:6:3481:e44b:4b6a:5d98) has joined #ceph
[16:10] <cfreak200> alphe: I don't think an rbd image should ever be written to from two different places...
[16:12] <alphe> hum ...
[16:12] <alphe> in fact it is not writen from the second place
[16:13] <cfreak200> well if you mount it via rbd it's like mounting a disk
[16:13] <cfreak200> your OS will not recognize if you change the disk underneath it...
[16:13] <cfreak200> unless you flush caches and re-read the same address
[16:13] <alphe> this sux
[16:14] <cfreak200> what are you trying to do?
[16:14] <cfreak200> export via iSCIC ?
[16:14] <cfreak200> *iSCSI
[16:14] <alphe> nope
[16:14] <kraken> http://i.imgur.com/foEHo.gif
[16:14] <pmatulis> is that Spongebob?
[16:14] <alfredodeza> yes it is
[16:15] <alphe> pmatulis indeed
[16:16] <alphe> cfreak200 hum ... I have one proxy to feed data to the RBD image and the other proxy to serve content from the image
[16:16] <alphe> so I need those two mounts to be in sync
[16:17] <alphe> although from my feeder proxy I could share the mounted partition over nfs to the viewer proxy, I thought it would have been more efficient to have both proxies mount the image
[16:17] <cfreak200> alphe: looks like you really should use cephfs instead of RBD
[16:18] <alphe> cfreak200 cephfs is unstable and the folder tree disappears with no one able to tell me why
[16:19] <cfreak200> alphe: sorry can't help you there, I'm only using RBD for block devices (no shared devices) on my clusters
[16:19] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) Quit (Quit: Leaving.)
[16:19] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:19] <alphe> I guess this is why on top of RBD there is the radosgw layer and the openstack/s3-amazon ...
[16:21] <alphe> but i don't like radosgw, mainly because it has bugs too that make it impossible for your users to connect after an update
[16:21] <cfreak200> alphe: have you filed reports for those yet?
[16:21] <alphe> it has deployment bugs too; sometimes it auto-creates the pools you need to operate it, sometimes it doesn't create anything
[16:22] <alphe> cfreak200 those problems are experienced by others and known ... but solutions are not provided.
[16:23] <pmatulis> alphe: what are the bug numbers?
[16:23] * diegows (~diegows@200.68.116.185) has joined #ceph
[16:23] <alphe> pmatulis don't know, I searched the web and saw most of them were answered through the mailing list
[16:23] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[16:24] <pmatulis> alphe: well, if these things are important to you, then make sure bugs get filed
[16:24] <alphe> for the folder tree that disappears with cephfs, the reply was to try a patch to my kernel, which I had no time to test
[16:25] <pmatulis> alphe: otherwise they won't get fixed
[16:25] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has joined #ceph
[16:25] <alphe> thing is, those are important for everyone using ceph ... I mean, why can't I map an rbd image in two different places, feed it from one point and see live changes at the other point ?
[16:25] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[16:26] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[16:26] <cfreak200> alphe: well if you run a FS on top of rbd images and modify it on 2 machines it will just mess up.
[16:27] <cfreak200> doesnt matter if it's ceph or iSCSI or $SOMETHING
[16:27] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[16:29] <alphe> I see the point there, but then I have to re-share my RBD image through another technology ... nfs, samba, ftp or whatever
[16:29] <alphe> that is actually what the radosgw stack does
[16:30] <cfreak200> and if you can reproduce the radosgw issue file a bug maybe it's a very simple fix and you'll have it running fine very soon
[16:30] <cfreak200> if you dont file a bug you'll have to hope that someone will do it.
[16:30] <alphe> cfreak200 I can detail the radosgw prob
[16:30] <cfreak200> alphe: go ahead, file a bug.
[16:31] <alphe> in fact there are two main problems; one is solved easily but originates from a lack of documentation
[16:31] <cfreak200> alphe: i haven't checked but I'm sure there is a documentation type bug
[16:31] * tobru (~quassel@2a02:41a:3999::94) Quit (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.)
[16:31] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[16:32] <alphe> most windows clients for s3-amazon (there are like 5, so not plenty) need an SSL layer to send data to an s3-amazon-like cluster (website)
[16:32] <alphe> it is hard coded, so either you have that SSL layer active and working or you can't use them
[16:33] <cfreak200> well then put a proxy in front which just does SSL to the client and plaintext to localhost ?
[16:33] <alphe> the doc doesn't have the proper instructions to get the SSL layer working properly; the vhost part is missing
[16:33] <cfreak200> unless it's a requirement from the s3 spec I wouldnt call that a bug in ceph.
[16:34] <alphe> it is not a bug in ceph it is a missing part of the ceph documentation
[16:34] * grifferz (~andy@bitfolk.com) has joined #ceph
[16:34] <alphe> the vhost file isn't properly filled out in the explanation, so it leads to a non-working ssl layer
[16:34] <pmatulis> alphe: you can file a documentation bug
[16:35] <alphe> sure and I could edit the documentation too and update it as often as possible ...
[16:35] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has left #ceph
[16:35] <alphe> bucket management is a missing part of the doc too ...
[16:36] <alphe> how do you create a bucket in s3? it just says use your fav client and create a bucket
[16:36] <alphe> but this does not take into consideration quota settings and other fine tuning that can be done
[16:36] <cfreak200> alphe: file a simple bug which states it's missing. unless you pay someone to actually get ceph running how you like it, it is up to you to report bugs/missing features/... that's part of the trade-off between open source and a blackbox solution
[16:37] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[16:37] <alphe> cfreak200 but anyway neither s3 nor cephfs has been stable enough for my use, so I'm going with RBD, which is never stated as being mountable at only one point
[16:38] <alphe> it is logical though, but it is not mentioned
[16:39] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[16:41] <alphe> bugs or misuse ... simply put, you can say most of my problems were because I'm not used to deploying s3-amazon (who is ?) and that once you have a working ceph/s3-amazon you should not touch it anymore
[16:41] <alphe> just live happily with the current working version until you face a real big problem, if any ...
[16:42] <pmatulis> alphe: how do you edit the documentation?
[16:42] <alphe> pmatulis asking for the right to contribute ?
[16:43] <pmatulis> alphe: have you done that?
[16:43] <alphe> I mean, if I have the time to file bugs describing the documentation as it should be, then I should directly ask to make the changes to the doc; that would be faster and more efficient
[16:44] * Discovery (~Discovery@192.162.100.197) has joined #ceph
[16:44] * markbby1 (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[16:44] * markbby (~Adium@168.94.245.3) has joined #ceph
[16:44] <alphe> pmatulis I have no time for that, I'm too busy trying to make ceph work without needing a full wipe and reinstall every 2 weeks
[16:44] <cfreak200> alphe: it would be faster to just file a bug instead of complaining about it not being fixed
[16:46] <alphe> cfreak200 I am not complaining, I'm asking something; I got the reply I wanted and it makes sense, so it's a misuse on my part. as for the other technologies allowing access to or sharing of data, I tested them and they are not good enough; that is not a complaint ... sort of, but not really... it is a statement
[16:46] <alphe> based on months of experiments in the real world
[16:48] * bandrus (~Adium@108.246.12.107) has joined #ceph
[16:51] * Sp4rKy (~Sp4rKy@rennes1.dunnewind.net) has joined #ceph
[16:51] <Sp4rKy> morning
[16:54] <mikedawson> good morning
[16:55] <loicd> Sp4rKy: hey !
[16:55] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[16:55] <loicd> Sp4rKy: I'm curious to know how you are required to prove that kvm + ceph is a good combination. Is it about I/O speed ? Resilience ? Live migration ? ...
[16:56] <loicd> s/prove/demonstrate/
[16:56] <kraken> loicd meant to say: Sp4rKy: I'm curious to know how you are required to demonstrate that kvm + ceph is a good combination. Is it about I/O speed ? Resilience ? Live migration ? ...
[16:57] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[16:58] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[16:59] <ccourtaut> loicd: is it the standard behaviour that rgw is compiled by default now?
[16:59] * scuttlemonkey_ is now known as scuttlemonkey
[17:00] <loicd> ccourtaut: yes. Can't remember when / if it changed though.
[17:01] <ccourtaut> oh
[17:01] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[17:06] <Sp4rKy> loicd: hi :)
[17:06] <Sp4rKy> loicd: ok, the company I'm working for already uses kvm, with local storage
[17:06] <Sp4rKy> which causes many issues :)
[17:06] <loicd> like losing everything when the disk dies ?
[17:06] <Sp4rKy> for example
[17:07] <loicd> never a good thing ;)
[17:07] <Sp4rKy> or just needing a few minutes to restart the vm when the host reboots
[17:07] <Sp4rKy> so we have to find a solution, one of them could be to use distributed storage (understand: ceph), another to pay for vmware for example
[17:08] <loicd> the use case is therefore to reduce the risk of losing data, mostly, right ?
[17:08] <Sp4rKy> nope, to reduce the downtime of vms in case a node crashes
[17:09] <Sp4rKy> for the data loss issue, we have backups etc, that's a different issue
[17:09] <loicd> ok
[17:09] <Sp4rKy> mine is more about production service downtime
[17:11] <loicd> kvm + ceph => node crash => restart kvm on another machine is what you are after ?
[17:11] <Sp4rKy> yep, that and kvm + ceph => live migrate of kvm to another host
[17:11] <loicd> kvm in OpenStack or just kvm ?
[17:12] <loicd> or Cloudstack
[17:12] * loicd looks over his shoulder and sees wido frowning
[17:13] <Sp4rKy> loicd: right now just kvm
[17:14] <Sp4rKy> but kvm will probably be included in a opennebula/openstack infra
[17:16] <loicd> so the demonstration you are to deliver is a) running ceph on a 3 node cluster, b) run kvm on a node using a rbd provided device, c) live migrate the kvm from one node to another ?
[17:17] <Sp4rKy> yep, for the first one, and probably something like a) same with 3-10 nodes, b) power down one node, c) restart the vm(s) on one of the remaining nodes
[17:21] <loicd> you should not experience any trouble at all, I think. I'm a little envious because I did not try this myself. But plenty of people did.
[17:21] <loicd> For some reason I never found myself in need of live migration.
[17:24] <loicd> I only have two tips for you : a) use ceph-deploy and http://ceph.com/docs/master/start/quick-ceph-deploy/ to create the cluster, b) deactivate authentication http://ceph.com/docs/master/rados/operations/authentication/#disabling-cephx because it will simplify things for you
[17:25] * alram (~alram@38.122.20.226) has joined #ceph
[17:25] <loicd> which you can do either by editing the ceph.conf created by ceph-deploy new
[17:25] <loicd> or afterward by adding
[17:25] <loicd> auth supported = none
[17:25] <loicd> to [global]
[17:25] <loicd> on all ceph.conf
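In other words, a minimal sketch of the ceph.conf fragment loicd is describing (on recent releases the three explicit auth options shown as comments are the equivalent form; restart the daemons after changing it):

    [global]
        auth supported = none
        # equivalent explicit form on newer releases:
        # auth cluster required = none
        # auth service required = none
        # auth client required = none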
[17:26] <Sp4rKy> ok, thanks
[17:26] <Sp4rKy> I'll try ... not before next week I think
[17:26] <cfreak200> how is it that for such a "quick" demo everyone goes for those big projects like open*? Wouldn't ganeti be much easier or is it about the GUI ?
[17:27] <loicd> I don't know if there are instructions on how to manually setup kvm to use rbd + live migrate. But maybe leseb_ or joshd do.
[17:28] <toutour> maybe because people have open* infrastructure and just want to use ceph with it
[17:29] <Sp4rKy> cfreak200: ganeti is not much easier than opennebula
[17:29] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[17:29] <Sp4rKy> openstack is slightly more complicated to setup though
[17:29] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:30] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[17:30] <cfreak200> I have to admit to not having played with opennebula, I just found the ganeti approach very easy and a lot less "eye-candy"... ;)
[17:30] <leseb_> loicd: well with KVM it's just a matter of a keyring (virsh secret) (if you use cephx), then you just need to properly set up your xml file
[17:30] <leseb_> Sp4rKy: devstack is good for testing purpose
[17:31] <Sp4rKy> cfreak200: I used to setup & admin both of them
[17:31] <Sp4rKy> leseb_: right
[17:32] * Sysadmin88 (~IceChat77@2.218.8.40) has joined #ceph
[17:33] <loicd> leseb_: do you know of instructions that would allow to test live migration with ceph just by using kvm and not OpenStack ?
[17:33] * Steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[17:33] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[17:33] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[17:37] <leseb_> loicd: hum if you can boot VM in ceph with KVM then just configure both libvirt processes to listen, then run something like virsh migrate --live --p2p --domain instance-xxxxxx --desturi qemu+tcp://<your-host>/system --tunnelled
[17:37] <leseb_> loicd: this should work :)
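For reference, the "xml file" part is a network disk pointing at rbd; a minimal sketch, where the pool/image name, monitor host and secret uuid are placeholders, and the <auth> element is only needed when cephx is enabled (the key itself is loaded with virsh secret-define / virsh secret-set-value):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>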
[17:37] <loicd> Sp4rKy: are you allowed to use virsh or is it just kvm and your bare hands ?
[17:38] <loicd> !norris Sp4rKy
[17:38] <kraken> There is no Esc key on Sp4rKy' keyboard, because no one escapes Sp4rKy.
[17:41] * hjjg_ (~hg@p3EE329C5.dip0.t-ipconnect.de) has joined #ceph
[17:42] <Sp4rKy> loicd: virsh is fine
[17:43] * hjjg (~hg@p3EE315B6.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[17:44] * markbby (~Adium@168.94.245.3) Quit (Ping timeout: 480 seconds)
[17:45] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[17:47] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[17:53] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:54] * nhm (~nhm@184-97-148-136.mpls.qwest.net) has joined #ceph
[17:54] * ChanServ sets mode +o nhm
[17:54] * centauro (~franci@host104-3-dynamic.22-79-r.retail.telecomitalia.it) has joined #ceph
[17:56] * jackas21 (~ty2o0u00@bl4-219-171.dsl.telepac.pt) has joined #ceph
[17:58] * Cube1 is now known as Cube
[17:59] <alphe> <cfreak200> as for the s3-amazon radosgw over RBD, my problem is more conceptual in fact ... I hate having a box in a box in a box in a box ... and this is exactly what radosgw is all about .... each box is a smaller subset of the previous one; in the end you lose 4TB of disk space just because of formatting the different layers (that is not exactly true since buckets don't need to be formatted like RBD does, if I remember well)
[17:59] * ChanServ sets mode +o scuttlemonkey
[17:59] * scuttlemonkey changes topic to 'Latest stable (v0.72.0 "Emperor") -- http://ceph.com/get || dev channel #ceph-devel || New Ceph Use Cases: http://ceph.com/use-cases/ email community@ceph.com to add yours!'
[17:59] <alphe> scuttlemonkey latest stable is 0.72.2 no ?
[18:00] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:00] * scuttlemonkey changes topic to 'Latest stable (v0.72 "Emperor") -- http://ceph.com/get || dev channel #ceph-devel || New Ceph Use Cases: http://ceph.com/use-cases/ email community@ceph.com to add yours!'
[18:00] <scuttlemonkey> alphe: yeah, not sure who added the .0
[18:00] <scuttlemonkey> I usually leave it at the major release
[18:00] <alphe> not me that is certain :)
[18:00] * scuttlemonkey changes topic to 'Latest stable (v0.72.x "Emperor") -- http://ceph.com/get || dev channel #ceph-devel || New Ceph Use Cases: http://ceph.com/use-cases/ email community@ceph.com to add yours!'
[18:00] <scuttlemonkey> there
[18:00] <janos> aww yeah
[18:00] <alphe> it is good to get things certain from time to time :)
[18:01] <alphe> relaxing for the mind !
[18:01] <scuttlemonkey> hehe
[18:02] <cfreak200> is there a per-rbd-volume statistic about ongoing I/O ? I'd like to see the I/O distribution between my devices across the cluster.
[18:02] <alphe> I still don't get why there is no stable version of emperor for saucy, only for raring
[18:02] <alphe> cfreak200 there is
[18:02] <alphe> ceph osd stat ?
[18:02] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[18:03] * getup (~getup@gw.office.cyso.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[18:03] <alphe> rbd bench-write
[18:03] <cfreak200> o.O I don't want to produce any I/O, I just want counters ;)
[18:03] * gregsfortytwo (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[18:03] <mikedawson> cfreak200: you can enable rbd admin sockets and do 'perf dump'
[18:03] <alphe> rbd bench-write and sub options like --io-size
[18:04] <cfreak200> mikedawson: but that will only show per osd right ?
[18:05] <mikedawson> cfreak200: no, it shows the IO for the rbd volume and all its io across all osds
[18:05] <mikedawson> the rbd admin socket will be on the host where rbd is mounted. In my case the qemu host
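Roughly, the rbd/qemu admin socket mikedawson means is enabled with a [client] section on the host running qemu, and then queried the same way as an osd socket (the paths below are the commonly used defaults; the qemu process needs write access to the directory, and the pid/cctid parts of the filename are placeholders):

    [client]
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

    # then, on that host:
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.<cctid>.asok perf dump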
[18:05] <alphe> oh yes, a great question that everyone would like to see answered: "how do you rebalance data stored on your osds? (i.e. 1 osd is near full, others are around 50%)"
[18:05] * jackas21 was kicked from #ceph by scuttlemonkey
[18:05] * jackas21 (~ty2o0u00@bl4-219-171.dsl.telepac.pt) has joined #ceph
[18:06] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:07] * scuttlemonkey sets mode +b *!*ty2o0u00@bl4-219-171.dsl.telepac.pt
[18:07] * jackas21 was kicked from #ceph by scuttlemonkey
[18:07] * xmltok_ (~xmltok@216.103.134.250) has joined #ceph
[18:13] * hjjg_ (~hg@p3EE329C5.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[18:14] * thomnico (~thomnico@92.103.156.154) has joined #ceph
[18:15] * mschiff (~mschiff@port-34493.pppoe.wtnet.de) Quit (Remote host closed the connection)
[18:16] * centauro (~franci@host104-3-dynamic.22-79-r.retail.telecomitalia.it) Quit (Quit: Sto andando via)
[18:16] * KindTwo (KindOne@198.14.201.177) has joined #ceph
[18:18] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:18] * KindTwo is now known as KindOne
[18:19] * mattt_ (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[18:21] * xarses (~andreww@12.164.168.115) has joined #ceph
[18:21] * gregmark (~Adium@68.87.42.115) has joined #ceph
[18:22] * joshd (~joshd@2607:f298:a:607:5daa:8dd2:ce69:ffbe) Quit (Ping timeout: 480 seconds)
[18:23] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:24] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[18:25] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:27] * jtlebigot (~jlebigot@proxy.ovh.net) has joined #ceph
[18:27] <alphe> oh yes, a great question that everyone would like to see answered: "how do you rebalance data stored on your osds? (i.e. 1 osd is near full, others are around 50%)"
[18:27] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[18:28] <jtlebigot> Hi
[18:28] <alphe> hi
[18:28] * brambles (lechuck@s0.barwen.ch) Quit (Ping timeout: 480 seconds)
[18:29] <jtlebigot> When trying to unmap an image on linux 3.10.24 the unmap operation sometime "locks"
[18:29] <alphe> ?
[18:29] <alphe> what do you mean by locks ?
[18:30] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[18:30] <jtlebigot> the operation freezes
[18:30] <alphe> does it take forever to complete or does it directly say you don't have the rights ?
[18:30] <jtlebigot> forever
[18:30] <alphe> jtlebigot are you sure the image is properly unmounted ?
[18:30] <jtlebigot> The only option to stop the "unmap" is hard-rebooting the system
[18:31] <jtlebigot> it should, the operation right before is always a "umount()"
[18:31] <alphe> (if you reshare the image with nfs you should unmount the nfs bind mount point too)
[18:31] <jtlebigot> without the lazy flag
[18:32] <jtlebigot> my test case is: map 2 images, do a protected snapshot, unmount + unmap both
[18:32] <pmatulis> mikedawson: is enabling rbd admin sockets documented?
[18:32] <jtlebigot> using 0.72.1 with per pool authentication
[18:32] <mikedawson> pmatulis: do you use libvirt/qemu?
[18:32] * brambles (lechuck@s0.barwen.ch) has joined #ceph
[18:33] <pmatulis> mikedawson: yes
[18:33] <mikedawson> pmatulis: openstack?
[18:33] <alphe> jtlebigot hard to tell what is going on ...
[18:33] <pmatulis> mikedawson: no
[18:33] * joshd (~joshd@2607:f298:a:607:a135:b4f3:420c:2a67) has joined #ceph
[18:33] <mikedawson> pmatulis: ok. what distro?
[18:33] <pmatulis> mikedawson: 'buntu
[18:33] <alphe> jtlebigot basically you could shoot the rbd kernel module ..
[18:34] <alphe> lsmod rmmod modprobe
[18:34] <alphe> sequence
[18:34] <mikedawson> pmatulis: ok, I'll try to find one of the old emails I've sent on the topic. It can be a bit challenging
[18:34] <jtlebigot> alphe: well... it's a monolithic kernel :(
[18:35] <alphe> but that is a bad thing to do ... I would say try another, more recent kernel around 3.11 and see if your problem is fixed
[18:35] <pmatulis> mikedawson: i think i did see something on the list but it looked fragmented/incomplete
[18:35] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:35] <alphe> jtlebigot ... linux is monolithic and modular ...
[18:35] <alphe> unless you have hard-compiled all the drivers you need into your kernel, which is a bad idea...
[18:36] <jtlebigot> sorry, I meant monolithic build. modules are not enabled for security purposes
[18:36] <pmatulis> mikedawson: would be great if it could be gathered up. i could test and submit everything to the docs
[18:36] <alphe> jtlebigot then you lose flexibility ...
[18:36] <alphe> ok so no modules shooting ...
[18:36] <alphe> then you can still try a more recent kernel
[18:36] <mikedawson> pmatulis: I've been meaning to start blogging on some of these topics that I've helped others with multiple times
[18:37] * ircolle (~Adium@2601:1:8380:2d9:24c7:72d6:a7bf:4e35) has joined #ceph
[18:37] <jtlebigot> Even though I can use neither 3.11 nor 3.12 in production, I'll give it a try to narrow down the issue
[18:37] <alphe> recompiling the whole thing ... is a long task
[18:37] <pmatulis> mikedawson: yeah, the time factor, i understand
[18:37] <jtlebigot> getting used to it ;)
[18:37] <alphe> yeah probably
[18:37] <alphe> but it is still a day or two of work
[18:38] <alphe> and then it's not certain that will correct your problem
[18:38] <jtlebigot> worth trying anyway.
[18:39] <jtlebigot> Do you have any pointers to look at in the module code ? I'll try to investigate there too as it compiles
[18:39] * mancdaz is now known as mancdaz_away
[18:39] <mikedawson> pmatulis: Here is one thread that I tried to explain the process http://www.spinics.net/lists/ceph-users/msg06111.html
[18:40] <alphe> jtlebigot do you have an nfs export running for that partition ?
[18:41] <alphe> in sebastien han's awesome docs it says Warning: Don't use the rbd kernel driver on the osd server. Perhaps it will freeze the rbd client and your osd server.
[18:41] * bjornar (~bjornar@ti0099a340-dhcp0395.bb.online.no) has joined #ceph
[18:41] <alphe> that is maybe your problem no ?
[18:41] <jtlebigot> alphe: nope, only map/mount ext4 umount unmap
[18:42] <alphe> dmesg says anything in regard to a possible rbd kernel module problem ?
[18:43] * gregsfortytwo1 (~Adium@2607:f298:a:607:607c:463f:25ad:d43d) Quit (Ping timeout: 480 seconds)
[18:43] <jtlebigot> dmesg | grep rbd --> nothing
[18:43] <jtlebigot> forgot to mention: this only occurs occasionally, the test scenario always being the same
[18:43] <alphe> ok and just a tail dmesg shows what ?
[18:44] <alphe> jtlebigot can be due to lags ...
[18:44] <jtlebigot> sorry, ran it on the wrong machine
[18:44] <alphe> on lan lags is odd but who knows ...
[18:44] * ldurnez (~ldurnez@proxy.ovh.net) has joined #ceph
[18:46] <jtlebigot> alphe, I'll try to reproduce it with dmesg tail
[18:46] * angdraug (~angdraug@12.164.168.115) has joined #ceph
[18:47] * bdonnahue (~tschneide@ool-18bda2d8.dyn.optonline.net) has joined #ceph
[18:47] <alphe> jtlebigot yeah, but if it is intermittent it can be due to some external factors
[18:48] <bdonnahue> how do user permissions work on ceph
[18:48] <bdonnahue> ie if i create an rbd will it have the same group, user, and anaonymous permissions that a linux filesystem would have
[18:49] <jtlebigot> alphe: got this
[18:49] <jtlebigot> [<ffffffff816a260c>] rbd_osd_req_format_write+0x5c/0x90
[18:49] <jtlebigot> [<ffffffff816a3750>] rbd_img_request_fill+0xb0/0x930
[18:49] <jtlebigot> [<ffffffff816a6021>] rbd_request_fn+0x1c1/0x2a0
[18:50] * dpippenger (~riven@cpe-198-72-154-134.socal.res.rr.com) has joined #ceph
[18:50] <jtlebigot> and this
[18:50] <jtlebigot> [<ffffffff816a260c>] rbd_osd_req_format_write+0x5c/0x90
[18:50] <jtlebigot> [<ffffffff816a3750>] rbd_img_request_fill+0xb0/0x930
[18:50] <jtlebigot> [<ffffffff816a6021>] rbd_request_fn+0x1c1/0x2a0
[18:51] <jtlebigot> the first one is related to an unmap while the second one is related to a map
[18:53] <xmltok_> is there any documentation on the stats from the admin socket? like what exactly avgcount is for latency and the sum?
[18:53] * houkouonchi-home (~linux@66-215-209-207.dhcp.rvsd.ca.charter.com) has joined #ceph
[18:56] * Underbyte (~jerrad@pat-global.macpractice.net) has joined #ceph
[18:57] <alphe> jtlebigot seems the same to me, and probably the lock didn't show up
[18:59] <alphe> xmltok I asked google and no real doc exists; it is mentioned in several ways but that is all
[19:00] <jtlebigot> alphe: Gonna try to import these pending fixes, just noticed they are not yet mainline: git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git for-stable-3.10.24
[19:00] <alphe> ok
[19:00] <jtlebigot> thanks for the help
[19:00] <alphe> good luck and hope that fixes your problems !
[19:00] <alphe> jtlebigot I did little ...
[19:03] * nhm (~nhm@184-97-148-136.mpls.qwest.net) Quit (Read error: No route to host)
[19:04] <mikedawson> xmltok_: there isn't much documentation at all. I will tell you that if you are looking for poorly performing hardware, focus on subop_latency.
[19:06] * Sysadmin88 (~IceChat77@2.218.8.40) Quit (Read error: Connection reset by peer)
[19:06] <mikedawson> xmltok_: in graphite I watch 10s samples of "alias(divideSeries(perSecond(ceph.osd.0.osd.subop_latency_sum), perSecond(ceph.osd.0.osd.subop_latency_avgcount)), 'Average Latency')",
[19:06] <mikedawson> mikedawson: and "alias(dashed(divideSeries(perSecond(ceph.osd.0.filestore.journal_latency_sum), perSecond(ceph.osd.0.filestore.journal_latency_avgcount))), 'Avg Journal Latency')",
[19:06] * bdonnahue (~tschneide@ool-18bda2d8.dyn.optonline.net) has left #ceph
[19:07] * thomnico (~thomnico@92.103.156.154) Quit (Quit: Ex-Chat)
[19:07] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[19:07] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[19:08] <mikedawson> xmltok_: graph that data for each osd. It will let you see if you have slow journals and/or slow individual osds.
[19:09] <mikedawson> xmltok_: avgcount is a counter of the cumulative number of operations. Sum is a counter of the cumulative latency for those operations.
[19:10] <xmltok_> excellent
[19:10] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[19:11] <xmltok_> unfortunately the one thing opentsdb is missing is all of the graphite series math, but knowing what the avgcount/sum is helps
[19:12] <xmltok_> i will probably just do that math in my collector and add it as avg_latency
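For reference, since both counters only ever grow, "that math" works out to sampling them periodically and dividing the deltas: average latency over an interval = (sum_now - sum_prev) / (avgcount_now - avgcount_prev), which is what the divideSeries(perSecond(...sum), perSecond(...avgcount)) expressions above compute.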
[19:13] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:13] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[19:13] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit ()
[19:14] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[19:14] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit ()
[19:15] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[19:15] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:21] * sarob (~sarob@2601:9:7080:13a:c9aa:71d6:bcff:a15d) has joined #ceph
[19:23] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[19:25] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[19:28] * LeaChim (~LeaChim@host86-161-89-52.range86-161.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[19:29] * sarob (~sarob@2601:9:7080:13a:c9aa:71d6:bcff:a15d) Quit (Ping timeout: 480 seconds)
[19:34] * ScOut3R (~scout3r@54009895.dsl.pool.telekom.hu) has joined #ceph
[19:36] * LeaChim (~LeaChim@host86-174-30-7.range86-174.btcentralplus.com) has joined #ceph
[19:37] * jtlebigot (~jlebigot@proxy.ovh.net) Quit (Quit: Leaving.)
[19:37] * jtlebigot (~jlebigot@proxy.ovh.net) has joined #ceph
[19:42] <jtlebigot> alphe: just imported fixes for 3.10.y from ceph upstream, looks fine after 50 iterations :)
[19:42] <jtlebigot> I will leave the script for the night
[19:44] * jtlebigot (~jlebigot@proxy.ovh.net) Quit (Quit: Leaving.)
[19:45] * ScOut3R_ (~scout3r@54009895.dsl.pool.telekom.hu) has joined #ceph
[19:45] * ScOut3R (~scout3r@54009895.dsl.pool.telekom.hu) Quit (Read error: Connection reset by peer)
[19:50] <alphe> oh yes, a great question that everyone would like to see answered: "how do you rebalance data stored on your osds? (i.e. 1 osd is near full, others are around 50%)"
[19:59] * Pedras (~Adium@216.207.42.134) has joined #ceph
[20:00] * nwat (~textual@eduroam-227-220.ucsc.edu) has joined #ceph
[20:01] * ldurnez (~ldurnez@proxy.ovh.net) Quit (Quit: Leaving.)
[20:10] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) has joined #ceph
[20:12] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[20:12] * geraintjones (~geraint@208.72.139.54) Quit (Read error: Connection reset by peer)
[20:14] * geraintjones (~geraint@208.72.139.54) has joined #ceph
[20:28] * markbby (~Adium@168.94.245.4) has joined #ceph
[20:31] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Ping timeout: 480 seconds)
[20:39] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) Quit (Quit: This computer has gone to sleep)
[20:43] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) has joined #ceph
[20:46] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[20:47] * nhm (~nhm@184-97-148-136.mpls.qwest.net) has joined #ceph
[20:47] * ChanServ sets mode +o nhm
[20:56] * yolo1604 (~yolo1604@118.70.67.142) Quit (Ping timeout: 480 seconds)
[20:56] * yolo1604 (~yolo1604@117.7.237.75) has joined #ceph
[20:58] * Guyou (~bonnefil@mrb31-1-88-184-0-166.fbx.proxad.net) has joined #ceph
[21:04] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[21:12] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[21:16] * Pedras1 (~Adium@216.207.42.132) has joined #ceph
[21:16] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) has joined #ceph
[21:17] * Discovery (~Discovery@192.162.100.197) Quit (Read error: Connection reset by peer)
[21:17] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) Quit ()
[21:20] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[21:21] * Pedras (~Adium@216.207.42.134) Quit (Ping timeout: 480 seconds)
[21:22] * diegows (~diegows@200.68.116.185) Quit (Ping timeout: 480 seconds)
[21:23] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:27] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[21:27] * markbby (~Adium@168.94.245.4) has joined #ceph
[21:28] * Discovery (~Discovery@109.235.55.67) has joined #ceph
[21:28] * i_m (~ivan.miro@2001:470:1f0b:1a08:3e97:eff:febd:a80c) has joined #ceph
[21:39] * DarkAceZ (~BillyMays@50-32-40-238.drr01.hrbg.pa.frontiernet.net) Quit (Read error: Operation timed out)
[21:39] * erice (~erice@50.240.86.181) Quit (Ping timeout: 480 seconds)
[21:40] * DarkAceZ (~BillyMays@50-32-34-24.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[21:47] * JoeGruher (~JoeGruher@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[21:48] <JoeGruher> is there documentation for osd bench somewhere? i'm just looking for a basic description of what it does, what arguments it takes, is it hazardous to data on the OSD, etc. can't seem to google up much.
[21:48] * bdonnahue (~tschneide@ool-18bda2d8.dyn.optonline.net) has joined #ceph
[21:49] <bdonnahue> can anyone help me understand ceph authentication / permissions
[21:49] * erice (~erice@50.240.86.181) has joined #ceph
[21:49] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Read error: Connection reset by peer)
[21:50] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[21:52] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[21:52] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:59] * sarob_ (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:00] * dmsimard (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[22:00] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[22:00] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[22:01] <JoeGruher> If my RBD is size 1073741824k and I have 41114113k used, and 41114113/1073741824 is .038 or 3.8%, why does the %USED in "ceph df" show 0.36%? seems like the decimal point is in the wrong place.
[22:04] <JoeGruher> oh duh because it is the pool size and not the rbd size... obviously i took too much vacation last week
[22:12] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) has joined #ceph
[22:12] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[22:13] * bandrus (~Adium@108.246.12.107) Quit (Read error: No route to host)
[22:13] * JoeGruher (~JoeGruher@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[22:13] * markbby (~Adium@168.94.245.4) has joined #ceph
[22:15] * allsystemsarego (~allsystem@5-12-241-225.residential.rdsnet.ro) Quit (Quit: Leaving)
[22:16] * bandrus (~Adium@108.246.12.107) has joined #ceph
[22:17] * gdavis33 (~gdavis@38.122.12.254) has joined #ceph
[22:17] * gdavis33 (~gdavis@38.122.12.254) has left #ceph
[22:20] * nwat (~textual@eduroam-227-220.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:22] * fireD (~fireD@93-139-129-73.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[22:26] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[22:27] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[22:28] * markbby (~Adium@168.94.245.4) has joined #ceph
[22:32] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[22:32] * mschiff (~mschiff@port-34493.pppoe.wtnet.de) has joined #ceph
[22:34] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[22:34] * ChanServ sets mode +v andreask
[22:36] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[22:36] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[22:37] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[22:44] * markbby (~Adium@168.94.245.4) has joined #ceph
[22:46] * sagelap (~sage@2001:388:a098:120:f0b7:7b25:30f1:a52f) has joined #ceph
[22:46] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[22:55] * gregsfortytwo (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[23:01] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[23:04] * mjeanson_ (~mjeanson@bell.multivax.ca) has joined #ceph
[23:04] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Read error: Connection reset by peer)
[23:04] * sarob_ (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[23:04] * mjeanson_ is now known as mjeanson
[23:07] * kaizh (~oftc-webi@128-107-239-234.cisco.com) has joined #ceph
[23:09] <loicd> Sp4rKy: are you coming to FOSDEM this year ?
[23:09] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:13] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:15] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[23:17] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[23:18] * bjornar (~bjornar@ti0099a340-dhcp0395.bb.online.no) Quit (Ping timeout: 480 seconds)
[23:21] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[23:24] * vata (~vata@2607:fad8:4:6:3481:e44b:4b6a:5d98) Quit (Quit: Leaving.)
[23:26] * BillK (~BillK-OFT@124-148-103-108.dyn.iinet.net.au) has joined #ceph
[23:27] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[23:28] * markbby1 (~Adium@168.94.245.4) has joined #ceph
[23:37] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[23:37] * alram (~alram@38.122.20.226) has joined #ceph
[23:38] * gregsfortytwo (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[23:39] * markbby1 (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[23:40] * markbby (~Adium@168.94.245.4) has joined #ceph
[23:52] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:57] * jlogan1 (~Thunderbi@2600:c00:3010:1:1::40) Quit (Quit: jlogan1)
[23:58] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[23:58] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.