#ceph IRC Log


IRC Log for 2014-03-11

Timestamps are in GMT/BST.

[0:05] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[0:07] * xarses (~andreww@12.164.168.117) has joined #ceph
[0:08] * clayb (~kvirc@199.172.169.97) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[0:12] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[0:27] * yanzheng (~zhyan@134.134.137.75) has joined #ceph
[0:33] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[0:33] * ChanServ sets mode +v andreask
[0:43] * sbadia (~sbadia@yasaw.net) Quit (Remote host closed the connection)
[0:44] * sbadia (~sbadia@yasaw.net) has joined #ceph
[0:44] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) Quit (Ping timeout: 480 seconds)
[0:46] * yanzheng (~zhyan@134.134.137.75) Quit (Remote host closed the connection)
[0:51] * dmsimard1 (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[0:55] * sjustwork (~sam@2607:f298:a:607:89e2:6c6a:f4b8:3977) Quit (Quit: Leaving.)
[1:01] * danieagle (~Daniel@179.182.144.28) Quit (Quit: Muito Obrigado por Tudo! :-))
[1:11] * jwillem (~jwillem@thuiscomputer.xs4all.nl) Quit (Ping timeout: 480 seconds)
[1:13] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[1:16] * jwillem (~jwillem@thuiscomputer.xs4all.nl) has joined #ceph
[1:17] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[1:20] * kaizh (~kaizh@128-107-239-234.cisco.com) Quit (Remote host closed the connection)
[1:27] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[1:28] * sarob (~sarob@2001:4998:effd:600:7132:21be:778d:6f6f) Quit (Remote host closed the connection)
[1:28] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:31] * jwillem (~jwillem@thuiscomputer.xs4all.nl) Quit (Ping timeout: 480 seconds)
[1:33] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Read error: Operation timed out)
[1:33] * rmoe_ (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:34] * jwillem (~jwillem@thuiscomputer.xs4all.nl) has joined #ceph
[1:37] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: Leaving.)
[1:38] * richard-gs (~yguo@206.173.10.4.ptr.us.xo.net) has joined #ceph
[1:41] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:43] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[1:47] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[1:51] * meeh (~meeh@193.150.121.66) Quit (Read error: Operation timed out)
[1:53] * meeh (~meeh@193.150.121.66) has joined #ceph
[1:54] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) has joined #ceph
[1:57] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:57] * jeff-YF_ (~jeffyf@67.23.123.228) has joined #ceph
[2:02] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[2:02] * jeff-YF_ is now known as jeff-YF
[2:03] <richard-gs> hi everyone, I have a strange issue with osds
[2:03] <richard-gs> I can start the osds through `service ceph start` but when I issue `ceph -s`, it reports the osd's are down
[2:04] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[2:07] <richard-gs> this is after a crash and attempted recovery of ceph
[2:14] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[2:16] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:19] * wogri_ (~wolf@nix.wogri.at) has joined #ceph
[2:19] * wogri (~wolf@nix.wogri.at) Quit (Read error: Connection reset by peer)
[2:21] * jwillem (~jwillem@thuiscomputer.xs4all.nl) Quit (Ping timeout: 480 seconds)
[2:26] * jwillem (~jwillem@thuiscomputer.xs4all.nl) has joined #ceph
[2:29] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[2:30] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[2:30] * darkfaded (~floh@88.79.251.60) Quit (Read error: Connection reset by peer)
[2:38] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:43] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[2:45] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[2:48] * yanzheng (~zhyan@134.134.137.73) has joined #ceph
[2:48] * tserong (~tserong@203-57-208-132.dyn.iinet.net.au) Quit (Quit: Leaving)
[2:50] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[2:53] * glzhao (~glzhao@220.181.11.232) has joined #ceph
[2:53] * KevinPerks1 (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[2:54] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Read error: Connection reset by peer)
[2:55] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:56] * tserong (~tserong@203-57-208-132.dyn.iinet.net.au) has joined #ceph
[2:57] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[3:01] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:01] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[3:02] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:03] * darkfader (~floh@88.79.251.60) has joined #ceph
[3:04] * erkules_ (~erkules@port-92-193-54-111.dynamic.qsc.de) has joined #ceph
[3:05] * Siva (~sivat@117.192.38.27) has joined #ceph
[3:05] * Siva_ (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[3:07] * BillK (~BillK-OFT@124-148-93-148.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[3:08] * BillK (~BillK-OFT@58-7-181-232.dyn.iinet.net.au) has joined #ceph
[3:11] * bandrus (~Adium@adsl-75-5-250-121.dsl.scrm01.sbcglobal.net) Quit (Quit: Leaving.)
[3:11] * erkules (~erkules@port-92-193-5-143.dynamic.qsc.de) Quit (Ping timeout: 480 seconds)
[3:11] * jwillem (~jwillem@thuiscomputer.xs4all.nl) Quit (Ping timeout: 480 seconds)
[3:13] * Siva (~sivat@117.192.38.27) Quit (Ping timeout: 480 seconds)
[3:13] * Siva_ is now known as Siva
[3:16] * jwillem (~jwillem@thuiscomputer.xs4all.nl) has joined #ceph
[3:20] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[3:23] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[3:27] * shang (~ShangWu@175.41.48.77) has joined #ceph
[3:28] * Cnidus (~cnidus@216.129.126.126) Quit (Quit: Leaving.)
[3:33] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:34] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:41] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:43] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:44] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[3:48] <richard-gs> is there any way to recover when all you have is the osd?
[3:50] <dmick> richard-gs: scrolled back up and saw you were asking about ceph -s showing no osds; is this the same question?
[3:50] * Cube (~Cube@66-87-64-6.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[3:50] <richard-gs> yes
[3:51] <dmick> so, first off, are the osd processes actually running?
[3:51] <richard-gs> yes
[3:51] <dmick> (i.e. not just started, and then dead)
[3:51] <richard-gs> well, I see it with ps and the logs are continually updated
[3:52] <dmick> ok. and ceph -s still shows them all as down
[3:52] <richard-gs> correct
[3:52] <dmick> (and presumably out)
[3:52] <dmick> how many mons, how many osds, how many hosts?
[3:52] <richard-gs> 2/2/2
[3:53] <dmick> 2 mons is not great, although it'll work until either goes down
[3:53] <dmick> you want 1 or 3
[3:53] <dmick> but leave that aside for the moment; so one daemon per host, I assume?
[3:53] <richard-gs> yes
[3:53] <dmick> (of each type). ok
[3:54] * Cube (~Cube@66-87-64-150.pools.spcsdns.net) has joined #ceph
[3:54] <dmick> can you pastebin the output of ceph -s and ceph osd dump?
[3:54] <richard-gs> sure, 1 sec
[3:55] <richard-gs> http://pastebin.com/ELHbuiKe
[3:56] <dmick> what are the osds logging?
[3:57] * pawel_v (~vps@c-67-169-176-2.hsd1.ca.comcast.net) has joined #ceph
[3:58] <richard-gs> this is since last restart of the osd: http://pastebin.com/idPs86fk
[3:59] <dmick> so at one point there were 4 osds
[3:59] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[3:59] * yguang11 (~yguang11@2406:2000:ef96:e:a453:57de:7900:7069) Quit (Remote host closed the connection)
[3:59] <pawel_v> Yeah (I'm with Richard)
[4:00] <pawel_v> dmick: after that re-creating cluster attempt, we created 4 OSDs, and then deleted the first 2
[4:00] <richard-gs> osd.0 and osd.1 were inserted into the crushmap as dummy items since it seemed to keep reverting the entries to osd.0 and osd.1
[4:00] * yguang11 (~yguang11@2406:2000:ef96:e:4b5:8141:98ce:1570) has joined #ceph
[4:00] <richard-gs> osd.2 and osd.3 were the ones with data on them
[4:01] <dmick> hm. so what's in crush now? ceph osd getcrushmap -o /tmp/crush; crushtool -d /tmp/crush; paste the output?
[4:03] <pawel_v> http://pastebin.com/fxCuR88x
[4:05] <dmick> that looks okayish
[4:06] <dmick> if the mons think the osds are down, it seems like the osds can't talk to them
[4:06] <dmick> ceph.conf on both machines has the right mon information (presumably in mon_initial_members)?
[4:07] <pawel_v> no mon_initial_members
[4:07] <pawel_v> but otherwise - should be so
[4:07] <dmick> mon_host then?
[4:07] <pawel_v> I can add the initial members and re-try
[4:07] <pawel_v> [mon.c]
[4:07] <pawel_v> host = ip-10-16-20-11
[4:07] <pawel_v> mon addr = 10.16.20.11:6789
[4:07] <pawel_v> [mon.d]
[4:07] <pawel_v> host = ip-10-16-43-12
[4:07] <pawel_v> mon addr = 10.16.43.12:6789
[4:07] <pawel_v> like this
[4:08] <dmick> oh, ooooold conf style
[4:09] <pawel_v> should I try as [mon], or initial members ?
[4:09] <pawel_v> mons sure can find each other, so...
[4:13] * jeff-YF (~jeffyf@67.23.123.228) Quit (Quit: jeff-YF)
[4:14] <pawel_v> OSD.3 again is saying:
[4:14] <pawel_v> 2014-03-11 03:14:33.612676 7f4852993700 0 -- 0.0.0.0:6801/28403 >> 10.16.20.11:6801/17802 pipe(0xe9b0000 sd=24 :46232 s=1 pgs=0 cs=0 l=0 c=0x361ca40).connect claims to be 10.16.20.11:6801/8017 not 10.16.20.11:6801/17802 - wrong node!
[4:16] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[4:16] <pawel_v> OSD.2 doesn't say anything (new)
[4:16] <pawel_v> they both start up, and then just sit there
[4:17] <dmick> yeah, I don't know. I'd be tempted to start one of the osds in the foreground with some debug flags turned on I guess
[4:17] <dmick> maybe -f --debug-osd=30
[4:17] <pawel_v> ok
[4:17] <dmick> see if it's more forthcoming
[4:17] <dmick> (plus whatever else it normally gets; --id at leass)
[4:17] <dmick> *least)
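
A sketch of the invocation dmick is describing, assuming osd.3 and the default cluster name; --debug-ms is an extra assumption here, added because a messenger-level trace is usually what shows whether the daemon can reach the monitors:

    # stop the init-managed copy first, then run one OSD in the foreground with verbose logging
    service ceph stop osd.3
    ceph-osd -f --id 3 --debug-osd=30 --debug-ms=1
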
[4:19] <pawel_v> 2014-03-11 03:19:03.474273 7f42f2b467c0 10 osd.3 8593 done with init, starting boot process
[4:19] <pawel_v> 2014-03-11 03:19:03.474285 7f42f2b467c0 10 osd.3 8593 start_boot - have maps 8090..8593
[4:19] <pawel_v> 2014-03-11 03:19:03.477021 7f42dbc68700 10 osd.3 8593 _maybe_boot mon has osdmaps 1..8566
[4:20] <pawel_v> 2014-03-11 03:19:15.525140 7f42d8c62700 30 osd.3 8593 heartbeat_entry woke up
[4:20] <pawel_v> 2014-03-11 03:19:15.525164 7f42d8c62700 30 osd.3 8593 heartbeat
[4:20] <pawel_v> 2014-03-11 03:19:15.525215 7f42d8c62700 30 osd.3 8593 heartbeat checking stats
[4:20] <pawel_v> 2014-03-11 03:19:15.525239 7f42d8c62700 20 osd.3 8593 update_osd_stat osd_stat(41882 MB used, 458 GB avail, 499 GB total, peers []/[] op hist [])
[4:20] <pawel_v> 2014-03-11 03:19:15.525250 7f42d8c62700 5 osd.3 8593 heartbeat: osd_stat(41882 MB used, 458 GB avail, 499 GB total, peers []/[] op hist [])
[4:20] <pawel_v> 2014-03-11 03:19:15.525256 7f42d8c62700 30 osd.3 8593 heartbeat check
[4:20] <pawel_v> 2014-03-11 03:19:15.525261 7f42d8c62700 30 osd.3 8593 heartbeat lonely?
[4:20] <pawel_v> 2014-03-11 03:19:15.525264 7f42d8c62700 30 osd.3 8593 heartbeat done
[4:20] <dmick> don't post it all here; pastebin more than 2-3 lines
[4:20] <dmick> but ok
[4:20] <pawel_v> yeah, sorry
[4:20] <pawel_v> but I don't see anything else that's relevant
[4:20] <pawel_v> I can pastebin the whole log
[4:20] <pawel_v> but it's mostly PG rants
[4:21] <dmick> what release is this?
[4:21] <pawel_v> 72.2
[4:22] <dmick> it feels like some kind of basic network connectivity issue
[4:22] <pawel_v> it might as well be
[4:22] <pawel_v> there is just no indication as to where to dig :(
[4:22] <pawel_v> I've stracted it once
[4:22] <dmick> hm, actually
[4:22] <pawel_v> it does send stuff someplace
[4:23] <pawel_v> **straced
[4:23] <dmick> in the osd dump
[4:23] <dmick> last_clean_interval [0,0) :/0 :/0 :/0 :/0 exists
[4:23] <dmick> those :/0's are supposed to be peer OSD IPs
[4:23] <dmick> consistent with the osds not finding each other
[4:23] <pawel_v> OSD would register with MON upon start-up, right ?
[4:23] <dmick> yeah
[4:23] <pawel_v> I mean, I would imagine
[4:24] <pawel_v> you want me to strace osd ?
[4:24] <pawel_v> And grep for sends ?
[4:24] <pawel_v> is it UDP or TCP ?
[4:24] <dmick> tcp
[4:24] <dmick> but
[4:24] <dmick> there's a lot of communication
[4:24] <pawel_v> not if they can't talk to each other :)
[4:24] <pawel_v> anyway, let me try this asap, and just see where is it sending everything to
[4:25] <pawel_v> but then, if it's TCP, and it's connecting....
[4:27] <dmick> no apparmor/selinux/iptables getting in the way?..
[4:28] <pawel_v> wasn't before, and none of that stuff was touched
[4:32] <pawel_v> I see it create a socket of (PF_NETLINK,SOCK_RAW)
[4:32] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:32] <pawel_v> Then a bunch of TCP sockets which it binds to
[4:33] <pawel_v> then PF_FILE,SOCK_STREAM for admin
[4:33] <pawel_v> but no sockets that it then even tried to connect
[4:33] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[4:34] * wusui (~Warren@38.122.20.226) Quit (Read error: Connection reset by peer)
[4:34] * wusui (~Warren@2607:f298:a:607:c568:4757:5bf1:b42f) has joined #ceph
[4:34] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Remote host closed the connection)
[4:39] <pawel_v> ugh, forgot -f
[4:40] <shang> hi all, can anyone tell me more about multi-site implementation for Ceph?
[4:42] <pawel_v> dmick: well, it connected to 10.16.20.11:6789
[4:42] <pawel_v> and it's sending/receiving data
[4:44] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[4:44] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[4:46] <pawel_v> It connected to other OSD, 10.16.20.11:6801, sent something, but never received anything back, even though it tried to
[4:46] * BillK (~BillK-OFT@58-7-181-232.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[4:46] <pawel_v> dmick: that's pretty much all I can ascertain :-/
[4:50] * BillK (~BillK-OFT@58-7-167-189.dyn.iinet.net.au) has joined #ceph
[4:52] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:52] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:55] <pawel_v> Is there any way I can extract data directly from the data directory, as regular file ?
[4:56] <pawel_v> ** regular files
[4:59] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[5:00] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Leaving...)
[5:00] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:06] * yuriw (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) has joined #ceph
[5:06] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[5:12] * yuriw2 (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:13] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:14] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[5:17] * Vacum_ (~vovo@i59F79A17.versanet.de) has joined #ceph
[5:17] * Cnidus (~cnidus@2601:9:7b80:8c7:189f:14b0:38b0:34c9) has joined #ceph
[5:18] * BillK (~BillK-OFT@58-7-167-189.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[5:22] * Cnidus (~cnidus@2601:9:7b80:8c7:189f:14b0:38b0:34c9) Quit (Read error: Connection reset by peer)
[5:23] * BillK (~BillK-OFT@106-68-241-105.dyn.iinet.net.au) has joined #ceph
[5:24] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[5:24] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit ()
[5:24] * Vacum (~vovo@i59F792CE.versanet.de) Quit (Ping timeout: 480 seconds)
[5:25] * Cnidus (~cnidus@2601:9:7b80:8c7:189f:14b0:38b0:34c9) has joined #ceph
[5:25] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Remote host closed the connection)
[5:32] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[5:42] * sarob (~sarob@2601:9:7080:13a:8543:6502:1da0:81aa) has joined #ceph
[5:46] * JoeGruher (~JoeGruher@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[5:50] * sarob (~sarob@2601:9:7080:13a:8543:6502:1da0:81aa) Quit (Ping timeout: 480 seconds)
[5:54] * Cnidus (~cnidus@2601:9:7b80:8c7:189f:14b0:38b0:34c9) Quit (Read error: Connection reset by peer)
[5:56] * Cnidus (~cnidus@2601:9:7b80:8c7:189f:14b0:38b0:34c9) has joined #ceph
[5:58] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:04] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[6:07] * haomaiwa_ (~haomaiwan@117.79.232.197) Quit (Remote host closed the connection)
[6:08] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[6:10] * haomaiwa_ (~haomaiwan@117.79.232.187) has joined #ceph
[6:11] * haomaiwa_ (~haomaiwan@117.79.232.187) Quit (Remote host closed the connection)
[6:11] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Read error: Connection reset by peer)
[6:12] * haomaiwang (~haomaiwan@49.4.189.43) has joined #ceph
[6:16] * BillK (~BillK-OFT@106-68-241-105.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[6:19] * BillK (~BillK-OFT@58-7-151-211.dyn.iinet.net.au) has joined #ceph
[6:19] * lavi (~lavi@eth-seco21th2-46-193-64-32.wb.wifirst.net) Quit (Quit: Leaving)
[6:30] * KevinPerks1 (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[6:31] * BillK (~BillK-OFT@58-7-151-211.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[6:32] * BillK (~BillK-OFT@106-69-56-113.dyn.iinet.net.au) has joined #ceph
[6:35] * Cnidus (~cnidus@2601:9:7b80:8c7:189f:14b0:38b0:34c9) Quit (Read error: Connection reset by peer)
[6:35] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[6:35] * Cnidus (~cnidus@2601:9:7b80:8c7:189f:14b0:38b0:34c9) has joined #ceph
[6:35] * Cube (~Cube@66-87-64-150.pools.spcsdns.net) Quit (Quit: Leaving.)
[6:37] * haomaiwang (~haomaiwan@49.4.189.43) Quit (Remote host closed the connection)
[6:38] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[6:38] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:d1de:a710:4c59:fe21) has joined #ceph
[6:41] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[6:42] * sarob (~sarob@2601:9:7080:13a:c40e:56b7:ff41:dcf4) has joined #ceph
[6:51] * sarob (~sarob@2601:9:7080:13a:c40e:56b7:ff41:dcf4) Quit (Ping timeout: 480 seconds)
[6:52] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[6:59] <pawel_v> Anybody can help me with understanding how to deal with monitor assertion failure on assert(version == pg_map.version) ?
[7:14] * doubleg (~doubleg@69.167.130.11) Quit (Read error: Operation timed out)
[7:14] * doubleg (~doubleg@69.167.130.11) has joined #ceph
[7:15] * flaxy (~afx@78.130.174.164) has joined #ceph
[7:17] * Psi-Jack_ (~Psi-Jack@psi-jack.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:18] * Psi-Jack_ (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[7:25] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[7:26] * gaveen (~gaveen@220.247.234.28) has joined #ceph
[7:26] * Psi-Jack_ (~Psi-Jack@psi-jack.user.oftc.net) Quit (Read error: Operation timed out)
[7:28] * zerick (~eocrospom@190.118.36.79) has joined #ceph
[7:28] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[7:32] * flaxy (~afx@78.130.174.164) Quit (Ping timeout: 480 seconds)
[7:38] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[7:38] * danieagle (~Daniel@186.214.57.77) has joined #ceph
[7:40] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[7:40] * mattt (~textual@94.236.7.190) has joined #ceph
[7:42] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[7:44] * pawel_v (~vps@c-67-169-176-2.hsd1.ca.comcast.net) has left #ceph
[7:46] * sarob (~sarob@2601:9:7080:13a:6d21:46f9:f5bc:e808) has joined #ceph
[7:54] * sarob (~sarob@2601:9:7080:13a:6d21:46f9:f5bc:e808) Quit (Ping timeout: 480 seconds)
[7:56] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[8:02] * Cnidus (~cnidus@2601:9:7b80:8c7:189f:14b0:38b0:34c9) Quit (Quit: Leaving.)
[8:06] * haomaiwa_ (~haomaiwan@49.4.189.43) has joined #ceph
[8:13] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[8:14] * zerick (~eocrospom@190.118.36.79) Quit (Remote host closed the connection)
[8:16] * Cnidus (~cnidus@2601:9:7b80:8c7:756b:3736:7ae2:5028) has joined #ceph
[8:17] * Cnidus (~cnidus@2601:9:7b80:8c7:756b:3736:7ae2:5028) Quit ()
[8:17] * mkoderer (uid11949@id-11949.ealing.irccloud.com) has joined #ceph
[8:18] * DLange (~DLange@dlange.user.oftc.net) Quit (Quit: an update a day keeps the bugs at bay)
[8:25] <glzhao> Hi folks, anyone use kernel rbd client on rhel6?
[8:26] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[8:36] * imriz (~imriz@82.81.163.130) has joined #ceph
[8:40] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:47] * Cnidus (~cnidus@c-67-164-72-195.hsd1.ca.comcast.net) has joined #ceph
[8:50] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[8:51] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:57] * Cnidus (~cnidus@c-67-164-72-195.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:57] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[9:01] * rendar (~s@host197-178-dynamic.19-79-r.retail.telecomitalia.it) has joined #ceph
[9:03] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: Copywight 2007 Elmer Fudd. All wights wesewved.)
[9:21] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:30] * flaxy (~afx@78.130.174.164) has joined #ceph
[9:31] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[9:31] * ChanServ sets mode +v andreask
[9:31] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit ()
[9:31] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[9:31] * ChanServ sets mode +v andreask
[9:44] * hjjg (~hg@p3EE3262E.dip0.t-ipconnect.de) has joined #ceph
[9:47] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:49] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[9:55] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:59] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[10:01] <jerker> glzhao: i did on my testsystem yesterday. it was very slow. I will now setup KVM with rbd instead and run rhel6(actually sl6) on top of that
[10:02] <jerker> glzhao: it did work though, but io utilization was 100% under very mediocre load.
[10:02] * c74d (~c74d@2002:4404:712c:0:2cf5:8a4c:b6bc:24a0) Quit (Remote host closed the connection)
[10:05] * yanzheng (~zhyan@134.134.137.73) Quit (Quit: Leaving)
[10:11] <glzhao> jerker: thanks very much
[10:11] <glzhao> jerker: did you build the rpm yourself
[10:12] <jerker> glzhao: I used the elrepo.org repository and kernel-ml rpm.
[10:12] * danieagle (~Daniel@186.214.57.77) Quit (Quit: Muito Obrigado por Tudo! :-))
[10:12] <glzhao> jerker: thanks again
[10:13] * allsystemsarego (~allsystem@188.26.167.156) has joined #ceph
[10:22] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:d1de:a710:4c59:fe21) Quit (Ping timeout: 480 seconds)
[10:23] * fdmanana (~fdmanana@bl10-252-34.dsl.telepac.pt) has joined #ceph
[10:25] * yguang11 (~yguang11@2406:2000:ef96:e:4b5:8141:98ce:1570) Quit (Remote host closed the connection)
[10:25] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[10:27] * srenatus (~stephan@185.27.182.2) has joined #ceph
[10:27] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[10:29] * mattt (~textual@94.236.7.190) has joined #ceph
[10:31] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Operation timed out)
[10:35] <srenatus> hmm I'm facing lots of "client misdirects" when using linux 3.11.0's ceph client (mounting a mapped rbd), but no trouble using the same rbd attached to a VM (libvirt, openstack).... any ideas?
[10:36] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[10:37] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[10:37] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[10:39] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:40] * markednmbr1 (~markednmb@cpc8-lewi13-2-0-cust979.2-4.cable.virginm.net) has joined #ceph
[10:40] <markednmbr1> Hi all
[10:41] <markednmbr1> I'm testing ceph with a 2 node setup, i've configured everything following the configuration docs from "node1" (monitor)
[10:41] <markednmbr1> however when activating osd it doesn't seem to mount them on node2
[10:42] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[10:42] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:44] <markednmbr1> the only logs I have on "node" 2 are ceph-osd..log
[10:44] <markednmbr1> and they are empty
[10:47] * thomnico (~thomnico@2a01:e35:8b41:120:c533:3678:1051:e7d2) has joined #ceph
[10:48] <srenatus> hmm everything's fine for files <= 400M
[10:48] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[10:48] <jerker> markednmbr1: do they show up in "ceph osd tree" command?
[10:48] <srenatus> the next biggest file is 79x M, and for this one and all the bigger ones, I/O errors, misdirect warnings
[10:48] <markednmbr1> they aren't in there at all
[10:49] <markednmbr1> even though the prepare and activate commands seemed to work
[10:49] <markednmbr1> hmm
[10:49] <jerker> markednmbr1: you follow the quick installation with ceph-install?
[10:49] <jerker> sorry ceph-deploy
[10:49] <markednmbr1> yea
[10:52] * sleinen (~Adium@2001:620:0:46:8d06:a751:e936:613) has joined #ceph
[10:52] * yguang11 (~yguang11@2406:2000:ef96:e:cc29:7642:5bc7:f642) has joined #ceph
[10:52] <jerker> after the preflight preparations (able to ssh into all nodes with root) it should only be a couple of commands to get the cluster running.. For me it was just "ceph-deploy new n1 n2 n3" "ceph-deploy install n1 n2 n3" "ceph-deploy osd prepare n1:sdb n2:sdb n3:sdb" "ceph-deploy osd activate n1:sdb n2:sdb n3:sdb" and then "ceph osd tree" should work.
[10:53] <jerker> Insert a "ceph-deploy mon create-initial" there too
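
Put together, the sequence jerker describes looks roughly like this (n1/n2/n3 and sdb are placeholders for the actual host and disk names):

    ceph-deploy new n1 n2 n3
    ceph-deploy install n1 n2 n3
    ceph-deploy mon create-initial
    ceph-deploy osd prepare n1:sdb n2:sdb n3:sdb
    ceph-deploy osd activate n1:sdb n2:sdb n3:sdb
    ceph osd tree        # the new OSDs should show up here, up and in
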
[10:54] <markednmbr1> I'm doing ceph-deploy osd prepare node2:/dev/sdb:/dev/sda5
[10:54] <markednmbr1> which completes successfully
[10:55] <markednmbr1> then ceph-deploy osd activate node2:/dev/sdb:/dev/sda5
[10:55] <markednmbr1> which also looks like it works.. but it hasn't mounted it on node2
[10:55] <jerker> mount?
[10:55] <jerker> ah, nothing in "df"
[10:56] <markednmbr1> yea
[10:56] <jerker> [root@esc4 ~]# df | grep sdb
[10:56] <jerker> /dev/sdb1 3905109820 41228 3905068592 1% /var/lib/ceph/osd/ceph-0
[10:56] <markednmbr1> yea exactly
[10:56] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:56] <markednmbr1> I can see the osd's I created on "node1" mounted correctly
[10:56] <markednmbr1> but not "node2"
[10:56] <jerker> Do you have to do a zap first maybe?
[10:57] <markednmbr1> i just tried that and prepared and activated again but nothing..
[10:57] <jerker> Then I do not know. Is the file system and partition created? Can you mount it manually?
[11:01] <markednmbr1> yea it mounts manually fine
[11:01] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Read error: No route to host)
[11:01] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[11:01] * ChanServ sets mode +v andreask
[11:02] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[11:04] <jerker> Strange. I do not know. At my testcluster I have a node where "ceph-create-keys" refuse to exit/finish... Also strange. It works on another identical node.
[11:06] * yguang11 (~yguang11@2406:2000:ef96:e:cc29:7642:5bc7:f642) Quit (Remote host closed the connection)
[11:06] * yguang11 (~yguang11@2406:2000:ef96:e:cc29:7642:5bc7:f642) has joined #ceph
[11:08] * yguang11_ (~yguang11@2406:2000:ef96:e:4c77:d8eb:52f2:b57b) has joined #ceph
[11:10] <dwm> FYI: My experiments with the tgt iscsi target have not been particularly promising thus far, at least when using a VMware machine as the initiator.
[11:14] * yguang11 (~yguang11@2406:2000:ef96:e:cc29:7642:5bc7:f642) Quit (Ping timeout: 480 seconds)
[11:16] <fghaas> dwm: is this with the librbd backing store, or when re-exporting a kernel rbd device?
[11:18] <dwm> That's using the librbd backing store, rather than a kernel RBD device.
[11:19] <dwm> I've since been experimenting with the LIO target in-kernel implementation, and having rather more success.
[11:19] <dwm> However, that depends on using the kernel RBD facilities.
[11:19] <markednmbr1> weird -- every prepare/activate command on that node works but no matter what it doesn't add them to the osd map
[11:20] <dwm> Initial investigations (and the error messages on the VMware host) suggest that tgtd simply doesn't understand the full range of SCSI commands that VMware is trying to use.
[11:20] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz???)
[11:22] <jerker> dwm: interesting. I will go for the KVM/QEMU route. In the long run, if EMC is interested, they will look into it. But Ceph/KVM is very disruptive to what VMware is doing so I would not bet on it until it has got a market share. /personal guess/
[11:22] * ajazdzewski (~quassel@lpz-66.sprd.net) has joined #ceph
[11:23] <dwm> jerker: Hence why I'm looking at iSCSI gateways. Expecting e.g. VMware to implement RADOS support might be a bit far-fetched, particularly with their competing vSAN offering.
[11:25] * yguang11_ (~yguang11@2406:2000:ef96:e:4c77:d8eb:52f2:b57b) Quit (Remote host closed the connection)
[11:25] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[11:26] <jerker> This was a couple of years ago but at that time my old colleagues could not get the native Linux iSCSI stuff to be nearly as stable as the proprietary ones. I sort of gave up that route. But I have not looked into it lately.
[11:26] <jerker> Proprietary hardware I mean.
[11:27] <fghaas> dwm: it's entirely possible that tgt is missing some features that VMware is using; the wireshark packet dissector for your iSCSI traffic could tell you for certain
[11:30] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[11:33] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[11:34] <dwm> fghaas: That's my hypothesis -- hence flagging it as an item of interest.
[11:35] <fghaas> I know that the LIO team (then RisingTide, now Datera) jumped through several hoops to make their stuff work with VMware
[11:36] <fghaas> but I'm sure that others (including me, for one) would appreciate if you could share your findings re tgt/rbd
[11:36] <fghaas> and vmware
[11:37] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[11:37] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[11:37] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[11:37] * yguang11 (~yguang11@2406:2000:ef96:e:5cf0:eb86:cdc7:fc9c) has joined #ceph
[11:51] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[11:56] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[11:59] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:05] * sleinen1 (~Adium@130.59.94.216) has joined #ceph
[12:05] * c74d (~c74d@2002:4404:712c:0:76de:2bff:fed4:2766) has joined #ceph
[12:05] * sleinen (~Adium@2001:620:0:46:8d06:a751:e936:613) Quit (Ping timeout: 480 seconds)
[12:07] * sleinen (~Adium@2001:620:0:26:3da3:ff74:4983:749) has joined #ceph
[12:09] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[12:11] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[12:12] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[12:12] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[12:13] * sleinen1 (~Adium@130.59.94.216) Quit (Ping timeout: 480 seconds)
[12:13] * yguang11 (~yguang11@2406:2000:ef96:e:5cf0:eb86:cdc7:fc9c) Quit (Ping timeout: 480 seconds)
[12:28] <markednmbr1> weird
[12:30] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has left #ceph
[12:30] <markednmbr1> I can see in dmesg it is mounting the device after the activate
[12:31] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:31] <markednmbr1> but it just doesn't show in df
[12:31] <markednmbr1> but if I mount it manually it does
[12:31] <markednmbr1> wtf!
[12:35] * thomnico (~thomnico@2a01:e35:8b41:120:c533:3678:1051:e7d2) Quit (Quit: Ex-Chat)
[12:35] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[12:36] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[12:40] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:41] <markednmbr1> [ 6924.709021] XFS (sdd1): Mounting Filesystem
[12:41] <markednmbr1> [ 6924.798472] XFS (sdd1): Ending clean mount
[12:42] <markednmbr1> nowhere to be seen and not in the osd list
[12:44] * diegows (~diegows@190.190.5.238) has joined #ceph
[12:47] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[12:52] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:53] * pressureman (~pressurem@62.217.45.26) has joined #ceph
[12:54] <jerker> markednmbr1: Does it show up in "mount"?
[12:55] <pressureman> hi... i have a question about enabling rbd caching on a qemu/libvirt VM. is it as simple as adding cache='writeback' to the 'driver' element in the 'disk' node of the vm xml definition?
[12:55] <markednmbr1> i've reset everything and am trying with the monitor on the host that wasnt working
[12:55] <pressureman> e.g. <driver name='qemu' cache='writeback'/>
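
A hedged sketch of what pressureman is asking about, assuming a qemu recent enough to pass its cache mode through to librbd: the per-disk cache mode goes in the libvirt XML as he shows, and librbd's own cache is controlled by "rbd cache" in the [client] section of ceph.conf. The VM name below is a placeholder.

    grep -A2 '\[client\]' /etc/ceph/ceph.conf   # expect something like: rbd cache = true
    virsh edit myvm                             # set cache='writeback' on the disk's driver element, e.g.
                                                #   <driver name='qemu' type='raw' cache='writeback'/>
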
[12:56] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[12:59] * i_m (~ivan.miro@gbibp9ph1--blueice1n2.emea.ibm.com) has joined #ceph
[13:00] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:00] * i_m (~ivan.miro@gbibp9ph1--blueice1n2.emea.ibm.com) Quit ()
[13:00] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[13:02] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[13:05] <dwm> fghaas: Nothing I'm doing is confidential; I'll see what I can arrange.
[13:06] * stus (~keny@163.117.85.196) has joined #ceph
[13:06] <stus> hello
[13:07] * thomnico (~thomnico@2a01:e35:8b41:120:c533:3678:1051:e7d2) has joined #ceph
[13:08] <stus> I get a strange error when initially deploying a monitor (Ubuntu 12.04), initctl emit ceph-mon fails, and ceph-deploy stops reporting an error
[13:08] <stus> the strange thing is that I can start ceph-mon with `service ceph-mon start id=blah` just fine
[13:08] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has left #ceph
[13:10] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:11] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Ping timeout: 480 seconds)
[13:14] * garphy`aw is now known as garphy
[13:18] * glzhao (~glzhao@220.181.11.232) Quit (Quit: leaving)
[13:23] <alfredodeza> stus: what is the error? :)
[13:23] <markednmbr1> hmm now it won't mount them on either node :P
[13:23] <markednmbr1> jerker: no it doesn't
[13:25] * owenmurr (~owenmurr@193.60.143.15) has joined #ceph
[13:29] * circ-user-x1H1v (~circuser-@hq01.euroweb.de) has joined #ceph
[13:29] * fdmanana (~fdmanana@bl10-252-34.dsl.telepac.pt) Quit (Quit: Leaving)
[13:30] <circ-user-x1H1v> hello
[13:31] <circ-user-x1H1v> may i ask a question
[13:32] <dwm> circ-user-x1H1v: Sure, though there's no guarantee anyone around right now will know the answer. :-)
[13:33] <circ-user-x1H1v> thats fine, i might give you guys a try :P
[13:33] <circ-user-x1H1v> I've set up a test cluster with 3 mon servers
[13:34] <circ-user-x1H1v> running on xenserver - had to reboot this server. now 1 mon is not working properly
[13:35] <circ-user-x1H1v> trying to stop and start this mon again using (sudo start/stop ceph-mon-all)
[13:35] <circ-user-x1H1v> but ceph -s still saying health HEALTH_WARN 1 mons down
[13:36] <circ-user-x1H1v> 0: 192.168.77.119:6789/0 mon.CEPHSERVER
[13:36] <circ-user-x1H1v> 1: 192.168.77.121:6789/0 mon.CEPHNODE01
[13:36] <circ-user-x1H1v> 2: 192.168.77.122:6789/0 mon.CEPHNODE02
[13:36] <circ-user-x1H1v> did i do anything wrong?
[13:37] <circ-user-x1H1v> the above output created by ceph mon dump
[13:37] <dwm> circ-user-x1H1v: I would suggest looking at the logs for the Ceph mon node, typically in /var/log/ceph/.
[13:38] <dwm> One issue I've been having on some of my smaller nodes is running out of disk space; I'd also run `df -h` to see if your filesystems have gone full.
[13:38] <jerker> circ-user-x1H1v: Try "ceph health detail" to see what mon is down.
[13:38] <jerker> circ-user-x1H1v: "ceph mon dump" only seem to list what should be up, if all is fine.
[13:38] <circ-user-x1H1v> ah
[13:38] <circ-user-x1H1v> output as follow
[13:38] <circ-user-x1H1v> mon.CEPHNODE02 (rank 2) addr 192.168.77.122:6789/0 is down (out of quorum)
[13:39] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[13:39] * ChanServ sets mode +v andreask
[13:40] <circ-user-x1H1v> output ceph-mon.CEPHNODE02.log
[13:40] <circ-user-x1H1v> 2014-03-11 13:35:06.287876 7ff0018d1780 -1 accepter.accepter.bind unable to bind to 0.0.0.0:6800: Address already in use
[13:41] <jerker> circ-user-x1H1v: I would have stopped ceph on that node, checked that no ceph processes are still alive ("ps axuw | grep ceph"), and then restarted ceph. (This is nothing ceph-specific, just my general way of solving such things.)
[13:42] <jerker> sorry for my lousy english
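
A sketch of jerker's suggestion for the node with the down monitor, using the upstart jobs circ-user-x1H1v already mentioned and the monitor name from the health output above:

    ps axuw | grep ceph                 # check for leftover ceph-mon/ceph-osd processes holding ports
    sudo stop ceph-mon id=CEPHNODE02    # "Unknown instance" here just means it was not running
    sudo start ceph-mon id=CEPHNODE02
    ceph health detail                  # mon.CEPHNODE02 should leave the "down (out of quorum)" list
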
[13:43] <markednmbr1> Hmmm now when trying to activate I am getting "librados: client.bootstrap-osd authentication error (1) Operation not permitted"
[13:43] <jerker> markednmbr1: you are not running SELINUX or something at the node?
[13:44] <markednmbr1> nope
[13:44] <kraken> http://i.imgur.com/foEHo.gif
[13:44] <jerker> markednmbr1: doublecheck :-)
[13:44] <markednmbr1> its a base debian wheezy install
[13:44] <markednmbr1> no selinux
[13:44] <jerker> good :)
[13:48] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:50] * stus (~keny@163.117.85.196) Quit (Quit: This computer has gone to sleep)
[13:53] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[13:55] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[13:55] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[13:56] <markednmbr1> sigh
[14:01] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[14:03] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:08] <markednmbr1> don't see how I can't have permission to the cluster, as ceph status works
[14:08] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[14:18] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[14:18] * owenmurr (~owenmurr@193.60.143.15) Quit (Quit: Lost terminal)
[14:20] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[14:21] <markednmbr1> just reinstalled for the 5th time and its all working now
[14:21] <markednmbr1> I have NO idea what I was doing wrong
[14:21] <markednmbr1> :s
[14:21] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Quit: ZNC - http://znc.sourceforge.net)
[14:22] * sroy (~sroy@207.96.182.162) has joined #ceph
[14:23] * fdmanana (~fdmanana@bl10-252-34.dsl.telepac.pt) has joined #ceph
[14:28] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[14:28] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit (Quit: mtanski)
[14:31] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (Read error: No route to host)
[14:31] * JeffK (~JeffK@38.99.52.10) Quit (Read error: No route to host)
[14:32] * simulx (~simulx@vpn.expressionanalysis.com) has joined #ceph
[14:34] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[14:34] <jackhill> Hi, does CephFS support strong authentication and server mediated authorization as described here http://www.gluster.org/community/documentation/index.php/Strong_Authentication (best description I could find). If not, is it on the roadmap?
[14:40] <jackhill> ah, found http://wiki.ceph.com/Planning/Blueprints/Firefly/Strong_AuthN_and_AuthZ_for_CephFS
[14:41] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[14:46] * BillK (~BillK-OFT@106-69-56-113.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[14:52] * sleinen (~Adium@2001:620:0:26:3da3:ff74:4983:749) Quit (Quit: Leaving.)
[14:52] * sleinen (~Adium@130.59.94.216) has joined #ceph
[14:52] * hjjg_ (~hg@p3EE33164.dip0.t-ipconnect.de) has joined #ceph
[14:54] * hjjg (~hg@p3EE3262E.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[14:57] * sleinen1 (~Adium@2001:620:0:26:2dff:eb98:7fd8:9a04) has joined #ceph
[14:58] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[15:00] * sleinen (~Adium@130.59.94.216) Quit (Ping timeout: 480 seconds)
[15:01] * markbby (~Adium@168.94.245.4) has joined #ceph
[15:03] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[15:10] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[15:12] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[15:13] * jedanbik (~jedanbik@dhcp152-54-6-141.wireless.europa.renci.org) has joined #ceph
[15:16] * loicd reading http://redhatstorage.redhat.com/2014/03/04/red-hat-storage-outperforms-ceph-by-more-than-2x-using-small-file-io-workloads-for-openstack-clouds/
[15:19] <loicd> there seem to be enough details to repeat the experiment, that's interesting
[15:20] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[15:20] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[15:20] <dmsimard> ahah, that's kind of a low punch, considering ceph just certified with redhat
[15:20] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[15:21] <loicd> no doubt this will match but the graphs are so consistently favorable to gluster that some metrics have been concealed.
[15:21] <dmsimard> that's marketing for you
[15:21] <loicd> s/that some/that I'm convinced that/
[15:21] <kraken> loicd meant to say: no doubt this will match but the graphs are so consistently favorable to gluster that I'm convinced that metrics have been concealed.
[15:21] <loicd> :-D
[15:22] <darkfader> loicd: if they benchmarked using fuse there's the immediate question about sync writes etc
[15:22] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:23] <darkfader> there might be a performance benefit testing gluster with ib verbs vs. ceph
[15:24] <darkfader> but i'm really puzzled what changed in gluster to make small io fast
[15:24] <darkfader> thats where it always sucked
[15:24] <loicd> marketing ?
[15:24] * darkfader slaps back of head
[15:24] <darkfader> yes you're right.
[15:24] <loicd> you find *one* case where it performs better and you broadcast it, prime time
[15:25] <loicd> it may be genuine, I don't really know ;-) but I would be surprised, as much as you.
[15:26] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (Read error: Connection reset by peer)
[15:27] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[15:27] * srenatus (~stephan@185.27.182.2) Quit (Quit: leaving)
[15:28] <darkfader> would it be mean to ask RH if they expect further speedups once someone triggered a self-heal?
[15:28] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Bye!)
[15:29] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[15:29] * markednmbr1 (~markednmb@cpc8-lewi13-2-0-cust979.2-4.cable.virginm.net) Quit (Quit: Leaving)
[15:29] <darkfader> two things i noticed over the last 2 weeks: gluster has some iscsi frontend now (maybe actually a lio backend, which would also bring in a async flag)
[15:30] <darkfader> and there's a libgfs or something that is used by opennebula
[15:30] <darkfader> so they found a way to stop using their fuse client without anyone losing face
[15:31] <darkfader> ah, they didn't use ssds for journal i think
[15:32] <darkfader> and yup, there's ib adapters in the servers, just coincidence of course
[15:38] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:41] * gaveen (~gaveen@220.247.234.28) Quit (Remote host closed the connection)
[15:41] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[15:42] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[15:49] <pressureman> is it possible to get around a "currently waiting for missing object" problem? i've lost one out of two OSDs on a cluster, and am trying to get the one remaining OSD to continue
[15:49] <pressureman> i realise i've lost up to 3 PGs
[15:50] <pressureman> at the moment the cluster seems to just be hung however
[15:50] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[15:50] <pressureman> osd.0 showing slow requests and "v4 waiting for missing object"
[15:50] <andreask> min-size for the pool is 1?
[15:52] <pressureman> yes i've reduced it to 1
[15:53] <pressureman> it seems like the cluster just won't carry on until it gets past this missing object
[15:53] <pressureman> ceph pg query just hangs forever
[15:54] <pressureman> i've marked osd.1 as lost
[15:54] <pressureman> and removed it from crush
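
For readers hitting the same situation: the commands below are a hedged sketch of the usual way to let a cluster proceed when objects only existed on a lost OSD; the pool name and PG id are placeholders, and reverting unfound objects throws away the missing data for good.

    ceph osd lost 1 --yes-i-really-mean-it   # already done above: declare osd.1 permanently gone
    ceph osd pool set rbd min_size 1         # allow PGs to go active with a single remaining copy
    ceph health detail | grep unfound        # list the PGs still waiting on unfound objects
    ceph pg 2.5 mark_unfound_lost revert     # per stuck PG id: give up on its unfound objects
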
[15:55] <jerker> darkfader: do they mention how redundant (how many replicas) their setups are?
[15:56] <darkfader> jerker: i didn't see it; but gluster would scale better with that (at least to some point)
[15:57] <darkfader> the fastest you could do in their setup is striping with replicas (nodes) and the "cache" translator
[15:57] <darkfader> which is just a nice way of saying you write to memory with no guarantees
[15:57] <darkfader> stripe/mirror/cache is the only useful setup and makes you have grey hair
[15:58] <darkfader> jerker: have a look in the pdf, the last pages are more tech and less bullshit
[16:00] <jerker> darkfader: i am reading but have not found it yet
[16:01] <darkfader> hehe
[16:01] * Cnidus (~cnidus@2601:9:7b80:8c7:3951:d9c:7785:d26) has joined #ceph
[16:03] * gdavis33 (~gdavis@38.122.12.254) has left #ceph
[16:07] <gregsfortytwo1> loicd: unless their docs are better than the previous results they released, I don't think there are enough details
[16:07] <gregsfortytwo1> as they don't specify the gluster translators
[16:08] <loicd> gregsfortytwo1: :-/
[16:08] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[16:08] <gregsfortytwo1> and some of the translators have names like "writeback-cache"
[16:09] <gregsfortytwo1> which you can use on both clients and servers
[16:09] * Cnidus (~cnidus@2601:9:7b80:8c7:3951:d9c:7785:d26) Quit (Ping timeout: 480 seconds)
[16:09] * schlitzer|work (~schlitzer@2a02:2e0:2810:0:224:27ff:fefe:4091) has joined #ceph
[16:11] * mkoderer (uid11949@id-11949.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[16:21] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:23] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[16:30] * circ-user-x1H1v (~circuser-@hq01.euroweb.de) Quit (Remote host closed the connection)
[16:38] * Cnidus (~cnidus@24.130.34.71) has joined #ceph
[16:43] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:45] * JoeGruher (~JoeGruher@134.134.139.70) has joined #ceph
[16:49] * sarob (~sarob@2601:9:7080:13a:e104:46e0:6718:b25) has joined #ceph
[16:51] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[16:52] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[16:53] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[16:54] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[16:55] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Bye!)
[16:55] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[16:55] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[16:59] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:08] * thomnico (~thomnico@2a01:e35:8b41:120:c533:3678:1051:e7d2) Quit (Quit: Ex-Chat)
[17:09] * thomnico (~thomnico@2a01:e35:8b41:120:c533:3678:1051:e7d2) has joined #ceph
[17:11] <pressureman> is there a way to cancel blocked io requests? i have an OSD that has been blocking for over an hour
[17:13] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit (Quit: Leaving.)
[17:14] * ajazdzewski (~quassel@lpz-66.sprd.net) Quit (Remote host closed the connection)
[17:21] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:22] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Quit: ZNC - http://znc.sourceforge.net)
[17:25] * bandrus (~Adium@adsl-75-5-250-121.dsl.scrm01.sbcglobal.net) has joined #ceph
[17:27] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:29] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[17:33] * nwat (~textual@eduroam-231-66.ucsc.edu) has joined #ceph
[17:37] * sleinen1 (~Adium@2001:620:0:26:2dff:eb98:7fd8:9a04) Quit (Quit: Leaving.)
[17:37] * sleinen (~Adium@130.59.94.216) has joined #ceph
[17:39] * sleinen1 (~Adium@130.59.94.216) has joined #ceph
[17:39] * sleinen (~Adium@130.59.94.216) Quit (Read error: Connection reset by peer)
[17:39] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Read error: Connection reset by peer)
[17:39] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[17:40] * rmoe (~quassel@12.164.168.117) has joined #ceph
[17:40] * sleinen (~Adium@2001:620:0:25:8c6e:d02a:8cb3:c431) has joined #ceph
[17:43] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:45] * thomnico (~thomnico@2a01:e35:8b41:120:c533:3678:1051:e7d2) Quit (Quit: Ex-Chat)
[17:45] * thomnico (~thomnico@2a01:e35:8b41:120:c533:3678:1051:e7d2) has joined #ceph
[17:45] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Ping timeout: 480 seconds)
[17:47] * sleinen1 (~Adium@130.59.94.216) Quit (Ping timeout: 480 seconds)
[17:49] * alram (~alram@38.122.20.226) has joined #ceph
[17:51] * kaizh (~kaizh@128-107-239-234.cisco.com) has joined #ceph
[17:54] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:54] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[17:55] * Vacum (~vovo@i59F4AD06.versanet.de) has joined #ceph
[17:56] * Vacum_ (~vovo@i59F79A17.versanet.de) Quit (Read error: Connection reset by peer)
[17:57] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) has joined #ceph
[17:58] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[17:59] * mattt (~textual@94.236.7.190) Quit (Read error: Operation timed out)
[18:00] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[18:01] * pressureman (~pressurem@62.217.45.26) Quit (Quit: Ex-Chat)
[18:02] * markbby (~Adium@168.94.245.4) has joined #ceph
[18:08] * markbby1 (~Adium@168.94.245.4) has joined #ceph
[18:08] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[18:09] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[18:12] * JeffK (~JeffK@38.99.52.10) has joined #ceph
[18:12] * Cube (~Cube@12.248.40.138) has joined #ceph
[18:17] * sarob (~sarob@2601:9:7080:13a:e104:46e0:6718:b25) Quit (Remote host closed the connection)
[18:17] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:17] * sarob (~sarob@2001:4998:effd:7801::1002) has joined #ceph
[18:20] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[18:21] * markednmbr1 (~markednmb@cpc8-lewi13-2-0-cust979.2-4.cable.virginm.net) has joined #ceph
[18:21] <markednmbr1> Hello
[18:21] <markednmbr1> so, how does ceph add data to the osds by default
[18:22] <markednmbr1> i've followed the basic config docs, and have 2 nodes with osds (1 monitor)
[18:22] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Ping timeout: 480 seconds)
[18:22] <markednmbr1> when I add data it seems to spread it across all osd's - can someone point me to docs on how this works?
[18:23] <markednmbr1> im adding data through archipelago (librados)
[18:24] <markednmbr1> is this where I need to learn about crush maps? :)
[18:25] <ircolle> markednmbr1 - http://ceph.com/docs/master/rados/operations/crush-map/
[18:25] <markednmbr1> thanks!
[18:26] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[18:26] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[18:27] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[18:27] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Quit: sync && halt)
[18:27] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[18:32] <markednmbr1> so it looks like by default it knows to replicate between 2 hosts
[18:33] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:36] * sarob_ (~sarob@2601:9:7080:13a:e104:46e0:6718:b25) has joined #ceph
[18:36] * sarob (~sarob@2001:4998:effd:7801::1002) Quit (Read error: Connection reset by peer)
[18:36] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[18:41] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[18:42] <Vacum> markednmbr1: no. the default crush rule distributes replicas of all placement groups over all OSD hosts
[18:42] <Vacum> markednmbr1: the number of replicas is defined for each pool
[18:43] <markednmbr1> ok, so that isn't in the crush map
[18:43] <markednmbr1> ?
[18:44] <Vacum> markednmbr1: perhaps I misunderstood your "so it looks like by default it knows to replicate between 2 hosts" ? :)
[18:45] <markednmbr1> No I'm most likely wrong, it just looked like it had written the same amount to node2 as it had to node1
[18:45] <Vacum> markednmbr1: by default it will distribute your pg replicas over all hosts in your crushmap.
[18:45] <markednmbr1> I see that it doesn't have any rules for the pools i created now
[18:46] * markbby1 (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[18:46] <markednmbr1> Vacum - so it doesn't know any difference between osd-{0,1,2} and osd-{3,4,5}
[18:46] <Vacum> markednmbr1: its not 1 rule per pool. you define a set of rules and then assign one of those rules to each pool
[18:46] <markednmbr1> which are on different hosts
[18:47] <markednmbr1> oh right I just see the rules are called the same as the pools in the default install?
[18:47] <Vacum> markednmbr1: from the default crushmap all OSDs on one host are treated equally, yes
[18:47] <Vacum> and the default crush rules
[18:47] <markednmbr1> so it is not making the replica of 0,1,2 be 3,4,5?
[18:47] * markbby (~Adium@168.94.245.4) has joined #ceph
[18:47] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[18:48] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:48] <Vacum> markednmbr1: replication is not done on osd level. its not like osd.0 replicates with osd.3
[18:48] <Vacum> markednmbr1: replication is done on placement group level
[18:48] * hjjg_ (~hg@p3EE33164.dip0.t-ipconnect.de) Quit (Quit: Lost terminal)
[18:48] <markednmbr1> ok
[18:50] <Vacum> markednmbr1: ie you create a new pool with 300 placementgroups and replica count 2. this will result in 300 * 2 pgs being created. and the default rule will distribute each pg's 2 replicas to 2 different hosts, on a "random" osd there
[18:50] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[18:51] <markednmbr1> ok, which rule in the crush map is the default one, so I can see how that distribution works?
[18:51] <Vacum> mh, i don't have a default crush map at hand at the moment :)
[18:51] <markednmbr1> is it "rule data"
[18:51] <markednmbr1> ruleset 0
[18:51] <Vacum> yes
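The default rule being referenced can be inspected by dumping and decompiling the crushmap; this is a sketch, and the rule shown in the comment is only representative of a stock crushmap of this era (rule names and numbers may differ):

    ceph osd getcrushmap -o /tmp/crushmap.bin             # binary crushmap from the cluster
    crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt   # decompile to readable text
    # a stock replicated rule typically looks roughly like:
    #   rule data {
    #       ruleset 0
    #       type replicated
    #       min_size 1
    #       max_size 10
    #       step take default
    #       step chooseleaf firstn 0 type host   # one replica per host
    #       step emit
    #   }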
[18:52] * garphy is now known as garphy`aw
[18:53] <markednmbr1> ok i'm going to have to take a lot more time learning that I think! :)
[18:53] <Vacum> markednmbr1: you can run ceph osd dump |less and right at the top see all your pools and which rule they use
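If the full dump is too noisy, the per-pool settings mentioned here can also be queried individually; "mypool" is a placeholder:

    ceph osd pool get mypool size            # replica count
    ceph osd pool get mypool pg_num          # number of placement groups
    ceph osd pool get mypool crush_ruleset   # which CRUSH rule the pool uses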
[18:53] <markednmbr1> just to go back a bit, when I created the pool I used "100 100"
[18:53] <markednmbr1> so it has 100 placement groups
[18:53] <markednmbr1> how should I correctly specify the number of placement groups?
[18:54] <Vacum> markednmbr1: there is a rule of thumb that you should end up with 100 pgs per OSD
[18:54] <markednmbr1> are they a measurement of storage or something else
[18:55] <Vacum> markednmbr1: if you store an object to your pool, the object is mapped (with a hash of its name) to a placement group. so this decides in which pg it gets stored
[18:55] <Vacum> markednmbr1: and the pgs are mapped with crush to the osds, according to your crushmap and rule
[18:56] <Vacum> markednmbr1: more pgs mean more crush calculations. fewer pgs mean potentially less evenly filled osds (iirc)
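The object-to-PG-to-OSD mapping Vacum describes can be observed for any object name; a sketch, with the pool and object names as placeholders:

    # prints the PG the object name hashes to and the OSDs that PG maps to
    ceph osd map mypool someobject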
[18:57] <markednmbr1> ok so it's nothing to do with the amount of storage
[18:57] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:57] <markednmbr1> just about how to evenly distribute the data?
[18:57] <markednmbr1> (pgs)
[18:58] * The_Bishop (~bishop@2001:470:50b6:0:ac66:1a90:db0d:ffec) Quit (Ping timeout: 480 seconds)
[18:58] * tryggvil (~tryggvil@178.19.53.254) Quit (Ping timeout: 480 seconds)
[18:58] <Vacum> and about replication
[18:59] * imriz (~imriz@82.81.163.130) Quit (Ping timeout: 480 seconds)
[18:59] <Vacum> and with a more sophisticated crush rule you can influence how pgs are mapped to osds, i.e. ensure that replicas of pgs are stored in different racks
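A sketch of the kind of rack-aware rule being described, assuming the crushmap hierarchy already contains rack buckets; the rule name and ruleset number are illustrative. The rule would be added to the decompiled crushmap, then recompiled and injected back:

    rule rack_replicated {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack   # place each replica in a different rack
        step emit
    }
    # crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
    # ceph osd setcrushmap -i /tmp/crushmap.new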
[18:59] <markednmbr1> is it possible to change the amount of pgs in a pool after its creation?
[18:59] <markednmbr1> it seems that total PGs should be (osds * 100) / replicas
[18:59] <Vacum> markednmbr1: increase yes, decrease not afaik
[19:00] <markednmbr1> so if you add new osds then you need to increase this amount right?
[19:00] <markednmbr1> ah right ok that makes sense
[19:00] * sleinen1 (~Adium@2001:620:0:46:850a:43e:3c73:8ce9) has joined #ceph
[19:00] <Vacum> markednmbr1: but: increasing the number of pgs will result in data shifting around
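A sketch of how that increase is usually applied, using the sizing rule of thumb from above; the pool name and target count are illustrative:

    # rule of thumb: total PGs ≈ (number of OSDs * 100) / replica count,
    # e.g. 6 OSDs with 2 replicas -> ~300, often rounded to a power of two
    ceph osd pool set mypool pg_num 256
    ceph osd pool set mypool pgp_num 256   # data only reshuffles once pgp_num follows pg_num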
[19:00] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[19:01] <markednmbr1> right - so it would immediately rebalance on to the new osds
[19:01] <markednmbr1> ?
[19:01] <Vacum> markednmbr1: when adding osds, this is even true without changing the number of pgs
[19:02] <Vacum> markednmbr1: you can set new osd's weight to 0, then increase slowly
[19:02] <markednmbr1> ok, how does it manage the load when you are doing that on a live cluster
[19:02] <markednmbr1> I see, you keep modifying the map slowly?
[19:03] <Vacum> yes. plus you can define how many pgs are concurrently backfilling
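A sketch of the gradual approach described above, assuming a newly added osd.6; the weights and throttle values are illustrative:

    # start the new OSD with zero CRUSH weight, then raise it in small steps
    ceph osd crush reweight osd.6 0.0
    ceph osd crush reweight osd.6 0.2
    # ...repeat until the weight matches the drive's capacity (commonly its size in TB)
    # throttle concurrent backfills so client I/O is not starved
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'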
[19:03] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[19:04] <markednmbr1> ok, this is very interesting
[19:04] <markednmbr1> thanks for your help!
[19:04] <Vacum> welcome
[19:04] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:07] * sleinen (~Adium@2001:620:0:25:8c6e:d02a:8cb3:c431) Quit (Ping timeout: 480 seconds)
[19:07] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[19:10] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[19:12] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[19:12] * kaizh (~kaizh@128-107-239-234.cisco.com) Quit (Read error: Connection reset by peer)
[19:13] * kaizh (~kaizh@128-107-239-236.cisco.com) has joined #ceph
[19:13] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[19:20] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (Quit: ZNC - http://znc.in)
[19:20] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Read error: Connection reset by peer)
[19:21] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[19:21] * ChanServ sets mode +v andreask
[19:21] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[19:23] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[19:29] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[19:36] * kaizh (~kaizh@128-107-239-236.cisco.com) Quit (Remote host closed the connection)
[19:41] * bitblt (~don@128-107-239-235.cisco.com) has joined #ceph
[19:41] * jedanbik (~jedanbik@dhcp152-54-6-141.wireless.europa.renci.org) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[19:42] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[19:43] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[19:48] * garphy`aw is now known as garphy
[19:51] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[19:53] * garphy is now known as garphy`aw
[19:53] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has left #ceph
[19:54] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[19:54] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[19:54] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[19:55] <ksingh> hello cephers, what are your thoughts on using RAID 0 underneath ceph OSDs? do you see any advantage or disadvantage in this kind of setup?
[19:56] <singler> I use RAID 0 of one disk (I guess many people do this)
[19:57] * nwat (~textual@eduroam-231-66.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:57] <singler> there is no use for striping at the disk level, because the OSDs stripe data; also, if one disk in a RAID 0 failed, the whole RAID would fail
[19:58] <dmsimard> oh noes, where is scuttlemonkey
[19:59] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[20:00] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[20:01] <ksingh> singler: right now our POC ceph cluster uses RAID 0 (single disk stripe) underneath, do you think we should not use RAID 0 when moving to production?
[20:01] <ksingh> need your advice
[20:02] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[20:02] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[20:02] * markbby (~Adium@168.94.245.4) has joined #ceph
[20:02] <Vacum> ksingh: it adds another layer of indirection. and possible mis-alignment (ie if using 4K drives that emulate 512b sectors)
[20:03] <singler> single disk raid 0 is the same as no RAID (some controllers do not give access to disks without raid)
[20:03] <singler> or what Vacum says
[20:04] <bens> my servers (HP) make me do raid 0 to do anything
[20:04] <Vacum> singler: single disk raid 0 still takes a few sectors away from the disk. the controller uses that to store raid / volume group information there
[20:04] <Vacum> advantage: if you use a controller with a BBU, you can safely enable write-back cache with that setup
[20:05] <ksingh> bens: we also use HP DL380 servers, do you know whether the raid card allows using disks with RAID 0 or with no raid at all?
[20:05] * nwat (~textual@eduroam-231-66.ucsc.edu) has joined #ceph
[20:05] <bens> raid 0 if you want to present one drive
[20:05] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[20:05] <bens> depends on the controller. we use the 420i for our ceph
[20:06] <Vacum> we did raid0 single drives too on lsi controllers - after reboots we always lost at least one raid0 that way. the controller didn't cope well with it
[20:06] <bens> HP's appear to be fine.
[20:06] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[20:06] <singler> I have no problems with HP too
[20:07] <bens> hubba hubba
[20:07] <bens> http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=6464822
[20:07] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[20:08] <bens> 54 drives in 3.4u
[20:08] <ksingh> so i think we should change our setup from RAID 0 to no raid at all, is my understanding correct?
[20:08] <bens> ksingh: you can't. you have to use raid zero for a 1:1 ratio of drive to OSDs
[20:09] <bens> That is, with the raid controller your server likely has
[20:10] <ksingh> thanks Bens, Singler, Vacum for the advice
[20:11] <bens> I am here for you.
[20:11] <singler> np
[20:15] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[20:16] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[20:18] <dmick> dwm: interested to know which commands are failing; there is a test suite in stgt
[20:24] <dwm> dmick: Numerous errors of the form reported VMware-side: 2014-03-10T11:02:13.347Z cpu2:670324)ScsiDeviceIO: 2337: Cmd(0x412e8020c280) 0x89, CmdSN 0x5b46 from world 36882 to dev "t10.IET_____000100020000000000000000000000000000000000000000" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.
[20:24] * sarob_ (~sarob@2601:9:7080:13a:e104:46e0:6718:b25) Quit (Remote host closed the connection)
[20:24] * sarob (~sarob@2601:9:7080:13a:e104:46e0:6718:b25) has joined #ceph
[20:25] <dwm> Unpacking the sense data, 0x5 indicates ILLEGAL REQUEST, while 0x20 indicates INVALID COMMAND OPERATION CODE
[20:26] * SpamapS (~clint@184.105.137.237) Quit (Read error: Operation timed out)
[20:28] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[20:28] <dwm> ... which makes sense. Looking at http://www.t10.org/lists/op-num.htm#OPG_2, 10-byte commands prefixed with 41 indicate WRITE SAME, which is what the VMware host will have been doing at the time: formatting a new VMDK by filling the content with zeroes.
[20:29] * jedanbik (~jedanbik@dhcp152-54-6-141.wireless.europa.renci.org) has joined #ceph
[20:30] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[20:30] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[20:31] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[20:31] <dmick> ah. that can probably be fixed fairly easily. let me have a look at the code and see if it's generic or rbd-driver
[20:32] <dwm> I did note that the ILO code did indeed write out zeroes across the entire RBD block device. Possible opportunity for optimisation?
[20:32] <dmick> ILO?
[20:33] * sarob (~sarob@2601:9:7080:13a:e104:46e0:6718:b25) Quit (Ping timeout: 480 seconds)
[20:33] <dwm> Sorry, I keep typoing that -- LIO iSCSI target implementation -- see also linux-iscsi.org.
[20:33] <dmick> ah.
[20:33] <dmick> and yes, AFAIK LIO doesn't have any knowledge of rbd, so it's treating it like a normal device
[20:33] <dwm> (Merged in-kernel as of 2.6.38.)
[20:34] <dmick> there are opportunities there
[20:36] <dwm> Hmm, according to this stgt email from May 2012, stgt claims support for WRITE SAME(10). Perhaps it doesn't handle all cases exercised by a modern VMware host?
[20:36] <dwm> (Link: http://lists.wpkg.org/pipermail/stgt/2012-May/005264.html)
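For anyone debugging a similar target-side rejection, the failing opcode can be probed from the initiator with sg3_utils; a sketch, assuming the iSCSI LUN appears as /dev/sdX (the WRITE SAME call overwrites data, so use a scratch LUN only):

    sg_opcodes /dev/sdX                                           # list opcodes the target reports as supported
    sg_write_same --10 --lba=0 --num=1 --in=/dev/zero /dev/sdX    # reproduce a single WRITE SAME(10)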
[20:40] * sarob (~sarob@2601:9:7080:13a:9cf8:b386:9855:374) has joined #ceph
[20:40] * JCL (~JCL@2601:9:5980:39b:3d2b:c85:8d36:3108) Quit (Quit: Leaving.)
[20:41] * sarob (~sarob@2601:9:7080:13a:9cf8:b386:9855:374) Quit (Remote host closed the connection)
[20:41] * JCL (~JCL@2601:9:5980:39b:81a6:e0e2:8d71:9a71) has joined #ceph
[20:41] * sarob (~sarob@2601:9:7080:13a:9cf8:b386:9855:374) has joined #ceph
[20:47] * kaizh (~kaizh@128-107-239-234.cisco.com) has joined #ceph
[20:48] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[20:49] * sarob (~sarob@2601:9:7080:13a:9cf8:b386:9855:374) Quit (Ping timeout: 480 seconds)
[20:52] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[20:59] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[21:01] * dxd828 (~dxd828@dsl-dynamic-77-44-45-38.interdsl.co.uk) has joined #ceph
[21:02] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[21:07] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[21:09] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[21:18] * nwat (~textual@eduroam-231-66.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:20] * fatih (~fatih@78.186.36.182) has joined #ceph
[21:24] * thomnico (~thomnico@2a01:e35:8b41:120:c533:3678:1051:e7d2) Quit (Quit: Ex-Chat)
[21:28] * fatih_ (~fatih@78.186.36.182) has joined #ceph
[21:32] * JoeGruher (~JoeGruher@134.134.139.70) Quit (Remote host closed the connection)
[21:34] * fatih (~fatih@78.186.36.182) Quit (Ping timeout: 480 seconds)
[21:34] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[21:34] * Cube (~Cube@12.248.40.138) has joined #ceph
[21:43] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[21:46] * garphy`aw is now known as garphy
[21:46] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[21:51] * sarob (~sarob@ip-64-134-225-149.public.wayport.net) has joined #ceph
[21:53] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[21:57] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[21:57] <bitblt> has anyone been able to migrate nova instances using virsh non-shared storage? (eg --copy-storage-all)
[21:59] * sroy (~sroy@207.96.182.162) Quit (Quit: Quitte)
[22:00] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[22:01] <bens> if you get an answer to that you will solve all my problems
[22:01] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[22:04] * richard-gs (~yguo@206.173.10.4.ptr.us.xo.net) Quit (Quit: Leaving.)
[22:07] <bitblt> heh
[22:07] <bitblt> let me know if you get one first
[22:08] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[22:08] * ChanServ sets mode +v andreask
[22:08] <bitblt> i just read the libvirt source on this and it's not promising (based on my limited understanding)
[22:08] <janos> yeah i don't think it's ever worked well
[22:08] <janos> i think i got it to work once about 2 years ago
[22:09] <janos> which is a shame, that would be awesome
[22:09] <bitblt> ouch
[22:09] <janos> the idea is fantastic
[22:09] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[22:09] <bitblt> yeah i totally agree..it just seems like something that would be a given
[22:10] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[22:10] <janos> from my limited understanding it's a pretty complicated and hairy hand off
[22:11] <bitblt> that's too bad. well i guess i'll have to decide between rbd or nfs then
[22:11] <bitblt> and the annoying bit of matching up the nova uid/gid
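For reference, the non-shared-storage live migration bitblt is asking about is normally invoked roughly as below; the domain name and destination host are placeholders:

    # copies the local disk images to the destination while migrating the running guest
    virsh migrate --live --copy-storage-all --verbose instance-0000001a \
        qemu+ssh://dest-compute/system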
[22:13] * JoeGruher (~JoeGruher@134.134.137.71) has joined #ceph
[22:15] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[22:16] * allsystemsarego (~allsystem@188.26.167.156) Quit (Quit: Leaving)
[22:19] <bens> I have a consultant I am working with who says it will be "available soon"
[22:21] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[22:21] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[22:22] * alram_ (~alram@38.122.20.226) has joined #ceph
[22:22] * Meistarin_ (sid19523@id-19523.charlton.irccloud.com) has joined #ceph
[22:22] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has left #ceph
[22:23] * garphy` (~garphy@frank.zone84.net) has joined #ceph
[22:23] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[22:23] * Meistarin_ (sid19523@id-19523.charlton.irccloud.com) Quit ()
[22:23] * Meistarin_ (sid19523@id-19523.charlton.irccloud.com) has joined #ceph
[22:23] <bitblt> bens, the copy storage part?
[22:23] <Cnidus> gday all
[22:24] * finster_ (~finster@cmdline.guru) has joined #ceph
[22:24] * goerk_ (~goerk@ip-176-198-112-20.unitymediagroup.de) has joined #ceph
[22:24] * NaioN_ (stefan@andor.naion.nl) has joined #ceph
[22:24] * kuu_ (~kuu@virtual362.tentacle.fi) has joined #ceph
[22:24] * swizgard (~swizgard@port-87-193-133-18.static.qsc.de) has joined #ceph
[22:24] * r0r_taga_ (~nick@greenback.pod4.org) has joined #ceph
[22:24] * paradon (~thomas@60.234.66.253) has joined #ceph
[22:24] * baffle_ (baffle@jump.stenstad.net) has joined #ceph
[22:24] * joelio_ (~Joel@88.198.107.214) has joined #ceph
[22:24] * nyerup_ (irc@jespernyerup.dk) has joined #ceph
[22:25] <Cnidus> can anyone point me in the direction of documentation etc on what network protocols ceph uses to communicate (intra node, from rbd client to RADOS cluster etc)
[22:25] * ctd_ (~root@00011932.user.oftc.net) has joined #ceph
[22:25] * ido_ (~ido@lolcocks.com) has joined #ceph
[22:25] * dgc_ (~redacted@bikeshed.us) has joined #ceph
[22:25] * Elbandi_ (~ea333@elbandi.net) has joined #ceph
[22:25] * lx0 (~aoliva@lxo.user.oftc.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * alram (~alram@38.122.20.226) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * fdmanana (~fdmanana@bl10-252-34.dsl.telepac.pt) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * Meistarin (sid19523@0001c3c8.user.oftc.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * joao (~joao@a95-92-32-211.cpe.netcabo.pt) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * r0r_taga (~nick@greenback.pod4.org) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * joelio (~Joel@88.198.107.214) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * nyerup (irc@jespernyerup.dk) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * finster (~finster@cmdline.guru) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * dgc (~redacted@bikeshed.us) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * baffle (baffle@jump.stenstad.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * garphy (~garphy@frank.zone84.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * Elbandi (~ea333@elbandi.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * NaioN (stefan@andor.naion.nl) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * swizgard_ (~swizgard@port-87-193-133-18.static.qsc.de) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * paradon_ (~thomas@60.234.66.253) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * kuu (~kuu@virtual362.tentacle.fi) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * houkouonchi-home (~linux@2001:470:c:c69::2) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * ctd (~root@00011932.user.oftc.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * ido (~ido@00014f21.user.oftc.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * raso (~raso@deb-multimedia.org) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * kraken (~kraken@gw.sepia.ceph.com) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * goerk (~goerk@ip-176-198-112-20.unitymediagroup.de) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (reticulum.oftc.net solenoid.oftc.net)
[22:25] <Cnidus> looking into what sort of network topologies are optimal, etc
[22:25] <bens> bitblt: the live migration part
[22:26] <bitblt> bens, right, but you mean without shared storage right?
[22:26] * bdonnahue (~James@24-148-64-18.c3-0.mart-ubr2.chi-mart.il.cable.rcn.com) has left #ceph
[22:26] <janos> that's the important part of this equation
[22:27] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[22:28] * joao (~joao@a95-92-32-211.cpe.netcabo.pt) has joined #ceph
[22:28] * ChanServ sets mode +o joao
[22:28] * raso (~raso@deb-multimedia.org) has joined #ceph
[22:28] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[22:29] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) has joined #ceph
[22:29] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[22:29] * scuttlemonkey (~scuttlemo@wsip-70-184-96-220.ph.ph.cox.net) has joined #ceph
[22:29] * ChanServ sets mode +o scuttlemonkey
[22:29] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:29] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[22:29] * fdmanana (~fdmanana@bl10-252-34.dsl.telepac.pt) has joined #ceph
[22:30] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[22:30] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[22:33] * Meistarin_ is now known as Meistarin
[22:33] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[22:36] * jedanbik (~jedanbik@dhcp152-54-6-141.wireless.europa.renci.org) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[22:38] * rendar (~s@host197-178-dynamic.19-79-r.retail.telecomitalia.it) Quit ()
[22:38] * nwat (~textual@eduroam-241-205.ucsc.edu) has joined #ceph
[22:39] * chutz (~chutz@rygel.linuxfreak.ca) Quit (Quit: Leaving)
[22:40] * dxd828 (~dxd828@dsl-dynamic-77-44-45-38.interdsl.co.uk) Quit (Quit: Textual IRC Client: www.textualapp.com)
[22:42] * fatih_ (~fatih@78.186.36.182) Quit (Quit: Linkinus - http://linkinus.com)
[22:46] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[22:50] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (Quit: neurodrone)
[22:51] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) has joined #ceph
[22:54] * theonceandfuturewarrenusui (~Warren@2607:f298:a:607:c568:4757:5bf1:b42f) has joined #ceph
[22:56] * markednmbr1 (~markednmb@cpc8-lewi13-2-0-cust979.2-4.cable.virginm.net) Quit (Quit: Leaving)
[22:59] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[22:59] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (Quit: neurodrone)
[22:59] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[23:03] * sarob (~sarob@ip-64-134-225-149.public.wayport.net) Quit (Remote host closed the connection)
[23:04] * sarob (~sarob@ip-64-134-225-149.public.wayport.net) has joined #ceph
[23:05] * eternaleye (~eternaley@50.245.141.73) Quit (Ping timeout: 480 seconds)
[23:07] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[23:07] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[23:07] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[23:12] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: Leaving.)
[23:12] * sarob (~sarob@ip-64-134-225-149.public.wayport.net) Quit (Ping timeout: 480 seconds)
[23:12] * bitblt (~don@128-107-239-235.cisco.com) Quit (Ping timeout: 480 seconds)
[23:12] * Cnidus (~cnidus@24.130.34.71) Quit (Quit: Leaving.)
[23:15] * nwat (~textual@eduroam-241-205.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:17] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[23:20] * markbby (~Adium@168.94.245.3) has joined #ceph
[23:21] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[23:22] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[23:23] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[23:24] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:28] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[23:32] * BillK (~BillK-OFT@106-69-56-113.dyn.iinet.net.au) has joined #ceph
[23:33] * geraintjones (~geraint@222-152-77-45.jetstream.xtra.co.nz) Quit (Quit: geraintjones)
[23:37] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[23:39] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[23:39] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[23:40] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[23:40] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[23:40] * Cnidus (~cnidus@24.130.34.71) has joined #ceph
[23:40] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[23:42] * garphy` is now known as garphy``aw
[23:46] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:48] * markbby1 (~Adium@168.94.245.4) has joined #ceph
[23:51] * markbby2 (~Adium@168.94.245.3) has joined #ceph
[23:51] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[23:51] * markbby1 (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[23:52] * awaay_ (~ircap@90.174.0.218) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.