#ceph IRC Log

IRC Log for 2012-12-29

Timestamps are in GMT/BST.

[0:04] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[0:06] * drokita (~drokita@199.255.228.10) Quit (Read error: Connection reset by peer)
[0:07] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Remote host closed the connection)
[0:08] <joshd1> jefferai: it'll be rebuilt tonight probably, the builder is working pretty hard
[0:10] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[0:14] * The_Bishop_ (~bishop@f052101057.adsl.alicedsl.de) has joined #ceph
[0:18] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[0:21] * The_Bishop (~bishop@e179005132.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[0:24] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:32] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:32] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[0:46] * themgt (~themgt@24-177-233-102.dhcp.gnvl.sc.charter.com) has joined #ceph
[0:47] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:51] <themgt> is there a way to move an OSD to a different host? 'ceph osd tree' somehow shows an OSD mapped to the wrong host (the correct host is still specified in the config file)
[0:53] <joshd1> ceph osd crush move
[1:01] <themgt> cool thx. actually used crush set, but looks like it's working
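
A minimal sketch of the commands being discussed, assuming a hypothetical osd.3 that belongs under host node2; the exact syntax varied across early ceph releases, so check ceph osd crush --help for your version:

    # re-place the item at an explicit CRUSH location, restating its weight
    ceph osd crush set osd.3 1.0 root=default host=node2
    # or relocate it without touching the weight
    ceph osd crush move osd.3 host=node2
    # verify the new placement
    ceph osd tree
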
[1:21] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (resistance.oftc.net synthon.oftc.net)
[1:21] * denken (~denken@dione.pixelchaos.net) Quit (resistance.oftc.net synthon.oftc.net)
[1:21] * Kioob (~kioob@luuna.daevel.fr) Quit (resistance.oftc.net synthon.oftc.net)
[1:21] * jochen (~jochen@laevar.de) Quit (resistance.oftc.net synthon.oftc.net)
[1:21] * michaeltchapman (~mxc900@150.203.248.116) Quit (resistance.oftc.net synthon.oftc.net)
[1:21] * wonko_be_ (bernard@november.openminds.be) Quit (resistance.oftc.net synthon.oftc.net)
[1:21] * l3akage (~l3akage@martinpoppen.de) Quit (resistance.oftc.net synthon.oftc.net)
[1:21] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (resistance.oftc.net synthon.oftc.net)
[1:21] * paravoid (~paravoid@scrooge.tty.gr) Quit (resistance.oftc.net synthon.oftc.net)
[1:21] * Lennie`away (~leen@lennie-1-pt.tunnel.tserv11.ams1.ipv6.he.net) Quit (resistance.oftc.net synthon.oftc.net)
[1:21] * Anticimex (anticimex@netforce.csbnet.se) Quit (resistance.oftc.net synthon.oftc.net)
[1:22] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[1:22] * denken (~denken@dione.pixelchaos.net) has joined #ceph
[1:22] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[1:22] * jochen (~jochen@laevar.de) has joined #ceph
[1:22] * michaeltchapman (~mxc900@150.203.248.116) has joined #ceph
[1:22] * wonko_be_ (bernard@november.openminds.be) has joined #ceph
[1:22] * Anticimex (anticimex@netforce.csbnet.se) has joined #ceph
[1:22] * paravoid (~paravoid@scrooge.tty.gr) has joined #ceph
[1:22] * Lennie`away (~leen@lennie-1-pt.tunnel.tserv11.ams1.ipv6.he.net) has joined #ceph
[1:22] * l3akage (~l3akage@martinpoppen.de) has joined #ceph
[1:22] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[1:24] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (resistance.oftc.net synthon.oftc.net)
[1:24] * l3akage (~l3akage@martinpoppen.de) Quit (resistance.oftc.net synthon.oftc.net)
[1:24] * wonko_be_ (bernard@november.openminds.be) Quit (resistance.oftc.net synthon.oftc.net)
[1:24] * michaeltchapman (~mxc900@150.203.248.116) Quit (resistance.oftc.net synthon.oftc.net)
[1:24] * jochen (~jochen@laevar.de) Quit (resistance.oftc.net synthon.oftc.net)
[1:24] * Kioob (~kioob@luuna.daevel.fr) Quit (resistance.oftc.net synthon.oftc.net)
[1:24] * denken (~denken@dione.pixelchaos.net) Quit (resistance.oftc.net synthon.oftc.net)
[1:24] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (resistance.oftc.net synthon.oftc.net)
[1:24] * paravoid (~paravoid@scrooge.tty.gr) Quit (resistance.oftc.net synthon.oftc.net)
[1:24] * Lennie`away (~leen@lennie-1-pt.tunnel.tserv11.ams1.ipv6.he.net) Quit (resistance.oftc.net synthon.oftc.net)
[1:24] * Anticimex (anticimex@netforce.csbnet.se) Quit (resistance.oftc.net synthon.oftc.net)
[1:24] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[1:24] * denken (~denken@dione.pixelchaos.net) has joined #ceph
[1:24] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[1:24] * jochen (~jochen@laevar.de) has joined #ceph
[1:24] * michaeltchapman (~mxc900@150.203.248.116) has joined #ceph
[1:24] * wonko_be_ (bernard@november.openminds.be) has joined #ceph
[1:24] * Anticimex (anticimex@netforce.csbnet.se) has joined #ceph
[1:24] * paravoid (~paravoid@scrooge.tty.gr) has joined #ceph
[1:24] * Lennie`away (~leen@lennie-1-pt.tunnel.tserv11.ams1.ipv6.he.net) has joined #ceph
[1:24] * l3akage (~l3akage@martinpoppen.de) has joined #ceph
[1:24] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[1:24] * themgt (~themgt@24-177-233-102.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[1:24] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has left #ceph
[1:27] * dmick (~dmick@2607:f298:a:607:1530:43d8:4550:1cb4) Quit (Quit: Leaving.)
[1:34] * dmick (~dmick@2607:f298:a:607:201e:e502:9174:ab93) has joined #ceph
[2:07] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[2:28] * sagelap (~sage@27.sub-70-197-131.myvzw.com) has joined #ceph
[2:29] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:39] * sagelap1 (~sage@76.89.177.113) has joined #ceph
[2:45] * sagelap (~sage@27.sub-70-197-131.myvzw.com) Quit (Ping timeout: 480 seconds)
[2:48] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[3:02] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:02] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[3:04] * Ryan_Lane (~Adium@216.38.130.165) Quit (Quit: Leaving.)
[3:14] * nwl_ (~levine@atticus.yoyo.org) has joined #ceph
[3:14] * nwl (~levine@atticus.yoyo.org) Quit (Read error: Connection reset by peer)
[4:09] * Aiken_ is now known as Aiken
[4:33] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[4:42] * BManojlovic (~steki@243-166-222-85.adsl.verat.net) Quit (Ping timeout: 480 seconds)
[4:55] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) has joined #ceph
[4:55] * nwl_ (~levine@atticus.yoyo.org) Quit (Ping timeout: 480 seconds)
[5:05] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[5:05] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[5:34] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) Quit (Ping timeout: 480 seconds)
[5:35] * gaveen (~gaveen@112.135.13.242) has joined #ceph
[5:39] <themgt> I'm a bit confused, I see: "HEALTH_WARN 1 near full osd(s)" | "73034 MB data, 155 GB used, 255 GB / 432 GB avail"
[5:46] <lxo> I'm looking into why getdents often fails with -ENOMEM on very large dirs using the ceph kernel module. it's the kcalloc in parse_reply_info_dir that fails. any ideas for backup plans at that point, for when the allocation attempt fails?
[5:48] <themgt> hmm, "ceph osd reweight-by-utilization" seems promising
[5:51] <lxo> themgt, what's the available disk space on each of your OSDs? surely one of them is running low (like more than 85% use). failing that, there's a bug somewhere
[5:57] * Cube (~Cube@c-38-80-203-117.rw.zetabroadband.com) Quit (Quit: Leaving.)
[5:58] <themgt> lxo: osd.0 has a much smaller drive than the others, so it's nearing full. it looks like the reweight may be working to move data off of it. I was under the assumption ceph would just sort of automatically use other non-full OSDs if just one was filling up?
[5:59] <lxo> no, it follows the crush function blindly. that's where the per-disk weights can be set
[6:01] <janos> when setting weights - is that as simple as getting disks proportional to each other? like a 500GB disk set to 1 and a 1TB disk set to 2?
[6:02] <janos> it might be nice in the future if, when a new cluster is being created, it made those determinations for the crush map
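
A sketch of the convention janos is asking about: crush weights are relative, so weighting each disk proportionally to its capacity (commonly 1.0 per TB) works; the osd names and sizes below are hypothetical:

    # weight disks in proportion to capacity, here 1.0 per TB
    ceph osd crush reweight osd.0 0.5   # 500 GB disk
    ceph osd crush reweight osd.1 1.0   # 1 TB disk
    ceph osd crush reweight osd.2 2.0   # 2 TB disk
    # or, as themgt tried above, let ceph nudge weights from observed usage
    ceph osd reweight-by-utilization
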
[6:04] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) Quit (Remote host closed the connection)
[6:08] * Cube (~Cube@c-38-80-203-117.rw.zetabroadband.com) has joined #ceph
[6:10] <iggy> lxo: if i read the docs right, just don't return 0 (or negative) and whatever is calling it _should_ retry
[6:12] <lxo> iggy, err... what are you reading?
[6:14] <lxo> problem is, this is a GFP_NOFS alloc request, which means the memory system can't even flush pages. I believe it's not safe to change that to some laxer GFP type because we're holding locks for this one filesystem msg subsystem, so we'd rather not deadlock trying to send messages to flush pages while we're holding those locks
[6:15] <lxo> now, we *might* be lucky that a retry (that kmalloc itself should have already done) will succeed, especially if we allow it to block, but I'm not exactly hopeful
[6:16] <lxo> I'm thinking it would make more sense to pre-parse the reply at that point just to check that it's correct and then parse it properly after we release the locks, *or* return -EAGAIN to a caller that can then perform the allocation without holding locks and then try again
[6:19] <lxo> yet another possibility is to try to allocate multiple smaller chunks of memory
[6:20] <lxo> but I'm not sure the kernel would run into this sort of memory fragmentation problem, since it can remap pages and stuff
[6:24] <iggy> i was looking at the kernel man page return values section
[6:26] <iggy> but looking again, i may have been misreading
[6:26] <iggy> i was reading it like the caller is supposed to keep trying until it gets 0 (end of dir)
[6:27] <iggy> but yeah, you may not be able to allocate any mem which would return 0
[6:27] <iggy> without blocking like you said
[6:28] * Cube (~Cube@c-38-80-203-117.rw.zetabroadband.com) Quit (Quit: Leaving.)
[6:43] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[7:12] * gaveen (~gaveen@112.135.13.242) Quit (Ping timeout: 480 seconds)
[7:21] * gaveen (~gaveen@112.135.16.116) has joined #ceph
[7:26] <themgt> lxo: thanks. after re-weighting and letting it work itself out, everything looks good
[7:28] <lxo> iggy, what man page are you speaking of? the problem I'm getting at is getting memory to parse an mds response
[7:30] * SkyEye (~gaveen@112.135.9.53) has joined #ceph
[7:33] * gaveen_ (~gaveen@112.135.37.201) has joined #ceph
[7:36] * gaveen (~gaveen@112.135.16.116) Quit (Ping timeout: 480 seconds)
[7:40] * SkyEye (~gaveen@112.135.9.53) Quit (Ping timeout: 480 seconds)
[7:52] * SkyEye (~gaveen@112.135.0.60) has joined #ceph
[7:54] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[7:58] * gaveen_ (~gaveen@112.135.37.201) Quit (Ping timeout: 480 seconds)
[7:58] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:00] * SkyEye (~gaveen@112.135.0.60) Quit (Remote host closed the connection)
[8:17] * Cube (~Cube@c-38-80-203-117.rw.zetabroadband.com) has joined #ceph
[8:20] * dmick (~dmick@2607:f298:a:607:201e:e502:9174:ab93) Quit (Quit: Leaving.)
[8:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:34] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:15] * loicd (~loic@magenta.dachary.org) has joined #ceph
[9:18] * NightDog (~Karl@ti0131a340-dhcp0997.bb.online.no) has joined #ceph
[9:25] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[9:41] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[9:48] * renzhi (~renzhi@116.226.64.176) Quit (Quit: Leaving)
[10:08] <iggy> lxo: i was reading the getdents kernel man page... i thought you were referring to getting the info back to the user
[10:21] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:22] <lxo> so, I changed the code around kcalloc in parse_reply_info_dir to show how many entries it needs in case of failure, and to retry once. that showed it fails when asking for memory for 700+, sometimes 1000+ entries
[10:23] <lxo> I put in another hard retry with GFP_KERNEL|__GFP_NOFAIL and that has worked fine so far, but I gather that's not acceptable, because NOFAIL is deprecated and GFP_KERNEL might deadlock
[10:23] * joshd1 (~jdurgin@2602:306:c5db:310:3da4:6b57:1f57:d0ff) Quit (Quit: Leaving.)
[10:24] <lxo> so I'm exploring other possibilities, such as turning the struct used to hold readdir results in ceph_mds_reply_info_parsed into a linked list of such structs, so that we can alloc smaller chunks but still hold all the info together
[10:25] <lxo> the OOM dumps indicate the problem is the lack of large contiguous mem, and that allocating smaller pieces would work
[10:25] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[10:26] <lxo> of course if even that doesn't work, we have to fail, but perhaps we should somehow avoid exposing the failure to the user, such as retrying the readdir request from the mds asking for fewer entries or somesuch
[10:27] <lxo> anyway, it is surely the case that, if we fail due to a failure to allocate memory, we ought to advance *p to end, so that we don't generate huge errors about the unparsed dir fragments received from the mds
[10:32] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[10:34] <lxo> sage, any thoughts/comments on the above?
[10:55] <lxo> or should I just mount with a lower max_readdir/max_readdir_bytes?
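
A minimal C sketch of the chunked-allocation idea lxo outlines above, not the actual ceph kernel code: replace the single large kcalloc() with a chain of small fixed-size chunks, so only page-sized contiguous allocations are ever needed. The chunk struct, entry struct, and function name here are assumptions for illustration:

    #include <linux/kernel.h>   /* DIV_ROUND_UP */
    #include <linux/slab.h>     /* kzalloc, kfree, gfp_t */
    #include <linux/types.h>    /* u32 */

    /* hypothetical placeholder for ceph's per-dirent parse results */
    struct dir_ent {
            const char *name;
            u32 name_len;
    };

    /* roughly a page worth of entries per chunk; tune as needed */
    #define DENTS_PER_CHUNK 64

    struct dir_chunk {
            struct dir_chunk *next;
            struct dir_ent ents[DENTS_PER_CHUNK];
    };

    /* allocate ceil(n / DENTS_PER_CHUNK) small chunks instead of one big array */
    static struct dir_chunk *alloc_dir_chunks(int n, gfp_t gfp)
    {
            struct dir_chunk *head = NULL, **tail = &head;
            int chunks = DIV_ROUND_UP(n, DENTS_PER_CHUNK);

            while (chunks--) {
                    struct dir_chunk *c = kzalloc(sizeof(*c), gfp);

                    if (!c) {
                            /* unwind; the caller still sees ENOMEM, but now
                             * only if even these small allocations fail */
                            while (head) {
                                    struct dir_chunk *dead = head;

                                    head = head->next;
                                    kfree(dead);
                            }
                            return NULL;
                    }
                    *tail = c;
                    tail = &c->next;
            }
            return head;
    }

The parser would then fill and walk this chain instead of indexing one large array, which matches what the OOM dumps point at: the failures come from large contiguous requests, not from memory being exhausted outright.
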
[11:26] <Kioob> Hi
[11:27] <Kioob> In the docs I can see that RBD images are thin-provisioned, great. But where can I see the real usage of each image? “rbd info” only gives the full size, no?
[11:40] * The_Bishop_ (~bishop@f052101057.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[11:50] * ScOut3R (~ScOut3R@catv-86-101-215-1.catv.broadband.hu) has joined #ceph
[11:53] * BManojlovic (~steki@243-166-222-85.adsl.verat.net) has joined #ceph
[12:14] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) has joined #ceph
[12:22] * jtangwk (~Adium@2001:770:10:500:1cae:c4c8:f588:7d89) has joined #ceph
[12:23] * jtangwk1 (~Adium@2001:770:10:500:91de:c49e:c91:43e3) Quit (Read error: Operation timed out)
[12:30] * The_Bishop (~bishop@2001:470:50b6:0:6c4a:8c56:e402:6154) has joined #ceph
[13:56] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[14:22] * noob2 (~noob2@pool-71-244-111-36.phlapa.fios.verizon.net) has joined #ceph
[14:25] * noob21 (~noob2@ext.cscinfo.com) has joined #ceph
[14:25] * noob21 (~noob2@ext.cscinfo.com) has left #ceph
[14:30] * noob2 (~noob2@pool-71-244-111-36.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:31] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[14:34] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) has joined #ceph
[14:45] * ScOut3R (~ScOut3R@catv-86-101-215-1.catv.broadband.hu) Quit (Remote host closed the connection)
[14:47] * loicd (~loic@78.250.247.237) has joined #ceph
[14:57] * mgalkiewicz (~mgalkiewi@toya.hederanetworks.net) has left #ceph
[15:04] * nwl (~levine@atticus.yoyo.org) Quit (Ping timeout: 480 seconds)
[15:14] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[15:20] * loicd (~loic@78.250.247.237) Quit (Ping timeout: 480 seconds)
[15:21] * KindOne (~KindOne@h138.181.130.174.dynamic.ip.windstream.net) Quit (Ping timeout: 480 seconds)
[15:22] * KindOne (~KindOne@h53.49.186.173.dynamic.ip.windstream.net) has joined #ceph
[15:22] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[15:32] * loicd (~loic@78.250.247.237) has joined #ceph
[15:38] * nwl (~levine@atticus.yoyo.org) Quit (Ping timeout: 480 seconds)
[15:48] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[16:00] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[16:11] * ron-slc_ (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[16:15] * loicd (~loic@78.250.247.237) Quit (Ping timeout: 480 seconds)
[16:32] * astalsi (~astalsi@c-69-255-38-71.hsd1.md.comcast.net) has joined #ceph
[16:36] <Kioob> ouch....
[16:37] <Kioob> I have added 8 OSDs to my cluster (total of 24 OSDs), and reweighted them to 0.2
[16:37] <Kioob> It was not a good idea...
[16:37] <Vjarjadian> what happened?
[16:38] <stxShadow> slow down i think :)
[16:38] <Kioob> with 8% degraded, all the cluster is now very very slow
[16:38] <Vjarjadian> i thought you're supposed to add them at 0 and then move them up in 0.2 increments
[16:38] <Vjarjadian> it'll speed up once it rebalances
[16:38] <Kioob> yes, but for now, production is down :D
[16:38] <stxShadow> yes ... same here
[16:39] <stxShadow> we only increment by 0.01 -> scripted at night
[16:39] <Kioob> yeah... much better idea
[16:39] <Vjarjadian> maybe adding 8 OSDs at a time was too much
[16:39] <Kioob> of course Vjarjadian :)
[16:39] <Vjarjadian> especially when that's 1/3 of your cluster
[16:40] <Kioob> and... I think it's tunable from bobtail
[16:40] <Kioob> but I can't find that in the docs again
[16:40] * loicd (~loic@78.250.242.109) has joined #ceph
[16:41] <Kioob> maybe it works with 0.55
[16:41] <Vjarjadian> were all 8 OSDs on one host?
[16:41] <Kioob> yes
[16:41] <Vjarjadian> well, probably rebalancing at 100MB a sec... if you have gigabit... hopefully you didn't install 4TB drives :)
[16:42] <Kioob> I have 10Gbps network
[16:42] <Vjarjadian> nice
[16:42] <Kioob> but still not enough :D
[16:42] <stxShadow> network is normally not the problem
[16:43] <stxShadow> we have 10 GE too
[16:43] <stxShadow> and we use max 2.5 Gbit
[16:43] <stxShadow> if we rebalance
[16:44] <stxShadow> ok .... we've got only 4 osd per node
[16:44] <stxShadow> each 2 tb
[16:44] <Kioob> (I use 1TB drive)
[16:44] <stxShadow> ok ... the same amount of space then ... :)
[16:49] <Kioob> the parameter “osd max backfills” can help, no?
[16:50] <Vjarjadian> kioob, you'll have to let me know if it becomes unresponsive every time you bump it up 0.2...
[16:51] <Kioob> Vjarjadian: I think I will not try that anymore...
[16:53] <Kioob> http://ceph.com/dev-notes/whats-new-in-the-land-of-osd/ <== I was reading that, and I can see : “osd max backfills” defines a limit on how many PGs are allowed to recover to or from a single OSD at any one time
[16:53] <Kioob> it seems to be a good idea
[16:56] * loicd (~loic@78.250.242.109) Quit (Ping timeout: 480 seconds)
[16:58] <Kioob> ok, osd_max_backfills is already in v0.55
[16:59] <Kioob> ... with a value of 5. It's not a lot.
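
A sketch of the gentler procedure stxShadow and Kioob converge on, assuming a hypothetical new osd.16; the injectargs spelling varied across early releases, and “osd max backfills” can equally be set in the [osd] section of ceph.conf:

    # cap concurrent backfills per OSD at runtime (also settable in ceph.conf)
    ceph tell osd.\* injectargs '--osd-max-backfills 2'
    # raise the new OSD's crush weight in small steps, letting each one settle
    for w in $(seq 0.05 0.05 1.00); do
        ceph osd crush reweight osd.16 "$w"
        sleep 3600    # or run one step per night from cron, as stxShadow does
    done
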
[17:07] * loicd (~loic@78.250.242.109) has joined #ceph
[17:22] * loicd (~loic@78.250.242.109) Quit (Ping timeout: 480 seconds)
[17:25] * stxShadow1 (~Jens@jump.filoo.de) has joined #ceph
[17:25] * ScOut3R (~ScOut3R@1F2E59B1.dsl.pool.telekom.hu) has joined #ceph
[17:26] * stxShadow1 (~Jens@jump.filoo.de) has left #ceph
[17:29] * stxShadow (~Jens@ip-178-201-147-146.unitymediagroup.de) Quit (Ping timeout: 480 seconds)
[17:29] * ScOut3R (~ScOut3R@1F2E59B1.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[17:32] * scuttlemonkey (~scuttlemo@96-42-146-5.dhcp.trcy.mi.charter.com) has joined #ceph
[17:32] * ChanServ sets mode +o scuttlemonkey
[17:32] * loicd (~loic@78.250.255.71) has joined #ceph
[17:40] * loicd (~loic@78.250.255.71) Quit (Ping timeout: 480 seconds)
[17:51] * scuttlemonkey (~scuttlemo@96-42-146-5.dhcp.trcy.mi.charter.com) Quit (Quit: This computer has gone to sleep)
[17:51] * loicd (~loic@78.250.255.71) has joined #ceph
[17:57] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[17:57] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) has joined #ceph
[17:58] * scuttlemonkey (~scuttlemo@96-42-146-5.dhcp.trcy.mi.charter.com) has joined #ceph
[17:58] * ChanServ sets mode +o scuttlemonkey
[18:00] * scuttlemonkey (~scuttlemo@96-42-146-5.dhcp.trcy.mi.charter.com) Quit ()
[18:08] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[18:28] * ron-slc_ (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[18:29] * ron-slc_ (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[18:29] * loicd (~loic@78.250.255.71) Quit (Ping timeout: 480 seconds)
[18:48] * BManojlovic (~steki@243-166-222-85.adsl.verat.net) Quit (Ping timeout: 480 seconds)
[18:49] * BManojlovic (~steki@24-172-222-85.adsl.verat.net) has joined #ceph
[18:52] * roald (~Roald@87.209.150.214) has joined #ceph
[19:01] * loicd (~loic@2a01:e35:8aa2:fa50:82c:6216:f50d:ce01) has joined #ceph
[19:07] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[19:13] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[19:14] * ron-slc_ (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[19:17] * glowell1 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[19:19] * glowell (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[19:56] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Quit: slang)
[19:58] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[20:00] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[20:21] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Quit: slang)
[20:28] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[20:39] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Quit: slang)
[20:47] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[20:54] * Etherael1 (~eric@node-37t.pool-125-24.dynamic.totbb.net) Quit (Ping timeout: 480 seconds)
[21:15] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Quit: slang)
[21:34] * NightDog (~Karl@ti0131a340-dhcp0997.bb.online.no) Quit (Quit: This computer has gone to sleep)
[21:35] * NightDog (~Karl@ti0131a340-dhcp0997.bb.online.no) has joined #ceph
[21:37] * BManojlovic (~steki@24-172-222-85.adsl.verat.net) Quit (Quit: Ja odoh a vi sta 'ocete...)
[22:03] * Etherael (~eric@node-4si.pool-125-24.dynamic.totbb.net) has joined #ceph
[22:04] * NightDog (~Karl@ti0131a340-dhcp0997.bb.online.no) Quit (Quit: This computer has gone to sleep)
[22:06] * NightDog (~Karl@ti0131a340-dhcp0997.bb.online.no) has joined #ceph
[22:10] * nwl_ (~levine@atticus.yoyo.org) has joined #ceph
[22:10] * nwl (~levine@atticus.yoyo.org) Quit (Read error: Connection reset by peer)
[22:26] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[22:26] * NightDog (~Karl@ti0131a340-dhcp0997.bb.online.no) Quit (Quit: This computer has gone to sleep)
[22:27] * loicd (~loic@2a01:e35:8aa2:fa50:82c:6216:f50d:ce01) Quit (Quit: Leaving.)
[22:27] * loicd (~loic@2a01:e35:8aa2:fa50:82c:6216:f50d:ce01) has joined #ceph
[22:31] * NightDog (~Karl@ti0131a340-dhcp0997.bb.online.no) has joined #ceph
[22:32] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[22:34] * The_Bishop (~bishop@2001:470:50b6:0:6c4a:8c56:e402:6154) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[22:35] * nwl_ (~levine@atticus.yoyo.org) Quit (Ping timeout: 480 seconds)
[22:39] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[22:42] * The_Bishop (~bishop@2001:470:50b6:0:c965:2a01:9176:a308) has joined #ceph
[22:49] * ScOut3R (~ScOut3R@catv-188-142-165-159.catv.broadband.hu) has joined #ceph
[22:58] * ScOut3R (~ScOut3R@catv-188-142-165-159.catv.broadband.hu) Quit (Remote host closed the connection)
[23:01] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[23:05] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[23:05] * madkiss1 (~madkiss@178.188.60.118) has joined #ceph
[23:05] * madkiss (~madkiss@178.188.60.118) Quit (Read error: Connection reset by peer)
[23:06] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[23:15] * madkiss1 (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[23:16] * dxd828 (~dxd828@host217-43-125-241.range217-43.btcentralplus.com) has joined #ceph
[23:19] * nwl (~levine@atticus.yoyo.org) Quit (Ping timeout: 480 seconds)
[23:29] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[23:43] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has left #ceph
[23:43] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.