#ceph IRC Log


IRC Log for 2012-10-28

Timestamps are in GMT/BST.

[0:03] * nhmlap (~nhm@184-97-251-146.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[0:17] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:35] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) has joined #ceph
[0:39] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[0:57] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) Quit (Remote host closed the connection)
[0:58] * stass (stas@ssh.deglitch.com) has joined #ceph
[1:06] * danieagle (~Daniel@177.97.248.22) Quit (Quit: See you :-) and Thanks A Lot For Everything!!! ^^)
[1:20] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[1:32] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[1:36] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[1:36] * glowell1 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[1:43] * Kioob (~kioob@luuna.daevel.fr) Quit (Ping timeout: 480 seconds)
[1:49] * glowell1 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[2:11] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) has left #ceph
[2:15] * nhmlap (~nhm@184-97-251-146.mpls.qwest.net) has joined #ceph
[2:20] * LarsFronius (~LarsFroni@95-91-242-157-dynip.superkabel.de) Quit (Quit: LarsFronius)
[2:37] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) Quit (Quit: Leseb)
[2:46] * lofejndif (~lsqavnbok@28IAAIO4Y.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[2:58] * jeffhung_ (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[3:22] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[3:25] * scalability-junk (~stp@188-193-208-44-dynip.superkabel.de) has joined #ceph
[3:31] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:40] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:51] * scalability-junk (~stp@188-193-208-44-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[3:53] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[3:53] * scalability-junk (~stp@188-193-208-44-dynip.superkabel.de) has joined #ceph
[3:56] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[4:05] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[4:12] * nhmlap (~nhm@184-97-251-146.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[4:14] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[4:31] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (Read error: Connection reset by peer)
[4:34] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[4:53] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (Read error: Connection reset by peer)
[4:53] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[4:57] * loicd (~loic@magenta.dachary.org) has joined #ceph
[5:09] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[5:12] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[5:12] * Cube1 (~Cube@12.248.40.138) Quit ()
[5:16] * rweeks (~rweeks@12.25.190.226) has joined #ceph
[5:24] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[5:48] * rweeks (~rweeks@12.25.190.226) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[6:20] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[6:22] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[6:32] * scalability-junk (~stp@188-193-208-44-dynip.superkabel.de) Quit (Quit: Leaving)
[6:54] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[6:54] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:05] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:08] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[7:09] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit ()
[8:43] * iltisanni (d4d3c928@ircip1.mibbit.com) Quit (Ping timeout: 480 seconds)
[9:57] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:cd88:a4d6:9edb:d629) has joined #ceph
[10:04] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[10:09] * madkiss1 (~madkiss@178.188.60.118) has joined #ceph
[10:09] * madkiss (~madkiss@178.188.60.118) Quit (Read error: Connection reset by peer)
[10:17] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[10:20] * Kioob (~kioob@luuna.daevel.fr) Quit ()
[10:26] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[10:53] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) has joined #ceph
[10:58] * loicd (~loic@magenta.dachary.org) has joined #ceph
[11:10] * jakku (~jakku@ac232088.dynamic.ppp.asahi-net.or.jp) Quit (Remote host closed the connection)
[11:24] * long (~chatzilla@118.186.58.111) has joined #ceph
[11:28] * Qten (Q@qten.qnet.net.au) has joined #ceph
[12:01] * long (~chatzilla@118.186.58.111) Quit (autokilled: Off (2012-10-28 11:01:27))
[12:01] * AaronSchulz (~chatzilla@216.38.130.166) Quit (autokilled: Off (2012-10-28 11:01:27))
[12:03] * long (~chatzilla@118.186.58.111) has joined #ceph
[12:04] * AaronSchulz (~chatzilla@216.38.130.166) has joined #ceph
[12:18] * LarsFronius_ (~LarsFroni@95-91-242-160-dynip.superkabel.de) has joined #ceph
[12:23] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:cd88:a4d6:9edb:d629) Quit (Ping timeout: 480 seconds)
[12:23] * LarsFronius_ is now known as LarsFronius
[12:29] * MissDee (~dee@jane.earlsoft.co.uk) has joined #ceph
[12:29] <MissDee> hi all
[12:30] <MissDee> can a cephfs be mounted from multiple locations?
[12:39] <Robe> that's the whole idea behind it ;)
[12:40] <MissDee> ok, just checking
[12:40] <MissDee> I couldn't see anywhere that explicitly said either way
[12:41] <MissDee> I wasn't sure if it was just distributed or shared too
[12:41] <Robe> mind you, I'm not running any prod sites myself
[12:41] <Robe> yeah, those are the details that get lost over the years ;)
[12:41] <MissDee> I'm not sure how I'd use it though
[12:41] <MissDee> at work, we're mainly a windows house
[12:42] <MissDee> but my personal servers are linux
[12:42] <Robe> gotta run, ttyl
[12:42] <MissDee> ok, thanks
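In answer to the question above about mounting CephFS from several places: a minimal sketch of mounting the same filesystem on two or more clients, assuming a monitor at 10.0.0.1:6789 and an admin secret file at /etc/ceph/admin.secret (both are placeholders, not taken from this log):

    # kernel client, run on each machine that should see the shared filesystem
    mount -t ceph 10.0.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
    # or the FUSE client, which works the same way on additional hosts
    ceph-fuse -m 10.0.0.1:6789 /mnt/ceph

Every client sees the same namespace; coherence between clients is coordinated by the MDS.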
[13:07] * madkiss1 (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[13:27] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:34] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:36] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) has joined #ceph
[13:40] <joao> MissDee, I believe there's work on cifs / samba
[13:54] * Q310 (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[13:55] * ninkotech_ (~duplo@89.177.137.231) has joined #ceph
[14:34] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[14:34] <madkiss> Robe: :P
[14:40] * long (~chatzilla@118.186.58.111) Quit (Quit: ChatZilla 0.9.89 [Firefox 16.0.2/20121024073032])
[14:54] * deepsa_ (~deepsa@122.172.159.224) has joined #ceph
[14:55] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[14:57] * deepsa (~deepsa@122.172.33.114) Quit (Ping timeout: 480 seconds)
[14:57] * deepsa_ is now known as deepsa
[15:20] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[15:20] * loicd (~loic@magenta.dachary.org) has joined #ceph
[15:26] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[15:34] * danieagle (~Daniel@186.214.92.172) has joined #ceph
[15:46] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: This computer has gone to sleep)
[15:49] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[16:01] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[16:01] * loicd (~loic@2a01:e35:2eba:db10:120b:a9ff:feb7:cce0) has joined #ceph
[16:03] * nhmlap (~nhm@184-97-251-146.mpls.qwest.net) has joined #ceph
[16:08] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Remote host closed the connection)
[16:13] * Q310 (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Ping timeout: 480 seconds)
[16:20] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) Quit (Quit: Leseb)
[16:21] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[16:30] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph
[16:40] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) has joined #ceph
[16:41] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) Quit ()
[16:41] <nwl> z
[16:42] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[16:43] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[16:43] * madkiss (~madkiss@178.188.60.118) Quit ()
[16:50] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: This computer has gone to sleep)
[17:07] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[18:00] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[18:03] * loicd (~loic@2a01:e35:2eba:db10:120b:a9ff:feb7:cce0) Quit (Quit: Leaving.)
[18:13] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: This computer has gone to sleep)
[18:30] * loicd (~loic@magenta.dachary.org) has joined #ceph
[18:44] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[19:00] * stxShadow (~Jens@ip-178-203-169-190.unitymediagroup.de) has joined #ceph
[19:02] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[19:14] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[19:14] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: This computer has gone to sleep)
[19:21] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[19:30] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:32] * mgalkiewicz (~mgalkiewi@staticline-31-183-94-25.toya.net.pl) has joined #ceph
[19:35] <mgalkiewicz> any support today?
[19:41] * loicd (~loic@90.84.144.118) has joined #ceph
[19:42] * justinwarner (~ceg442049@osis111.cs.wright.edu) has joined #ceph
[19:42] <mikeryan> mgalkiewicz: what can i do for you?
[19:43] <mikeryan> i can help with general issues; problems outside my expertise will have to wait until business hours tomorrow
[19:43] <justinwarner> mikeryan: I have a simple question (I think).
[19:44] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:44] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[19:44] * Leseb_ is now known as Leseb
[19:44] <justinwarner> After doing a "service ceph -a start", I get a couple of lines from each daemon (mon, mds, osd; only using one machine currently) saying "starting ..." with no real errors (it says it failed to read /dev/sr0, but afterwards says it started osd.0 anyway). Does that mean it's working?
[19:45] <mikeryan> justinwarner: are you just running a single OSD on one machine?
[19:45] <justinwarner> Yes sir.
[19:46] <mikeryan> hm, you can check to see if the process is running
[19:46] <mikeryan> ps aux | grep ceph-osd
[19:46] <justinwarner> It returns 2 processes running.
[19:46] <justinwarner> One from the /usr/bin/ceph-osd
[19:47] <mikeryan> good start, let's check the PG status, with this command:
[19:47] <mikeryan> ceph pg dump
[19:48] <justinwarner> A good bit came out, table format. State column says active+degraded (For all)
[19:49] <mikeryan> hm, i think that's normal with a single OSD
[19:49] <justinwarner> Also has some totals at the bottom, kbused kbavail, these show approximate amounts of the partition I gave it to use.
[19:49] <mikeryan> by default the PGs are created with a replication of > 1, which means your PGs want to have a primary OSD and at least one replica
[19:49] <justinwarner> Replica?
[19:49] <mikeryan> yes, kb* is more or less the output of df
[19:49] <justinwarner> Gotya.
[19:49] <mikeryan> a replica is an OSD that serves sort of as a backup of the primary OSD
[19:50] <mikeryan> it's how ceph provides robustness against failures
[19:50] <justinwarner> Ah
[19:50] <justinwarner> To do this you need multiple OSDs?
[19:50] <justinwarner> I'm guessing*
[19:50] <mikeryan> you can run a cluster with a single OSD, but it's not very interesting..
[19:51] <justinwarner> In the end I need to connect up 30 machines, but I was just testing on one to make sure, then I was going to add to it later on.
[19:51] <mikeryan> you can try bringing up multiple OSDs on your single machine
[19:51] <mikeryan> if you want to get an idea of what a real cluster looks like and how it works, i recommend running at least two OSDs
[19:52] <mikeryan> that way you can have *some* replication
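As context for the active+degraded state mentioned above: the default pools are created with a replication factor greater than 1, so with a single OSD there is nowhere to place the replica. A hedged illustration of checking and, on a throwaway test cluster, lowering it ('data' is just the default pool name; adjust as needed):

    # show the current replication factor of the 'data' pool
    ceph osd pool get data size
    # with only one OSD, a size of 1 should let the PGs reach active+clean
    ceph osd pool set data size 1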
[19:52] <justinwarner> Can I follow: http://ceph.com/docs/master/cluster-ops/add-or-rm-osds/
[19:52] <justinwarner> To do this?
[19:53] <mikeryan> i think so
[19:53] <justinwarner> Or is there a better approach?
[19:53] <mikeryan> general cluster administration is not one of my specialties, so i can't really give a good recommendation there
[19:53] <mikeryan> you should probably try to follow this guide and let me know if you run into trouble
[19:54] <justinwarner> Alright
[19:54] <justinwarner> Sounds great. Thanks a lot =)
[19:54] <mikeryan> np
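A rough sketch of the manual steps behind the add-or-rm-osds guide linked above, for bringing up a second OSD (osd.1) on the same host; the paths, capability strings and CRUSH syntax are illustrative and vary between Ceph releases, so the guide itself remains the authoritative reference:

    # allocate an id for the new daemon (prints it, e.g. 1)
    ceph osd create
    # create the data directory and initialise the object store and key
    mkdir -p /var/lib/ceph/osd/ceph-1
    ceph-osd -i 1 --mkfs --mkkey
    # register the key, place the OSD in the CRUSH map, then start it
    ceph auth add osd.1 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-1/keyring
    ceph osd crush set 1 osd.1 1.0 host=myhost
    service ceph start osd.1

With the old-style init scripts, an [osd.1] section in ceph.conf is also needed before the last step.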
[19:54] * mgalkiewicz_ (~mgalkiewi@staticline-31-183-94-25.toya.net.pl) has joined #ceph
[19:54] <mikeryan> mgalkiewicz_: i can provide basic support today
[19:55] * stxShadow (~Jens@ip-178-203-169-190.unitymediagroup.de) Quit (Read error: Connection reset by peer)
[19:55] <mikeryan> (not sure if you saw that message before)
[19:55] <mgalkiewicz_> mikeryan: yeah thx I am describing it
[19:55] * mgalkiewicz (~mgalkiewi@staticline-31-183-94-25.toya.net.pl) Quit (Ping timeout: 480 seconds)
[19:59] <mgalkiewicz_> mikeryan: I am probably experiencing a bug with 0.53 or doing something wrong: https://gist.github.com/3969472
[20:00] * mgalkiewicz_ (~mgalkiewi@staticline-31-183-94-25.toya.net.pl) has left #ceph
[20:00] * mgalkiewicz_ (~mgalkiewi@staticline-31-183-94-25.toya.net.pl) has joined #ceph
[20:01] <mgalkiewicz_> I have brand new cluster 1mon, 1mds, 1osd on the same machine
[20:02] <mgalkiewicz_> all I do is create a new pool and run rados bench
[20:02] <mgalkiewicz_> I can see in admin socket that op is waiting for osdmap
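For context, the "waiting for osdmap" state above would have been read from the OSD's admin socket; a hedged example of doing that, assuming the default socket path and osd.0 (the exact set of admin socket commands depends on the Ceph version):

    # dump in-flight ops, including what each one is currently waiting on
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight
    # list the commands this particular daemon's socket supports
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help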
[20:06] <mikeryan> interesting, you're the second user recently who's run into this problem
[20:06] <mikeryan> how easy is this to reproduce?
[20:06] <mikeryan> will it happen every time you create a pool and run rados bench?
[20:08] <mgalkiewicz_> well it occurs almost every time I run rados bench
[20:08] <mikeryan> can you do ceph pg dump and paste the results for me?
[20:08] <mikeryan> after you create the pool, but before you run rados bench
[20:09] <mgalkiewicz_> I have reinstalled the cluster with 0.52 and am checking
[20:09] <mikeryan> our other user ran into this problem on 0.48.3, so it's likely that it affects 0.52 as well
[20:09] <mgalkiewicz_> mikeryan: all pgs are active+degraded, if that is what you are looking for
[20:09] <mikeryan> hm, i'd also like to see the OSD status from the end of the command's output
[20:11] <mgalkiewicz_> well give me a sec to check whether the problem is also with 0.52
[20:16] <mgalkiewicz_> ok, now the test ran without slow requests during it, but the cleanup looks bad: https://gist.github.com/3969530
[20:17] <mgalkiewicz_> and pg dump before running it: https://gist.github.com/3969546
[20:18] <mikeryan> wow, this problem happens with only one OSD
[20:18] <mgalkiewicz_> all operations are now more than 200 sec old
[20:18] * loicd (~loic@90.84.144.118) Quit (Quit: Leaving.)
[20:18] <mgalkiewicz_> and rados bench is still running after showing the stats
[20:19] <mikeryan> so unfortunately i don't know what the problem is, but i know one of the other programmers is looking at a similar problem
[20:19] <mikeryan> i can let him know what you just told me tomorrow during business hours
[20:19] <mgalkiewicz_> great
[20:19] <mikeryan> sorry i can't help more
[20:19] <mgalkiewicz_> do you know how to force cleanup after bench?
[20:20] <mgalkiewicz_> removing the pool will do the trick?
[20:20] <mikeryan> removing the pool will definitely do the trick
[20:20] <mikeryan> there's a cleanup command too, which should probably work
[20:20] <mgalkiewicz_> yeah I tried but not sure what the prefix is
[20:21] <mgalkiewicz_> I have tried with benchmark_data_n11c1_7820 without success
[20:21] <mgalkiewicz_> rados cleanup benchmark_data_n11c1_7820
[20:22] <mikeryan> rados -p <pool> cleanup benchmark
[20:22] * tziOm (~bjornar@ti0099a340-dhcp0778.bb.online.no) Quit (Remote host closed the connection)
[20:22] <mgalkiewicz_> right
[20:23] <mgalkiewicz_> Removed 0 objects
[20:23] <mgalkiewicz_> does it mean that there is nothing to clean?
[20:23] <mgalkiewicz_> empty ceph cluster suddenly uses 1280MB
[20:24] <mgalkiewicz_> before test it was much less
[20:26] <mikeryan> that means there were no benchmark objects in the cluster
[20:27] <mikeryan> by default rados bench cleans up after itself, so that's probably why you couldn't delete anything
[20:27] <mikeryan> if you want the objects to not get cleaned up, you have to use --no-cleanup
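To tie the cleanup discussion together, a hedged sketch of the bench-and-clean-up cycle being described; 'test' is a placeholder pool name and the cleanup arguments differ slightly between rados versions:

    # 60-second write benchmark, keeping the benchmark objects afterwards
    rados -p test bench 60 write --no-cleanup
    # remove the leftover objects by prefix, as suggested above
    rados -p test cleanup benchmark
    # or simply drop the whole pool (newer releases ask for extra confirmation flags)
    ceph osd pool delete test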
[20:29] <mgalkiewicz_> yeah but I thought that the objects were not deleted because of the timeout
[20:29] <mikeryan> ah, a good point
[20:29] <mikeryan> it appears they were deleted, based on the output of rados cleanup
[20:29] <mgalkiewicz_> and cluster usage
[20:30] <mgalkiewicz_> ok, now I am experiencing the same problem with 0.52 that I saw with 0.53
[20:30] <mikeryan> that's useful info
[20:32] <mgalkiewicz_> for 25 seconds there were no writes from bench
[20:32] <mgalkiewicz_> and it does not finish after the desired 60 seconds
[20:33] <mgalkiewicz_> I will paste some outputs in a minute
[20:34] <mgalkiewicz_> mikeryan: https://gist.github.com/3969597
[20:35] <mgalkiewicz_> do you know whether the problem is with writing a journal or data?
[20:37] <mgalkiewicz_> is it possible that something is broken with bench which causes such slow requests? maybe the clients would not be affected?
[20:38] <mgalkiewicz_> I am using one SSD for the journal and one SSD for data; this is basically the only difference from my other cluster, where I did not see such problems
[20:48] * justinwarner (~ceg442049@osis111.cs.wright.edu) Quit (Quit: Leaving.)
[20:57] <mikeryan> mgalkiewicz_: what file systems are you running on the SSDs ?
[20:57] <mgalkiewicz_> btrfs
[20:58] <mgalkiewicz_> journal is on xfs
[20:59] <mikeryan> try running against xfs instead of btrfs and see if you still have a problem
[20:59] <mgalkiewicz_> already did
[21:00] * deepsa (~deepsa@122.172.159.224) Quit (Ping timeout: 480 seconds)
[21:01] <mikeryan> doubt the problem is related to the bencher, it doesn't do anything weird
[21:01] <mikeryan> just creates/deletes many objects in parallel
[21:01] <mgalkiewicz_> I will connect regular client and look for slow request anyway
[21:10] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:12] * lofejndif (~lsqavnbok@659AABJ8A.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:18] <mgalkiewicz_> mikeryan: looks like my ssd for data is causing problems
[22:23] * lofejndif (~lsqavnbok@659AABJ8A.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[22:39] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[22:40] * MikeMcClurg (~mike@3239056-cl69.boa.fiberby.dk) has joined #ceph
[22:55] * lofejndif (~lsqavnbok@9KCAACONR.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:55] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:33] * f4m8 (f4m8@kudu.in-berlin.de) Quit (Remote host closed the connection)
[23:33] * todin (tuxadero@kudu.in-berlin.de) Quit (Remote host closed the connection)
[23:38] * Q310 (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[23:43] * Q310 (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) Quit ()
[23:53] <mgalkiewicz_> mikeryan: the problem was probably with the discard option on the xfs and btrfs mounts
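For the record, a hedged illustration of the kind of change implied here: dropping the online 'discard' mount option on the SSD-backed data and journal filesystems and trimming out of band instead; device names and mount points are placeholders:

    # /etc/fstab: OSD data (btrfs) and journal (xfs) without 'discard'
    /dev/sdb1  /var/lib/ceph/osd/ceph-0  btrfs  noatime  0 0
    /dev/sdc1  /srv/ceph/journal         xfs    noatime  0 0
    # issue TRIM periodically instead, e.g. from a daily cron job
    fstrim /var/lib/ceph/osd/ceph-0
    fstrim /srv/ceph/journal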
[23:53] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.