#ceph IRC Log


IRC Log for 2012-08-11

Timestamps are in GMT/BST.

[0:02] * JJ (~JJ@12.248.40.138) Quit (Quit: Leaving.)
[0:07] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[0:10] <dpemmons> how much overhead is associated with an object?
[0:13] <gregaf> dpemmons: what kind of overhead are you thinking of?
[0:13] <dpemmons> extra storage
[0:13] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:14] <dpemmons> I'm pondering whether it could make sense to store media packets as individual objects
[0:14] <dpemmons> they tend to be 1-10s of kb
[0:15] <dpemmons> but if there's a lot of chatter to look each one up, or a lot of storage overhead for something that size it would make sense
[0:15] <dpemmons> *wouldn't
[0:18] <joshd> there's not too much storage overhead, but you should try to avoid small reads/writes to get better performance
[0:18] <gregaf> well, you need to access them individually over the network
[0:19] <dpemmons> yeah
[0:19] <gregaf> sjust could better describe the overhead, which isn't large but at those sizes might be pushing it a bit
[0:19] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[0:20] <dpemmons> I could pretty easily group packets up to an ideal size too
[0:20] <dpemmons> (thinking out loud here)
[0:23] <joshd> the ideal size depends on your hardware, but grouping them into objects on the order of megabytes will be better
[0:30] * JJ (~JJ@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[0:32] * glowell1 (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[0:41] <sjust> writes are synchronously replicated, so write latency might not be great for large numbers of small objects
[0:41] <sjust> as far as space is concerned, a few hundred bytes per object, I think
[0:41] <sjust> +filesystem per-file overhead
[0:41] <sjust> each object is a filesystem file somewhere
[0:42] <sjust> we have an object map abstraction where each object has a key-value mapping attached to it
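
A minimal sketch of the grouping joshd suggests, using the rados CLI: pack the packet stream into ~4 MB chunks and store each chunk as one object. The pool name "media", the input file, and the 4 MB figure are illustrative choices, not from the log:

    # group small packets into ~4 MB chunks, one RADOS object each
    split -b 4M packets.bin group_
    for f in group_*; do
        rados -p media put "$f" "$f"    # object name = chunk file name
    done
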
[0:55] * mpw (~mpw@chippewa-nat.cray.com) Quit (Quit: Nettalk6 - www.ntalk.de)
[1:01] <alexxy> NaioN: nhm: any ideas why i'm getting this
[1:01] <alexxy> 2012-08-11 02:59:59.771992 7fa5baaea780 -1 ** ERROR: error creating empty object store in /var/lib/ceph/osd: (22) Invalid argument
[1:01] <alexxy> btrfs mounted on /var/lib/ceph/osd
[1:01] <alexxy> 2012-08-11 03:01:34.001062 7f4bf5ea9780 -1 provided osd id 0 != superblock's -1
[1:02] <joshd> sjust: that looks like a bad initialization
[1:03] <sjust> alexxy: version?
[1:03] <alexxy> 0.49
[1:03] <sjust> how was the osd created?
[1:03] <alexxy> i get this error while running mkcephfs
[1:04] <alexxy> or after ceph create osd
[1:04] <alexxy> ceph-osd -i 0 --mkfs --mkjournal
[1:04] <sjust> I think you need to do ceph-osd --mkfs?
[1:04] <sjust> oh
[1:04] <sjust> hmm
[1:04] <alexxy> btrfs created
[1:04] <dmick> would be nice to see those derr's
[1:04] <sjust> that happens during ceph-osd --mkfs?
[1:04] <alexxy> and mounted with relatime
[1:04] <alexxy> yep
[1:05] <sjust> reliably?
[1:05] <alexxy> c-0-0 ~ # mount | grep ceph
[1:05] <alexxy> /dev/sda2 on /var/lib/ceph/osd type btrfs (rw,relatime,space_cache)
[1:05] <dmick> oh there's the derr right there
[1:05] <sjust> if you re-mkfs the filesystem and rerun ceph-osd --mkfs it happens again?
[1:06] <alexxy> c-0-0 ~ # ls /var/lib/ceph/osd/
[1:06] <alexxy> ceph_fsid current fsid journal magic ready snap_1 snap_2 store_version whoami
[1:06] <alexxy> yep
[1:06] <sjust> was this an existing osd?
[1:06] <alexxy> c-0-0 ~ # ceph-osd -i 0 --mkfs --mkjournal
[1:06] <alexxy> 2012-08-11 03:06:15.150052 7fd4b1a8f780 -1 provided osd id 0 != superblock's -1
[1:06] <alexxy> 2012-08-11 03:06:15.154419 7fd4b1a8f780 -1 ** ERROR: error creating empty object store in /var/lib/ceph/osd: (22) Invalid argument
[1:06] <alexxy> no
[1:06] <alexxy> it was empty
[1:07] <alexxy> i can reformat and remount btrfs partition
[1:07] <sjust> did you rerun mkfs.btrfs between the two calls to ceph-osd --mkfs?
[1:07] <sjust> yeah, that would help
[1:07] <sjust> --mkfs won't wipe out existing contents
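
The clean re-initialization sequence sjust describes, assuming alexxy's device and mount point from the log; since ceph-osd --mkfs does not wipe leftover contents, the filesystem has to be recreated first:

    umount /var/lib/ceph/osd
    mkfs.btrfs /dev/sda2                            # recreate the fs to wipe leftovers
    mount -o relatime /dev/sda2 /var/lib/ceph/osd
    ceph-osd -i 0 --mkfs --mkjournal                # --mkfs now sees an empty directory
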
[1:08] * loicd (~loic@brln-4d0cc179.pool.mediaWays.net) Quit (Ping timeout: 480 seconds)
[1:08] <alexxy> funny
[1:08] <alexxy> now it works
[1:08] <sjust> yeah, seems like there was garbage in the directory that was confusing it
[1:08] * sagelap1 (~sage@145.sub-166-250-67.myvzw.com) Quit (Ping timeout: 480 seconds)
[1:20] * sagelap (~sage@166.sub-166-250-64.myvzw.com) has joined #ceph
[1:21] <Leseb> hi guys
[1:22] <Leseb> while playing a little bit too much with rbd and mapping, it happened that rbd shows unmapped images
[1:23] <Leseb> the only way is to map it again… http://pastebin.com/kxcMxMMn
[1:23] <alexxy> ceph> health
[1:23] <alexxy> HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
[1:23] <alexxy> what could this mean?
[1:24] <alexxy> i added osd
[1:25] <Leseb> pg state value at the bottom of the page http://ceph.com/docs/master/dev/placement-group/
[1:25] <gregaf> alexxy: did your OSD actually come up?
[1:25] <alexxy> yep
[1:25] <joshd> Leseb: which kernel version?
[1:25] <alexxy> or how can i check?
[1:25] <gregaf> ceph -s
[1:25] <joshd> Leseb: did you unmap before unmounting the fs on top (it's a bug that that's possible, but it is right now)?
[1:25] <Leseb> joshd: hi Josh! 3.2
[1:26] <Leseb> it's not the first that I encounter this 'issue'
[1:26] * BManojlovic (~steki@212.200.240.248) Quit (Remote host closed the connection)
[1:26] <Leseb> *time
[1:27] <Leseb> I will try to reproduce it
[1:27] <alexxy> c-0-3 ~ # ceph -s
[1:27] <alexxy> health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
[1:27] <alexxy> monmap e1: 3 mons at {bootsrv=10.254.254.250:6789/0,store=10.5.1.1:6789/0,store1=10.5.1.2:6789/0}, election epoch 6, quorum 0,1,2 bootsrv,store,store1
[1:27] <alexxy> osdmap e7: 3 osds: 3 up, 3 in
[1:27] <alexxy> pgmap v18: 192 pgs: 192 creating; 0 bytes data, 3072 MB used, 387 GB / 399 GB avail
[1:27] <alexxy> mdsmap e7: 1/1/1 up {0=store=up:creating}, 2 up:standby
[1:27] <Leseb> but does it sound familiar to you?
[1:27] <joshd> Leseb: hmm, there were also some changes to the rbd device id assignment
[1:27] <joshd> they could be related (they were after 3.2, let me check)
[1:28] <Leseb> is there a sort of interval?
[1:28] <Leseb> thanks :)
[1:29] <joshd> there were a few race conditions
[1:29] <joshd> if possible I'd suggest trying 3.5
[1:29] <alexxy> gregaf: ^^
[1:29] <Leseb> for testing or production? is 3.5 a recommendation?
[1:29] <joshd> for testing that particular issue
[1:31] <gregaf> alexxy: hmm, odd… can you go look and see if the osd daemons are still running? (ps or top)
[1:31] <Leseb> at the moment it's not really possible, I'm kinda busy benching ceph
[1:31] <joshd> I'd recommend later kernels in general for the rbd client, since it's been seeing bug fixes recently
[1:31] <gregaf> it certainly looks like it ought to be working
[1:32] <Leseb> but if it persists I will consider the 3.5 quickly :)
[1:32] <alexxy> gregaf: yep they are still running
[1:32] <Leseb> recommendation for production though?
[1:32] <gregaf> if they are running, you can try… joshd, what's the option to trigger pg creates?
[1:32] <alexxy> 2012-08-11 03:31:11.210399 mon.0 [INF] pgmap v21: 192 pgs: 192 creating; 0 bytes data, 3072 MB used, 387 GB / 399 GB avail
[1:32] <alexxy> 2012-08-11 03:31:17.006957 mon.0 [INF] pgmap v22: 192 pgs: 192 creating; 0 bytes data, 4097 MB used, 516 GB / 532 GB avail
[1:33] <alexxy> it's after i added one more osd
[1:33] <gregaf> alexxy: "ceph pg force_create_pg" … try that
[1:33] <alexxy> usage: pg force_create_pg <pg>
[1:33] <alexxy> ghmm
[1:34] <alexxy> maybe because i started the cluster without osds?
[1:35] <joshd> Leseb: there are more important bug fixes in the pipeline, so moving to 3.5 would just be temporary
[1:35] <gregaf> alexxy: how did you add the OSDs?
[1:35] <gregaf> and can you pastebin the results of "ceph pg dump"
[1:35] <alexxy> http://ceph.com/docs/master/ops/manage/grow/osd/
[1:35] <gregaf> maybe your crushmap didn't get updated
[1:35] <alexxy> but i didn't create a crushmap
[1:35] <Leseb> joshd: ok thanks for the precision
[1:36] <gregaf> okay, if you didn't adjust the CRUSH map then no data is going to be placed on the OSDs; you need to do that
[1:36] <alexxy> gregaf: how to create it?
[1:36] <gregaf> you don't need to create it, just adjust it
[1:36] <gregaf> the convenient link there leads to http://ceph.com/docs/master/ops/manage/crush/#adjusting-crush
[1:37] <alexxy> ok
[1:38] <alexxy> seems like i missed it
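
For reference, the usual round-trip for adjusting the CRUSH map in this era looks roughly like this (the file names are illustrative):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: add device and host entries for the new osds
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
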
[1:45] * tnt_ (~tnt@17.127-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:48] <alexxy> how to set the number of copies in ceph?
[1:49] <joshd> iirc ceph osd pool set_size
[1:50] <joshd> <poolname> <total_copies>
[1:50] <alexxy> aha
[1:50] <alexxy> http://ceph.com/wiki/Adjusting_replication_level
[1:51] <joshd> see also http://ceph.com/docs/master/control/
[1:52] <alexxy> ok
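
If I recall the argonaut-era syntax correctly, the subcommand joshd half-remembers is "pool set … size"; a sketch with an assumed pool name and replica count:

    ceph osd pool set data size 3        # 3 total copies for pool "data"
    ceph osd dump | grep 'rep size'      # verify the pool's replication level
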
[1:52] <alexxy> i have 3 mons and 3 mds
[1:52] <alexxy> what will be right way to mount it?
[1:55] * sagelap (~sage@166.sub-166-250-64.myvzw.com) Quit (Ping timeout: 480 seconds)
[1:55] <dmick> alexxy: what are you trying to mount?
[1:55] <alexxy> cephfs
[1:56] <alexxy> should i use 1 mon
[1:56] <alexxy> or all 3
[1:56] <alexxy> like
[1:56] <alexxy> mount -t ceph mon1,mon2,mon3:/ /mnt/ceph
[1:56] <dmick> you need mds servers there, and you can use one or several (several will be tried in order in case one or more mds is down)
[1:57] <dmick> but are you sure you want to use cephfs?
[1:57] <joshd> dmick: no mds, just mon
[1:57] <dmick> !
[1:57] <joshd> mon are the only ones with well known addresses
[1:57] <dmick> sorry
[1:57] <dmick> of course joshd is right
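
So a mount line for alexxy's cluster would presumably list all three mon addresses from his monmap (the kernel client tries them in order); the auth options are only needed if cephx is enabled:

    mount -t ceph 10.254.254.250:6789,10.5.1.1:6789,10.5.1.2:6789:/ /mnt/ceph \
          -o name=admin,secret=<admin-key>
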
[1:58] <alexxy> Warning: Don't mount ceph using kernel driver on the osd server. Perhaps it will freeze the ceph client and your osd server.
[1:58] <alexxy> is this still valid?
[1:58] <dmick> yes. this is the same problem as nfs loopback mounts; you can hit deadlock
[1:59] * EmilienM (~EmilienM@ede67-1-81-56-23-241.fbx.proxad.net) has left #ceph
[2:00] <maelfius> dmick: that applies to CephFS only, not RBD right?
[2:03] <dmick> I think if you were to mount a filesystem on a kernel RBD device on the OSD machine, you'd run into the same sort of problem
[2:04] <maelfius> dmick: I'll see if I can deadlock something in that scenario. should be fun :)
[2:04] <dmick> so I think it depends somewhat on the nature of the code consuming the block device
[2:04] <maelfius> I could see that, it would likely be treated as a standard file system for something such as VM root-disks/ephemeral storage
[2:05] <maelfius> (have some users who want virtualization and stable/recoverable storage in case a hypervisor goes away)
[2:06] <maelfius> but in either case, I'll see if I can deadlock it in the same way the NFS deadlock occurs.
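
A sketch of the experiment maelfius is proposing, run on an OSD host (the image name, size, and device path are assumptions; the device may appear as /dev/rbd0 or under /dev/rbd/<pool>/<image>):

    rbd create test --size 1024              # 1024 MB test image
    rbd map test                             # mapping on an OSD host is the risky part
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt/rbd
    dd if=/dev/zero of=/mnt/rbd/fill bs=1M   # dirty pages until writeback competes with the OSD
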
[2:48] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[2:48] <Leseb> joshd: still here?
[2:48] * maelfius (~Adium@66.209.104.107) Quit (Quit: Leaving.)
[2:48] <dmick> he's stepped away for a minute, but go ahead
[2:48] <Leseb> ok thank you
[2:48] <Leseb> I'm building a little test cluster
[2:49] <Leseb> during the mkcephfs I got this: ** ERROR: error creating empty object store in /srv/ceph/osd0: (21) Is a directory
[2:49] <Leseb> I don't really get it...
[2:49] <Leseb> the previous message was OSD::mkfs: FileStore::mkfs failed with error -21
[2:50] <Leseb> the filesystem used is XFS
[2:52] <Leseb> any idea? I'm running ubuntu 12.04, kernel 3.2 and 0.48 argonaut
[2:54] <dmick> can you pastebin your ceph.conf?
[2:55] <Leseb> http://pastebin.com/YvrtiVcq :)
[2:57] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[2:58] <Leseb> (of course all the directories exist and I'm running mkcephfs as root)
[2:58] <dmick> is /srv/ceph/osd0 empty?
[2:59] <Leseb> no it's not
[2:59] <Leseb> it only contains: current fsid store_version
[3:00] <dmick> and this is a new cluster, right, you're not trying to reuse OSD data? if so, try cleaning out that directory
[3:00] <Leseb> I already rm -rf and re-do the mkcephfs and same result :/
[3:00] <dmick> hm
[3:01] <Leseb> I just did exactly the same setup 2 days ago with different machines…
[3:07] <dmick> I'm tracing the mkfs code to see when it returns EISDIR
[3:08] <Leseb> thank you for that :)
[3:09] <dmick> are there files in 'current/'?
[3:10] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[3:10] <Leseb> yes
[3:10] * chutzpah (~chutz@100.42.98.5) Quit (Quit: Leaving)
[3:10] <Leseb> file: commit_op_seq and dir omap
[3:10] <dmick> could you tell me what they are please :)
[3:11] <dmick> heh thanks
[3:11] <Leseb> the content of omap dir: 000003.log CURRENT LOCK LOG MANIFEST-000002
[3:12] <dmick> how about /srv/ceph/journals/osd0? Anything there?
[3:13] * lofejndif (~lsqavnbok@9KCAAAK5A.tor-irc.dnsbl.oftc.net) has joined #ceph
[3:13] <Leseb> it's empty
[3:14] <dmick> so is that a directory?
[3:15] <Leseb> got it
[3:15] <Leseb> it is :/
[3:15] <dmick> that would be the problem :)
[3:15] <Leseb> obviously
[3:15] <Leseb> f*ck
[3:16] <dmick> could have been a more-useful error message in there
[3:16] <dmick> I'll file an issue about it
[3:16] <Leseb> yes, that would have saved some time
[3:17] <Leseb> I don't know why, while modifying the path of the journal, I thought of a dir and not a file
[3:17] <Leseb> whatever
[3:17] <Leseb> the log could be more relevant :)
[3:17] <dmick> it happens.
[3:17] <dmick> yeah, we can definitely make that message point to the actual problem.
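
For the record, the fix was presumably along these lines (paths are from Leseb's log output; his exact commands aren't shown):

    rmdir /srv/ceph/journals/osd0        # the journal path must be a plain file, not a directory
    rm -rf /srv/ceph/osd0/*              # clear the half-initialized object store
    mkcephfs -a -c /etc/ceph/ceph.conf   # re-run; ceph-osd creates the journal file itself
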
[3:19] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:19] <Leseb> now it's better :)
[3:19] <Leseb> dmick: big thanks
[3:19] <dmick> np
[3:20] <dmick> sorry I wasn't quicker is all
[3:21] <Leseb> arf, there's no need to apologize
[3:22] <Leseb> I don't know your timezone but it's pretty late for me (3:21 am), that can explain something ^^
[3:24] <dmick> do you see "mkjournal error creating journal on" in your log?
[3:25] <dmick> (and yes, it's 6:25PM here, so I'm a little fresher)
[3:26] <Leseb> yes it's there
[3:26] <dmick> ah, ok. so it was just "too much in the log to see the actual problem".
[3:26] <Leseb> actually I didn't know that ceph started to log at this level
[3:26] <dmick> which is still unfortunate, but at least it tried to say the right thing
[3:27] <Leseb> If I had known that the log was filled I wouldn't have requested your help
[3:28] <dmick> oh, you were seeing those other errors on the console?
[3:28] <Leseb> but maybe this could make the shell output clearer
[3:28] <Leseb> no I wasn't
[3:29] <dmick> where did the messages about -21 appear?
[3:29] <Leseb> from the shell
[3:29] <Leseb> but it's not as explicit as the /var/log/ceph/osd.log
[3:29] <dmick> oh, sorry, terminology. "on the console" meant to say "in your terminal"
[3:30] <dmick> and yeah, it's a shame that the mkjournal message didn't come to the terminal as well.
[3:31] <Leseb> the terminal only shows me
[3:31] <Leseb> === osd.0 ===
[3:31] <Leseb> 2012-08-11 03:10:23.967661 7f5cd260f780 -1 OSD::mkfs: FileStore::mkfs failed with error -21
[3:31] <Leseb> 2012-08-11 03:10:23.967701 7f5cd260f780 -1 ** ERROR: error creating empty object store in /srv/ceph/osd0: (21) Is a directory
[3:31] <Leseb> failed: '/sbin/mkcephfs -d /tmp/mkcephfs.5ZBxofGW9l --init-daemon osd.0'
[3:31] <dmick> yes, I understand now
[3:31] <Leseb> and the log shows also this error + 0 filestore(/srv/ceph/osd0) mkjournal error creating journal on /srv/ceph/journals/osd0: (21) Is a directory
[3:32] <Leseb> :)
[3:32] <dmick> ideally both would have shown both. IOW:
[3:32] <dmick> dout(0) << "mkjournal error creating journal on " << journalpath
[3:32] <dmick> << ": " << cpp_strerror(ret) << dendl;
[3:32] <dmick> should probably really be derr
[3:34] <dmick> http://tracker.newdream.net/issues/2938
[3:35] <Leseb> cool :)
[3:35] <dmick> thanks for the help
[3:35] <Leseb> np
[3:35] <dmick> What are you looking at doing with Ceph?
[3:36] <Leseb> I plan to integrate ceph in an openstack environment
[3:36] <Leseb> I will use it as a glance backend
[3:37] <Leseb> and I will use rbd mapped device
[3:37] <dmick> cool
[3:37] <Leseb> for customers data
[3:37] <Leseb> I will put a fs on it and export via NFS
[3:38] <Leseb> btw a couple of weeks ago josh posted something that I reported to him
[3:38] <Leseb> http://tracker.newdream.net/issues/2657
[3:39] <Leseb> it's related to cephFS, but no one seemed to take an interest in it :(
[3:39] <dmick> yes, there's a lot to do
[3:39] <dmick> we're accepting patch submissions! :)
[3:40] <Leseb> haha ok
[3:40] <Leseb> I don't really have time and I won't use cephFS
[3:40] <Leseb> too risky at the moment
[3:41] <Leseb> but will have a bright future
[3:41] <dmick> yes, as we keep saying, we hope to be able to spend some serious development time on that soon
[3:41] <Leseb> as ceph in general :)
[3:41] <Leseb> yes but putting a lot of effort in rados and rbd was a good decision in the first place
[3:44] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[3:47] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[3:48] * adjohn (~adjohn@69.170.166.146) Quit ()
[3:49] <Leseb> dmick: thanks again for your help, truly appreciated
[3:49] <dmick> yw
[3:49] <Leseb> cheers!
[3:49] <dmick> sleep well :)
[3:49] <Leseb> I will :)
[3:57] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Quit: Leaving.)
[4:08] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[4:22] * lofejndif (~lsqavnbok@9KCAAAK5A.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[4:38] * deepsa (~deepsa@122.172.210.222) Quit (Quit: Computer has gone to sleep.)
[5:21] * sagelap (~sage@2600:1012:b002:3039:5920:78a2:19d0:3b38) has joined #ceph
[5:21] * Ryan_Lane (~Adium@216.38.130.165) Quit (Quit: Leaving.)
[5:38] * sagelap (~sage@2600:1012:b002:3039:5920:78a2:19d0:3b38) Quit (Ping timeout: 480 seconds)
[5:54] * JamesD is now known as Guest2557
[5:58] * dmick (~dmick@38.122.20.226) has left #ceph
[6:22] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[6:48] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[7:05] * maelfius (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[7:19] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph
[7:54] * maelfius (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[8:39] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[8:41] * Ryan_Lane1 (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[8:41] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[8:41] * Ryan_Lane1 (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit ()
[9:12] * JJ (~JJ@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[9:14] * loicd (~loic@brln-4d0cc179.pool.mediaWays.net) has joined #ceph
[10:04] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[10:31] <alexxy> heh
[10:31] <alexxy> cephfs oopsed while running bonnie
[10:32] <alexxy> kernel is 3.4.3
[10:32] <alexxy> ohh
[10:32] <alexxy> 3.3.4
[10:32] <alexxy> https://gist.github.com/3322503
[10:50] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[10:58] * tnt (~tnt@17.127-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[10:59] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[11:30] <alexxy> NaioN: nhm ^^^
[12:20] * EmilienM (~EmilienM@ede67-1-81-56-23-241.fbx.proxad.net) has joined #ceph
[13:05] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[13:33] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:34] <exec> alexxy: have you mounted cephfs on the same machine as an osd?
[13:34] <alexxy> exec: no
[13:34] <alexxy> it's a different machine
[13:34] <exec> then it's an issue, yup
[13:34] <alexxy> it isn't an osd nor an mds/mon
[13:35] <alexxy> may be i should update kernel
[13:35] <alexxy> but i will do it later
[13:35] * BManojlovic (~steki@212.200.240.248) has joined #ceph
[13:35] <exec> I've got a bunch of kernel errors when I tried to use osd with rbd or cephfs.
[13:35] <alexxy> but null pointer dereference is bad
[13:36] <exec> aha.
[13:36] * exec going to cook
[13:36] <alexxy> i got this while running bonnie++
[13:42] * steki-BLAH (~steki@bojanka.net) has joined #ceph
[13:46] * BManojlovic (~steki@212.200.240.248) Quit (Ping timeout: 480 seconds)
[14:02] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:19] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[14:23] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[14:29] * Deuns (~kvirc@192.41.216.7) has joined #ceph
[14:29] <Deuns> hello all
[14:30] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:30] <Deuns> I have 3 OSD in my cluster and I try to remove one. I followed the documentation but now I get an unhealthy cluster : "health HEALTH_WARN 70 pgs stuck unclean" and my monitoring doesn't like that :p
[14:31] <Deuns> Is there anything I could try ? I already tried to change the crushmap but without success
[14:32] <Deuns> I also tried to set the size of the pool to a lower value but still no luck
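
The usual removal sequence, as best I recall it for this era, looks like the following (osd id 2 is illustrative; the cluster won't go clean until the CRUSH map no longer references the removed osd and the remaining osds can satisfy the pool size):

    ceph osd out 2                   # mark it out and let data drain
    # wait until 'ceph -s' reports active+clean, then stop the daemon
    ceph osd crush remove osd.2      # drop it from the CRUSH map so PGs can remap
    ceph osd rm 2                    # remove it from the osdmap
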
[14:35] * dabeowulf (dabeowulf@free.blinkenshell.org) Quit (Remote host closed the connection)
[14:44] * loicd (~loic@brln-4d0cc179.pool.mediaWays.net) Quit (Ping timeout: 480 seconds)
[14:58] * deepsa (~deepsa@117.203.19.216) has joined #ceph
[15:34] * steki-BLAH (~steki@bojanka.net) Quit (Ping timeout: 480 seconds)
[15:38] * Cube (~Adium@c-38-80-203-198.rw.zetabroadband.com) has joined #ceph
[16:28] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[16:34] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[16:39] * EmilienM (~EmilienM@ede67-1-81-56-23-241.fbx.proxad.net) Quit (Remote host closed the connection)
[16:40] * EmilienM (~EmilienM@ede67-1-81-56-23-241.fbx.proxad.net) has joined #ceph
[16:41] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:02] * loicd (~loic@brln-4d0cc179.pool.mediaWays.net) has joined #ceph
[17:29] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[17:33] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit ()
[17:35] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:17] * Guest2557 (~chatzilla@cpc2-hawk2-0-0-cust96.aztw.cable.virginmedia.com) Quit (Ping timeout: 480 seconds)
[18:31] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:35] * mpw (~mpw@75.17.201.69) has joined #ceph
[18:38] * mpw2 (~mpw@chippewa-nat.cray.com) has joined #ceph
[18:43] * mpw (~mpw@75.17.201.69) Quit (Ping timeout: 480 seconds)
[18:53] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:08] * lofejndif (~lsqavnbok@28IAAGQCH.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:22] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:01] * DLange (~DLange@dlange.user.oftc.net) Quit (Quit: rebooting. Just for fun.)
[20:04] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[21:09] * JJ (~JJ@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[21:21] * steki-BLAH (~steki@bojanka.net) has joined #ceph
[21:37] * JJ (~JJ@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[22:01] * lofejndif (~lsqavnbok@28IAAGQCH.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[22:30] * exec (~v@109.232.144.194) Quit (Ping timeout: 480 seconds)
[22:51] * Cube (~Adium@c-38-80-203-198.rw.zetabroadband.com) Quit (Quit: Leaving.)
[23:05] * EmilienM (~EmilienM@ede67-1-81-56-23-241.fbx.proxad.net) Quit (Remote host closed the connection)
[23:07] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:23] * lofejndif (~lsqavnbok@28IAAGQH9.tor-irc.dnsbl.oftc.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.