#ceph IRC Log


IRC Log for 2013-07-13

Timestamps are in GMT/BST.

[0:01] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[0:03] <sage> dmick: look ok tho?
[0:03] <dmick> oh absolutely, sorry
[0:04] <dmick> not sure why either, but, better late than never I suppose. maybe mon/crush_ops.sh should have a rest equivalent too
[0:04] <dmick> gonna work on teuthology ceph-rest-api deploy until you have comments on wip-wsgi
[0:05] <nwat> MDS admin socket for perf dump is giving me "admin_socket: exception: [Errno 111] Connection refused"
[0:06] <sage> dmick: i looked it over. no real opinion on the python stuff. only concern is that passing things via the environment seems awkward. no way to pass it directly to the ctor or something?
[0:07] <sage> that, and maybe the CEPH_REST_API_CLIENT_NAME should be a generic thing like CEPH_NAME (e.g., ="client.foo"), altho common/... doesn't parse that currently
[0:07] <dmick> it's literally run as an import from the WSGI server. The only other option I know of is "relative path from the application cwd to a config file"
[0:08] <dmick> easy enough to change; it's only interpreted by my code here (passed to rados.connect)
[0:09] * PerlStalker (~PerlStalk@72.166.192.70) Quit (Quit: ...)
[0:09] <jakes> is the CephFS Hadoop plugin open source?
[0:11] <sage> meh, seems ok then
[0:12] <nwat> jakes: yes, there is a link at the ceph docs URL i sent you earlier
[0:12] <nwat> jakes: github.com/ceph/hadoop-common
[0:12] <dmick> sage: it took hours of reading and talking to Sandy Strong to finally understand that.
[0:13] <dmick> so I share your meh.
[0:14] <jakes> thanks nwat
[0:25] <jakes> nwat, I was interested to see the CephFileSystem interface for hadoop
[0:26] <nwat> jakes: https://github.com/ceph/hadoop-common/blob/cephfs/branch-1.0/src/core/org/apache/hadoop/fs/ceph/CephFileSystem.java
[0:28] <jakes> thanks nwat. one more question: does Ceph have to be installed locally on all machines, or can it connect to a remote cluster?
[0:30] * BillK (~BillK-OFT@124-148-212-240.dyn.iinet.net.au) has joined #ceph
[0:31] <nwat> jakes: each hadoop node needs to be a ceph client
[0:32] <dmick> sage: ok, after some chatting with Hellman, it's typically a little less procrustean than I had thought:
[0:32] <dmick> the deployer is expected to write a little Python shim that handles this more cleanly
[0:35] <dmick> so one would do something like "import ceph_rest_api; app = ceph_rest_api.create_instance('myname', 'myclustername', 'myconf')"
[0:35] <dmick> and then *that's* what the WSGI server imports
[0:35] <dmick> the WSGI sites are trying so hard to be agnostic and general that I didn't get that from them
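A shim along the lines dmick sketches might look like the following (a minimal sketch; create_instance is the hypothetical factory named in the conversation rather than a confirmed ceph_rest_api entry point, and the client name, cluster name, and conf path are placeholders):

    # mycluster_wsgi.py -- hypothetical deployer-written WSGI shim.
    # The WSGI server imports this module instead of ceph_rest_api itself,
    # so per-deployment configuration lives here rather than in env vars.
    import ceph_rest_api

    # create_instance() is the factory sketched above; the real module may
    # expose a different name or read its settings from the environment.
    app = ceph_rest_api.create_instance('client.restapi',      # client name
                                        'ceph',                 # cluster name
                                        '/etc/ceph/ceph.conf')  # conf file

The WSGI server is then pointed at the shim, e.g. gunicorn mycluster_wsgi:app, rather than at ceph_rest_api directly.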
[0:37] * infinitytrapdoor (~infinityt@109.46.160.17) has joined #ceph
[0:39] <jakes> nwat, does it mean that OSDs should not be running on hadoop nodes?
[0:40] <mtanski> You should probably architect your hadoop on top of ceph much the way AWS does with EMR
[0:41] <mtanski> so that your task nodes and data nodes (ceph) are not the same nodes
[0:43] <jakes> Oh.. ok.. Is there any reason for it? Earlier, nwat also pointed out the chance of deadlocks if kernel clients are mounted on the same machine as OSDs. I didn't get it.
[0:44] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[0:44] <nwat> jakes: we run hadoop on osd nodes to avoid network traffic. in a cloud environment, like mtanski said, it is more common to split compute and storage.
[0:44] <nwat> jakes: you won't have deadlock issues using libcephfs
[0:45] <jakes> yeah, I was thinking the same. If storage is separate, how do we achieve data locality for hadoop?
[0:45] <mtanski> It's my understanding that the hadoop stack is not aware of data locality if you're not using hdfs
[0:45] <nwat> jakes: you cannot
[0:46] <nwat> mtanski: ceph + hadoop is locality and topology aware
[0:46] <jakes> using the crush map?
[0:46] <mtanski> ah, wasn't aware of that
[0:48] * HauM1 (~HauM1@login.univie.ac.at) Quit (Remote host closed the connection)
[0:48] <nwat> jakes: yes. branch-1.0-topo includes the locality awareness. it is not yet merged into branch-1.0.
[0:49] <jakes> And, why is it more common to split compute and storage? Why can't we use the local storage also? I was planning to install ceph on 8 machines and hadoop on top of it.
[0:50] <jakes> In my 8 node cluster*
[0:52] <nwat> jakes: things like qos are a lot easier when you physically provision. if you're putting together a dedicated hadoop cluster, you probably want to run ceph and hadoop on the same nodes.
[0:52] <mtanski> If you have 10GigE ethernet it might not matter as much
[0:53] <mtanski> since your network is faster than your drives
[0:54] <mtanski> Also it's useful to be able to set up hadoop without hdfs and read in / out of ceph, if you already have an existing ceph cluster that's being used for other things
[0:54] <mtanski> but I don't think that's your usecase
[0:58] * LeaChim (~LeaChim@2.216.167.255) Quit (Ping timeout: 480 seconds)
[0:59] <jakes> yeah.. I am running openstack across 8 nodes. Openstack runs over the ceph object store. Now for running hadoop inside the VMs, i was trying to see the options. I thought of two: 1. run a kernel client in each of the VMs, mount and connect to the common object store on the host, and use this as a local filesystem; 2. nwat has suggested a hadoop plugin which can be used in each of the hadoop installations in the VMs
[1:03] * infinitytrapdoor (~infinityt@109.46.160.17) Quit (Ping timeout: 480 seconds)
[1:04] <mtanski> yeah, i would use the hadoop plugin versus mounting using kclient
[1:05] <jakes> yeah.. this is what i need to know. What is the reason for it ?
[1:06] <mtanski> client is not yet recommended for prod
[1:06] * jwilliams (~jwilliams@72.5.59.176) Quit (Ping timeout: 480 seconds)
[1:06] <mtanski> kclient*
[1:19] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:24] * Cube (~Cube@12.248.40.138) has joined #ceph
[1:30] <dmick> sage: ok, I will change to CEPH_NAME, and a few other tweaks, but leave the basic "up to three env vars" config scheme for now
[1:30] <sage> sounds good
[1:30] <dmick> if we get community feedback about "why aren't you doing a shim file like everyone else" we can change horses
[1:30] <dmick> probably in a backward-compatible way if that day comes
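The "up to three env vars" scheme being kept here would presumably be consumed by something like the sketch below before opening the cluster handle. Only CEPH_NAME is named in the conversation; CEPH_CLUSTER_NAME and CEPH_CONF are assumed placeholders for the other two, as are the defaults, and the python-rados constructor is assumed to accept name/clustername/conffile keywords.

    # Hypothetical sketch of env-var driven configuration for ceph-rest-api.
    import os
    import rados

    name = os.environ.get('CEPH_NAME', 'client.restapi')       # e.g. "client.foo"; default is an example
    clustername = os.environ.get('CEPH_CLUSTER_NAME', 'ceph')  # assumed variable name
    conffile = os.environ.get('CEPH_CONF', '')                 # '' = search default conf locations

    # Open a cluster handle with whatever the environment supplied.
    cluster = rados.Rados(name=name, clustername=clustername, conffile=conffile)
    cluster.connect()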
[1:31] <jakes> nwat: There should be no configuration issues if I have both ceph+hadoop on all cluster nodes, right? I will separate compute and storage later
[1:32] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[1:32] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Remote host closed the connection)
[1:32] * xmltok_ (~xmltok@relay.els4.ticketmaster.com) has joined #ceph
[1:35] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley_)
[1:39] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:40] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[1:41] * agaran (~agaran@00017ab1.user.oftc.net) has joined #ceph
[1:41] <agaran> hello
[1:41] <agaran> is it possible to limit pool size somehow?
[1:42] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[1:44] <gregaf> I believe there's a quota feature in newer dev releases
[1:44] <gregaf> oh wait, that appears to be in cuttlefish, anybody know anything about it?
[1:45] <agaran> i have cuttlefish on osd.0, mon.a, mds.0, and 0.66 on osd.1
[1:45] <agaran> besides the fact that bench invocations don't work (from cuttlefish to 0.66), the rest works
[1:46] <agaran> and so far i've managed to run it, i started 2 days ago or so, without ceph-deploy at all
[1:46] <agaran> so it might be that i overlooked some option
[1:47] <gregaf> okay dmick, I just ran ceph -h and it gave me (at least some) help text, then it gave me a python backtrace
[1:47] <gregaf> :p
[1:48] <agaran> heh, osd from 0.66 just simply 'aborts' when there is not enough space on partition
[1:50] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[1:50] <dmick> gregaf: that's news to me
[1:53] <agaran> also, for testing i use one osd with the ceph data store on /, is it possible to fix the reported used/avail size somehow?
[1:53] <agaran> because it reports the whole partition, not just the space consumed by the ceph osd
[1:53] <gregaf> hmm, fresh build isn't showing it dmick, maybe I was out of date
[1:54] <dmick> k. I won't say there are no uncaught exceptions; if you find one, file it
[1:54] <gregaf> anyway agaran, if you run "ceph -h" and you have a new enough build, you'll see:
[1:54] <gregaf> osd pool set-quota <poolname> max_objects|max_bytes <val>    set object or byte limit on pool
[1:54] <gregaf> that should work on cuttlefish even if the help text doesn't contain it, though
[1:55] <agaran> says invalid argument..
[1:56] <agaran> so it might be that i'm not recent enough,
[1:56] <agaran> i have a heterogeneous setup, some debian nodes, some other distros.. i don't even have two of the same kernel anywhere..
[1:57] <agaran> but ok, i need to upgrade them maybe
[1:57] <dmick> agaran: what was your exact command? could be a syntax thing
[1:57] <agaran> rados lspools shows me 4 pools, 3 standard, plus 'test' added by me, then the cmd is ceph osd pool set-quota test 12
[1:58] <agaran> ok, my bad :)
[1:58] <dmick> you're missing the 'max_bytes' or 'max_objects' parameter
[1:58] <agaran> 'max_objects' was missing yes
[1:59] <agaran> it has somewhat unhelpful responses to command syntax mistakes..
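For reference, the complete form of the command agaran was after is: ceph osd pool set-quota test max_objects 12 (or max_bytes for a byte limit). A rough programmatic equivalent through the python-rados mon_command binding is sketched below; the binding's availability and exact signature are an assumption and may vary by version.

    # Sketch: set a pool quota via a mon command from python-rados.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Same command the CLI issues; pool/field/val follow the example above.
    cmd = json.dumps({
        'prefix': 'osd pool set-quota',
        'pool': 'test',
        'field': 'max_objects',   # or 'max_bytes'
        'val': '12',
    })
    ret, outbuf, outs = cluster.mon_command(cmd, '')
    print(ret, outs)

    cluster.shutdown()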
[1:59] <agaran> dmick: if you want an exception, run an osd on xfs telling it to use omap..
[1:59] <dmick> ?
[2:00] <agaran> i have a weird setup, want to hear how i triggered an exception?
[2:00] <agaran> (and crash with dump of internals and beacon to use objdump etc.. ) on osd start
[2:00] <dmick> if it's a bug that's not filed, yes, I want a bug filed somehow :)
[2:00] <agaran> hmm, ok so it crashed when there was not enough space plus omap enabled
[2:01] <agaran> ok, i have a /dev/loop0 xfs filesystem mounted at /var/lib/ceph/osd/ceph-1, i wanted to remove the loop, so make it run on ext3,
[2:02] <agaran> and had a nearly-full condition, i enabled 'filestore xattr use omap = true' via the config, and ran the osd, it crashed with a pretty lengthy report about exceptions and stuff..
[2:02] <agaran> is that good enough report?
[2:02] * grepory1 (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Quit: Leaving.)
[2:02] <agaran> i doubt it's filed, i'm new here, but i don't think that my setup is 'common'
[2:03] <agaran> i would expect the osd to yell at me for not having space, but not crash..
[2:03] <agaran> (and it does that when i try to start without xattr use omap = true)
[2:05] <agaran> hmm, despite the limit on pool size, it still reports (mounted via the native kernel module, filesystem not rbd) quite unexpected numbers in df
[2:06] <dmick> agaran: so the osd is crashing on startup when you're low on space, and it's not connected to omap?
[2:07] <agaran> dmick: it crashed when i was low on space (only a few mb free remained), and i switched it to use omap for xattr storage, then i disabled it, added space etc.. i can repeat the test if you want an exact dump of what it reported
[2:07] <agaran> and i use the loop with xfs just because i wasn't able to figure out why it didn't work on ext3 (and ext4), but at that time i hadn't known about the 'omap xattr = true' i might need
[2:09] <agaran> does anyone use ceph via kernel mount (not rbd), and has anyone encountered du -shx . complaining about a circular structure?
[2:09] <dmick> it's hard to tell with your description of about four configurations which was failing and how, but if you can clarify it in a bug report, that would be great
[2:09] * sjustlaptop (~sam@2607:f298:a:697:6c03:13d5:5c8d:8563) has joined #ceph
[2:09] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[2:10] <agaran> i can try to repeat the process
[2:12] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[2:17] * sjustlaptop (~sam@2607:f298:a:697:6c03:13d5:5c8d:8563) Quit (Ping timeout: 480 seconds)
[2:19] <agaran> dmick: ceph version 0.66 (b6b48dbefadb39419f126d0e62c035e010906027) from tarball, on xfs, with no omap fiddling (so no omap settings at all in the conf): it crashes when started on low space (needed about 20M free to start at all), crashes included sigsegv, sigbus, Abort; now trying to enable omap xattr storage, omap crashes with 'end of dump of recent events', i have recorded the whole output of it,
[2:20] <agaran> -2> 2013-07-13 02:11:21.282287 b6090b40 5 asok(0x8ce60c0) entry start -1> 2013-07-13 02:11:21.322617 b71ce740 -1 filestore(/var/lib/ceph/osd/ceph-1) _test_fiemap failed to write to /var/lib/ceph/osd/ceph-1/fiemap_test: (28) No space left on device
[2:20] <agaran> but this seems to be most interesting part of output anyway
[2:21] * mschiff_ (~mschiff@port-10537.pppoe.wtnet.de) has joined #ceph
[2:22] <agaran> also, restarting the osd a few times consumes a few mb of space on each restart (on the osd volume), is that a known thing?
[2:22] * mschiff (~mschiff@port-92723.pppoe.wtnet.de) Quit (Read error: Operation timed out)
[2:26] <dmick> agaran: dunno, maybe
[2:35] <agaran> well, probably if one has multi-tb storage, they won't notice an mb per restart leaking
[2:35] <agaran> hmm, could anyone mount any ceph (via kernel) and do mkdir anydir; stat . ; stat anydir; to check if they have the same inode num?
[2:36] <nwat> jakes: i don't think there are configuration problems
[2:39] <nwat> with ceph-deploy (or some other tool) how do i shut down a cluster?
[2:43] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[2:43] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[2:43] * leseb1 (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[2:44] <agaran> is it ok to leave an idling client here?
[2:46] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has left #ceph
[2:52] * leseb1 (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[2:52] <dmick> nwat: with upstart, stop ceph-all, probably
[2:52] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[2:52] <dmick> with init.d, probably /etc/init.d/ceph -a stop ?
[2:52] <dmick> I think I just hurt the cluster on mira, btw
[2:53] <dmick> was trying to generate data; it's in an odd state now
[2:53] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[2:53] <dmick> osd.5 apparently died; rados -p rbd ls is hanging for some reason
[2:54] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit ()
[2:54] <nwat> dmick: the one on mira04{123} ?
[2:55] <dmick> yes
[2:55] * jakes (~oftc-webi@128-107-239-234.cisco.com) Quit (Remote host closed the connection)
[2:55] <dmick> {6}
[2:55] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:56] <nwat> ahh. thats ok. i'm getting ready to finish wiring up the big ceph cluster with graphite on mira022. should i leave the smaller cluster alone?
[2:56] <dmick> umm...doesn't matter, I was just trying to make some usage for Yan
[2:56] <agaran> dmick: here rados -p data ls was hanging when one of the 2 osds wasn't online
[2:56] <nwat> ahh
[2:56] <dmick> I'm minorly interested in why I managed to smack it so hard
[2:57] <nwat> hehe.. i'll just leave it alone
[2:57] <agaran> does everyone use ceph only on 64bit kernels?
[2:57] <dmick> hm. no osd logging
[2:58] <dmick> agaran: mostly, but not exclusively 32bit
[2:58] * Tamil (~tamil@38.122.20.226) Quit (Quit: Leaving.)
[2:58] <dmick> er, 64bit I mean
[2:58] <dmick> there are a few 32-bit users
[2:58] <agaran> find also reports a filesystem loop here.. (32bit kernel, share mounted via mount.ceph)
[2:58] <nwat> dmick: only 042 has an updated diamond plugin, and i've fixed a few bugs that might be hitting that little cluster.
[2:59] <nwat> dmick: oh... i blasted away graphite data a little while ago. sorry, i had no idea anyone was looking at it
[2:59] <dmick> if you'd been hangin' out in inktank you'd know :)
[3:00] <dmick> no worries
[3:00] <dmick> I'll just get off again
[3:00] <dmick> btw, modifications coming to ceph-rest-api to support clustername
[3:00] <dmick> so you can run multiple clusters per machine with different configs
[3:03] <nwat> dmick: awesome. will that add a cluster parameter to the urls?
[3:03] <agaran> see you tomorrow
[3:03] * agaran (~agaran@00017ab1.user.oftc.net) has left #ceph
[3:03] <dmick> nwat: not the way I've done it, but that's an interesting idea
[3:04] <dmick> different instance for different cluster
[3:04] <dmick> cool, just ran the functionality test with it running under gunicorn, and it worked
[3:05] <dmick> pushed to next if you need it.
[3:09] * dpippenger (~riven@tenant.pas.idealab.com) Quit (Remote host closed the connection)
[3:12] <nwat> dmick: the ceph cli talking to the admin daemon on mon, osd, mds occasionally gives 'connection refused'
[3:12] <nwat> dmick: accessing all the daemons exactly the same
[3:13] <dmick> hm
[3:13] <dmick> only ever seen that when the daemon was ill
[3:13] <dmick> is this after a large number of connection setup/teardowns?
[3:14] <dmick> maybe the teardown isn't doing the right close and there's a bunch of connections in END_WAIT or whatever?
[3:14] <dmick> FIN_WAIT, I guess I mean?
[3:14] <dmick> I can try a loop and see if I can repro
[3:15] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[3:15] <nwat> dmick: well, this is the 70 OSD cluster on mira that I have diamond reading from. At least one mon and a couple OSDs appear to be down. Maybe I'm just seeing a result of sick daemons. I just saw a few problems in the diamond logs
[3:16] <dmick> looping on ceph daemon mon.a help; no issues so far
[3:18] <nwat> sounds like a very healthy mon
[3:19] <dmick> well it's not doing anything :)
[3:20] <dmick> yeah, I'm gonna go with the theory that that means there's a problem with the daemon
[3:20] <dmick> maybe could handle the error better; what's it look like?
[3:23] <nwat> admin_socket: exception [Errno 111] Connection refused to stderr... seems pretty reasonable
[3:25] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[3:28] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit ()
[3:29] <dmick> yeah, not too bad
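A reproduction loop along the lines dmick describes might look like the sketch below; mon.a and the iteration count are just examples, and it simply shells out to the same ceph daemon command used above and counts non-zero exits (such as the 'Connection refused' case).

    # Sketch: hammer a daemon's admin socket via the CLI and count failures.
    import subprocess

    devnull = open('/dev/null', 'w')
    failures = 0
    for _ in range(1000):
        ret = subprocess.call(['ceph', 'daemon', 'mon.a', 'help'],
                              stdout=devnull, stderr=subprocess.STDOUT)
        if ret != 0:
            failures += 1
    devnull.close()
    print('%d failures out of 1000' % failures)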
[3:29] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[3:29] * mxmln (~maximilia@212.79.49.65) Quit ()
[3:44] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[3:45] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[3:46] * listen1213 (~listen@218.17.63.201) has joined #ceph
[3:53] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[3:54] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[4:01] * listen1213 (~listen@218.17.63.201) Quit (Quit: Leaving)
[4:06] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[4:06] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[4:21] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[4:35] * julian (~julianwa@125.69.105.128) has joined #ceph
[4:36] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[4:44] * oddomatik (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[4:53] * AfC (~andrew@jim1020952.lnk.telstra.net) has joined #ceph
[5:01] * fireD1 (~fireD@93-139-187-204.adsl.net.t-com.hr) has joined #ceph
[5:04] * AfC (~andrew@jim1020952.lnk.telstra.net) Quit (Quit: Leaving.)
[5:06] * fireD (~fireD@93-142-210-212.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:08] * dpippenger (~riven@tenant.pas.idealab.com) Quit (Remote host closed the connection)
[5:31] * haomaiwang (~haomaiwan@117.79.232.209) Quit (Remote host closed the connection)
[5:32] * haomaiwang (~haomaiwan@li565-182.members.linode.com) has joined #ceph
[5:39] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[5:41] * tremendous (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[5:41] * tremendous (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit ()
[5:47] * xmltok_ (~xmltok@relay.els4.ticketmaster.com) Quit (Ping timeout: 480 seconds)
[5:53] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[6:00] * piti (~piti@82.246.190.142) Quit (Ping timeout: 480 seconds)
[6:15] * AfC (~andrew@jim1020952.lnk.telstra.net) has joined #ceph
[6:17] * nwat (~oftc-webi@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Page closed)
[6:18] * houkouonchi-work (~linux@12.248.40.138) Quit (Quit: Client exiting)
[6:30] * guppy (~quassel@guppy.xxx) Quit (Quit: No Ping reply in 180 seconds.)
[6:30] * guppy (~quassel@guppy.xxx) has joined #ceph
[6:35] * guppy (~quassel@guppy.xxx) Quit ()
[6:35] * guppy (~quassel@guppy.xxx) has joined #ceph
[6:39] * AfC (~andrew@jim1020952.lnk.telstra.net) Quit (Quit: Leaving.)
[7:00] * dmick (~dmick@2607:f298:a:607:595c:2cc7:2718:adef) Quit (Quit: Leaving.)
[7:13] * piti (~piti@82.246.190.142) has joined #ceph
[7:17] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[7:25] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[7:26] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[8:28] * Volture (~quassel@office.meganet.ru) Quit (Ping timeout: 480 seconds)
[8:47] * zhangjf_zz2 (~zjfhappy@222.128.1.105) has joined #ceph
[9:09] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[9:31] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[9:40] * \ask (~ask@oz.develooper.com) Quit (Remote host closed the connection)
[9:41] * \ask (~ask@oz.develooper.com) has joined #ceph
[9:47] * LeaChim (~LeaChim@2.216.167.255) has joined #ceph
[9:48] * yy (~michealyx@58.100.82.159) has joined #ceph
[9:53] * yy (~michealyx@58.100.82.159) has left #ceph
[9:55] * zhangjf_zz2 (~zjfhappy@222.128.1.105) Quit (Quit: 离开)
[9:55] * xdeller (~xdeller@91.218.144.129) Quit (Ping timeout: 480 seconds)
[10:09] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[10:12] * haomaiwang (~haomaiwan@li565-182.members.linode.com) Quit (Remote host closed the connection)
[10:13] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[10:14] * haomaiwang (~haomaiwan@li565-182.members.linode.com) has joined #ceph
[10:21] * haomaiwang (~haomaiwan@li565-182.members.linode.com) Quit (Remote host closed the connection)
[10:21] * haomaiwang (~haomaiwan@142.54.177.93) has joined #ceph
[10:23] * haomaiwa_ (~haomaiwan@li565-182.members.linode.com) has joined #ceph
[10:23] * haomaiwa_ (~haomaiwan@li565-182.members.linode.com) Quit (Remote host closed the connection)
[10:24] * haomaiwa_ (~haomaiwan@117.79.232.209) has joined #ceph
[10:28] * sig_wal1 (~adjkru@185.14.185.91) Quit (Ping timeout: 480 seconds)
[10:29] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[10:30] * haomaiwang (~haomaiwan@142.54.177.93) Quit (Read error: Operation timed out)
[10:32] * sig_wall (~adjkru@185.14.185.91) has joined #ceph
[10:33] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[10:38] * dpippenger (~riven@cpe-75-85-17-224.socal.res.rr.com) has joined #ceph
[10:50] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[11:31] * Machske (~Bram@d5152D87C.static.telenet.be) Quit ()
[11:34] * Machske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[11:57] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) Quit (Quit: Bye)
[11:57] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) has joined #ceph
[11:58] * julian_ (~julianwa@125.69.105.128) has joined #ceph
[12:05] * julian (~julianwa@125.69.105.128) Quit (Read error: Operation timed out)
[12:15] * dpippenger (~riven@cpe-75-85-17-224.socal.res.rr.com) Quit (Quit: Leaving.)
[12:41] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[12:43] * xdeller (~xdeller@91.218.144.129) has joined #ceph
[12:57] * agaran (~agaran@00017ab1.user.oftc.net) has joined #ceph
[13:03] * rtek_ (~sjaak@rxj.nl) Quit (Ping timeout: 480 seconds)
[13:10] * rtek (~sjaak@rxj.nl) has joined #ceph
[13:25] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[13:49] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90 [Firefox 22.0/20130618035212])
[13:50] * LeaChim (~LeaChim@2.216.167.255) Quit (Ping timeout: 480 seconds)
[13:51] * LeaChim (~LeaChim@2.216.167.255) has joined #ceph
[14:02] <agaran> is there a way to tell an osd to free space if it was taken out of the cluster? i.e. delete the data but not delete the osd itself?
[14:17] * diegows (~diegows@190.190.2.126) has joined #ceph
[14:17] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) Quit (Quit: Leaving)
[14:22] * DarkAce-Z (~BillyMays@50.107.55.36) has joined #ceph
[14:22] * agaran (~agaran@00017ab1.user.oftc.net) Quit (Remote host closed the connection)
[14:22] * agaran (~agaran@00017ab1.user.oftc.net) has joined #ceph
[14:25] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Read error: Operation timed out)
[14:25] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[14:32] * julian_ (~julianwa@125.69.105.128) Quit (Quit: afk)
[14:35] * john_barbee_ (~jbarbee@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Quit: ChatZilla 0.9.90 [Firefox 22.0/20130618035212])
[14:42] * mschiff_ (~mschiff@port-10537.pppoe.wtnet.de) Quit (Remote host closed the connection)
[14:50] * diegows (~diegows@190.190.2.126) has joined #ceph
[14:59] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[15:00] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[15:03] <agaran> if i set a pool maximum size limiting how much can be written there, why does mounting this pool (data, so a cephfs mount) show quite a lot more space than the pool size?
[15:13] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[15:37] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[15:39] <joao> agaran, don't think there is any way of telling an osd to clear its data; tbh, it seems like a poor design choice
[15:39] <joao> if you have taken out the osd from the cluster, and want the data gone, then you (the user) should do it
[15:40] * xdeller (~xdeller@91.218.144.129) Quit (Ping timeout: 480 seconds)
[15:40] <joao> the osd should not enable you to do it
[15:40] <joao> clear the disk, remkfs the osd
[15:40] <joao> do what you want
[15:46] <agaran> well, i observed that on every restart (with no data changes on the cluster, nothing mounted, active etc), the osd takes a bit more space
[15:46] <agaran> i wanted to tell the osd to clear all data blocks to figure out whether that space gets freed or not, and if not, to find where it was consumed
[15:47] <agaran> i can do that other ways, it's not a big problem, i just asked in case there is a way to do this
[15:48] <agaran> other things are more important anyway
[15:49] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[15:51] * xdeller (~xdeller@91.218.144.129) has joined #ceph
[16:05] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[16:08] * xdeller (~xdeller@91.218.144.129) Quit (Read error: Operation timed out)
[16:10] * gillesMo (~gillesMo@00012912.user.oftc.net) has joined #ceph
[16:12] * gillesMo (~gillesMo@00012912.user.oftc.net) Quit ()
[16:12] * gillesMo (~gillesMo@00012912.user.oftc.net) has joined #ceph
[16:16] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[16:20] * xdeller (~xdeller@91.218.144.129) has joined #ceph
[16:32] * gillesMo (~gillesMo@00012912.user.oftc.net) Quit (Quit: Konversation terminated!)
[16:46] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[16:49] * LeaChim (~LeaChim@2.216.167.255) Quit (Ping timeout: 480 seconds)
[16:51] * LeaChim (~LeaChim@2.216.167.255) has joined #ceph
[16:59] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[17:05] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[17:07] * DarkAce-Z is now known as DarkAceZ
[17:11] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:21] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[17:35] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[17:54] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[18:08] <agaran> is the only way to place an artificial limit on the osd store size to put it on a partition of a certain size?
[18:11] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[18:23] * BillK (~BillK-OFT@124-148-212-240.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[18:47] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[18:48] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[18:50] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit ()
[18:54] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[18:56] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[19:05] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[19:13] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[19:21] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[19:32] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[19:36] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[19:46] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[19:51] * s2r2 (uid322@id-322.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[19:56] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[19:57] * diegows (~diegows@190.190.2.126) has joined #ceph
[20:04] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[20:06] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[20:08] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[20:26] * miniyo (~miniyo@0001b53b.user.oftc.net) Quit (Quit: WeeChat 0.4.1)
[20:27] * miniyo (~miniyo@0001b53b.user.oftc.net) has joined #ceph
[20:28] * haomaiwang (~haomaiwan@notes4.com) has joined #ceph
[20:28] * haomaiwa_ (~haomaiwan@117.79.232.209) Quit (Read error: Connection reset by peer)
[20:29] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[20:43] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[20:44] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[20:48] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[20:49] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[20:57] * leseb1 (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[21:03] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[21:10] * leseb1 (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[21:14] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[21:33] * ChanServ sets mode +o scuttlemonkey
[21:33] * ChanServ sets mode +o joao
[21:47] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[21:48] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[22:08] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[22:11] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[22:12] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit ()
[22:13] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[22:13] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[22:28] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[22:30] * cce (~cce@50.56.54.167) Quit (Server closed connection)
[22:30] * cce (~cce@50.56.54.167) has joined #ceph
[22:30] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:30] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: No route to host)
[22:36] * terje_ (~joey@97-118-115-214.hlrn.qwest.net) Quit (Server closed connection)
[22:36] * terje (~joey@97-118-115-214.hlrn.qwest.net) has joined #ceph
[22:40] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[22:44] * jochen (~jochen@laevar.de) Quit (Server closed connection)
[22:44] * jochen (~jochen@laevar.de) has joined #ceph
[22:50] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:51] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[22:58] * jnq (~jon@0001b7cc.user.oftc.net) Quit (Server closed connection)
[22:58] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[22:59] * jnq (~jon@198.199.79.59) has joined #ceph
[23:08] * via (~via@95.170.88.43) Quit (Server closed connection)
[23:09] * via (~via@smtp2.matthewvia.info) has joined #ceph
[23:13] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:52] * vipr (~vipr@78-21-229-157.access.telenet.be) Quit (Remote host closed the connection)
[23:54] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.