#ceph IRC Log


IRC Log for 2011-10-28

Timestamps are in GMT/BST.

[0:00] <nwatkins> gregaf: ping me whenever you want me to test something. thanks a lot for looking into this
[0:00] <gregaf> yeah, I will have something for you before I leave :)
[0:00] <nwatkins> was it just this "/" directory case that you never encountered?
[0:01] <gregaf> there are a bunch of issues with the function contract for readdir_r_cb
[0:01] <nwatkins> ic. that ventures into an unknown area for me :)
[0:01] <gregaf> which is used to grab directory entries and translate them from Ceph's format to the caller's format using a provided function and data blob
[0:02] <gregaf> so I can fix the way it works for Hadoop real fast but I want to check the other callers and see what they expect so it actually works in the future
[0:03] <nwatkins> Ahh, yeh that makes total sense. No big rush though--tons of stuff to be doing here.
[0:03] <gregaf> cool
[0:27] <darkfader> gregaf: do you know when the beta support contracts will go live?
[0:27] <gregaf> nope, but I think somebody does, which is an improvement over the last time I heard that question :)
[0:27] <darkfader> heheh
[0:28] <darkfader> I have gotten a new server, which has 4 compute modules that are each wired to 3(!) disks
[0:28] <darkfader> it can't be changed to something like 1x12
[0:28] <darkfader> and that means ceph would really be my best route
[0:28] <darkfader> if i want to have any kind of performance
[0:30] <darkfader> Support contracts for Ceph were announced at the 2011 CloudConnect conference and will see the light of day in the summer of 2011.
[0:31] <darkfader> s/summer/winter/
[0:31] <darkfader> :)
[0:31] <gregaf> yeah….
[0:31] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[0:31] <darkfader> someone should silently update that
[0:32] <gregaf> where is that text from?
[0:32] <darkfader> oh
[0:32] <darkfader> ceph.newdream.net -> support
[0:33] <stingray> sjust: all osds were restarted, I was upgrading from 0.35 to 0.36
[0:33] <gregaf> I don't know what you're talking about
[0:33] <gregaf> I see nothing like that there
[0:35] <darkfader> err
[0:35] <darkfader> it's in the last line
[0:36] <gregaf> you sure? :p
[0:36] <darkfader> omg
[0:36] <darkfader> it's past midnight
[0:36] <darkfader> i completely fell for it
[0:36] <darkfader> hehe
[0:37] * ognatortcele (~ognatortc@66.246.173.34) Quit (Ping timeout: 480 seconds)
[0:38] <darkfader> nite
[0:38] <gregaf> g'night!
[0:50] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[1:16] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[1:27] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Quit: Ex-Chat)
[1:28] <gregaf> nwatkins: new patch to apply on top of your old one
[1:28] <gregaf> http://pastebin.com/rCzNc2vY
[1:28] <nwatkins> Ahh, good timing. Just switched to irc desktop
[1:29] <gregaf> or if you prefer, I pushed a wip-getdir branch since I need to write a few tests as well, which has all the changes so far
[1:31] <nwatkins> that'd be better. i got fuzz warning
[1:31] <nwatkins> does your branch include the replication fix you made?
[1:41] <gregaf> yeah
[1:42] <gregaf> nwatkins: it's all of my hadoop changes for you
[1:46] <nwatkins> gregaf: i'm getting the same behavior
[1:47] <gregaf> …oh, duhdumb
[1:50] * fronlius1 (~Adium@f054105239.adsl.alicedsl.de) Quit (Quit: Leaving.)
[1:50] <gregaf> nwatkins: fetch and reset to origin
[1:51] <nwatkins> gregaf: do i need --hard on reset or not to update the working copy after the rebase?
[1:51] <gregaf> I blasted the head patch and replaced it, so I think you'll want --hard
[1:52] <gregaf> assuming you don't have your own uncommitted changes
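A minimal sketch of the update gregaf is asking for here, assuming the branch is wip-getdir on the origin remote and that local uncommitted changes can be thrown away (which is what --hard does):

    # fetch the rewritten branch and hard-reset the local checkout to it;
    # --hard is needed because the old HEAD patch was replaced upstream
    git fetch origin
    git checkout wip-getdir
    git reset --hard origin/wip-getdir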
[1:55] <nwatkins> gregaf: woot
[1:55] <gregaf> yay
[1:56] <nwatkins> Alright. I guess I'll take it to the next level now :)
[2:00] * conner (~conner@leo.tuc.noao.edu) Quit (Ping timeout: 480 seconds)
[2:02] <nwatkins> gregaf: ls is working on specific file paths, but not for directories (which always appear empty).
[2:07] <gregaf> nwatkins: hmm, so an attempt to look at a file gets the right info
[2:08] <gregaf> but looking at a directory says there are no contents?
[2:08] <nwatkins> gregaf: correct
[2:08] <gregaf> oh, I think I see it
[2:09] <gregaf> forgot to update the hadoop library for the contract change
[2:09] * conner (~conner@leo.tuc.noao.edu) has joined #ceph
[2:09] <gregaf> simple patch: http://pastebin.com/0ZHQxBFv
[2:10] <ajm> gregaf: was it you that helped me the other day fixing some files that were stuck in _temp or joshd?
[2:10] <gregaf> think you want joshd or sjust for that :)
[2:10] <ajm> hrm, ok
[2:11] <sjust> me, I thin
[2:11] <sjust> *think
[2:11] <ajm> can you show me how you calculated that hash? I did it again :D
[2:12] <sjust> ajm: one sec
[2:13] <nwatkins> gregaf: hmm... java is pegged at 100%
[2:14] <gregaf> nwatkins: probably should kill it
[2:14] <gregaf> let me look again
[2:14] <nwatkins> looks like it is sitting in pthread_join but i still don't have all the symbols available in gdb for some reason
[2:17] <sjust> ajm: https://github.com/athanatos/ceph/commit/41751e97cf36a47194ca5bfdd6d1c8badbc087d8
[2:17] <gregaf> nwatkins: I think I'm slowly rediscovering why the interface is so asinine ;)
[2:19] <nwatkins> gregaf: i take back my pthread_join comment (stupid gdb). it is stuck in that loop in getdir with the r < 0 patch
[2:20] <gregaf> yeah, actually I think that patch is wrong, my bad
[2:21] <ajm> sjust argv[1] being "10000ae7aa4.0000008f" ?
[2:21] <gregaf> nwatkins: yep, I suck at live editing, returned 0 instead of the length of the buffer
[2:22] <sjust> yeah
[2:24] <ajm> sjust: <3
[2:24] <ajm> you should s/%x/%X/ probably
[2:24] <ajm> i'd fork and just make a pull request, but I suspect its more work for you :)
[2:25] <sjust> well, I haven't actually put in the normal repository since it would need to be cleaned up quite a bit
[2:26] <ajm> o ok
[2:26] <ajm> oh interesting, i startup the osd, the file goes BACK to temp, then it complains/crashes
[2:27] <sjust> ah...journal replay...
[2:27] <gregaf> nwatkins: okay, pushed a new HEAD to wip-getdir again
[2:30] <nwatkins> still with that r < 0 patch?
[2:34] <gregaf> nwatkins: no, without it
[2:34] <nwatkins> gregaf: i'm still getting r = 0
[2:34] <nwatkins> oops
[2:34] <nwatkins> ok let me fix that
[2:34] <gregaf> that was just wrong
[2:34] <sjust> ajm: actually, shouldn't have crashed
[2:35] <sjust> the file ended up back in the __temp directory
[2:35] <sjust> ?
[2:35] <ajm> yes
[2:35] <sjust> is it still present in the other directory?
[2:36] <ajm> nope
[2:36] <sjust> oh, btrfs?
[2:36] <ajm> yes
[2:36] <sjust> ok
[2:36] <ajm> if you tell me this is a btrfs issue
[2:36] <sjust> nope
[2:36] <ajm> i'm going to fly to whomever wrote this and go reiserfs on them
[2:36] <nwatkins> gregaf: woot again. i think that's enough success for one day :)
[2:36] <ajm> (not a fan of btrfs at the moment)
[2:36] <sjust> I just forgot that we were cloning the most recent snap directory over current
[2:37] <gregaf> nwatkins: good to hear
[2:37] <gregaf> sorry that took so many tries
[2:37] <sjust> go into the most recent snap directory and do the same thing
[2:37] <ajm> oh, so i need to do this in snap not in current
[2:37] <sjust> ajm: it's not a pretty hack, but it should work
[2:37] <sjust> wait
[2:37] <sjust> no
[2:37] <sjust> that won't work
[2:37] <ajm> hrm, I don't have _temp in snap, should i move the current/_temp/foo to snap_foo/bar/
[2:37] <ajm> ok, I wait :)
[2:37] <nwatkins> gregaf: no problem. thanks a lot for hacking on this today. i'm gonna start tossing more at the system pretty soon. it's OK to keep filing these bug reports?
[2:38] <gregaf> nwatkins: I've got plenty else to do, but feel free to message or email me if you run into anything else :)
[2:38] <gregaf> yeah, please do!
[2:38] <nwatkins> gregaf: sounds good. i'm off for the night! thx again
[2:38] <gregaf> this *actually worked* once so I certainly want to get it back to at least that point again ;)
[2:38] <nwatkins> :)
[2:41] * nwatkins (~nwatkins@kyoto.soe.ucsc.edu) has left #ceph
[2:47] <sjust> ajm: ok, try moving all of the snap_* directories elsewhere (say, old_snaps/) leaving only current
[2:48] <sjust> ajm: then, move the file from __temp into the correct place with the correct name
[2:48] <sjust> then try restarting the osd
[2:48] <sjust> after removing __temp
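A sketch of the manual recovery sjust outlines above, using the osd path, collection, and object name that appear elsewhere in this log purely as placeholders; this is an ad-hoc hack against a stopped osd, not a supported procedure:

    cd /data/osd.10
    # move the snapshot directories out of the way so only current/ remains
    # (note: the destination name must not itself match snap_*)
    mkdir old_snaps && mv snap_* old_snaps/
    # put the stranded object into its pg collection under the correct name
    mv current/_temp/10000ae7aa4.0000008f__head_748F2FBD \
       current/0.3bd_head/DIR_D/DIR_B/DIR_F/DIR_2/DIR_F/10000ae7aa4.0000008f__head_748F2FBD
    # drop the temp directory, then restart the osd however it is normally started
    rm -rf current/_temp
    /etc/init.d/ceph start osd.10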
[2:49] <ajm> ok
[2:50] <ajm> sjust: you say __temp, i have _temp though, that's not an issue I hope
[2:51] <ajm> also not working
[2:51] <ajm> http://pastebin.com/wdFSTjV3
[2:54] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[2:55] <sjust> there aren't any /data/osd.10/snap_* left, right?
[2:55] <ajm> interesting
[2:55] <ajm> i did snap_foo_old
[2:55] <ajm> it picks that up, doesn't it
[2:55] <sjust> yeah, probably
[2:56] <sjust> I think anything like snap_*
[2:57] <sjust> any luck?
[2:58] <ajm> sec
[2:58] <ajm> interesting
[2:58] <ajm> current/_temp now has -rw-r--r-- 1 root root 4194304 Oct 6 15:57 10000ae7aa4.0000008f__head_748F2FBD
[2:59] <sjust> right, we need to figure out what pg colletion it was going into
[2:59] <sjust> *collection
[2:59] <sjust> do you have logs?
[2:59] <ajm> sec
[2:59] <ajm> http://pastebin.com/yH3mQsZq
[3:00] <ajm> i moved that into 0.3bd_head before
[3:00] <sjust> 0.3bd_head is the collection
[3:00] <sjust> yeah, that would be it
[3:00] <ajm> 360 mv /data/osd.10/current/_temp/10000ae7aa4.0000008f_head /data/osd.10/current/0.3bd_head/DIR_D/DIR_B/DIR_F/DIR_2/DIR_F/10000ae7aa4.0000008f__head_748F2FBD
[3:00] <sjust> then remove _temp
[3:00] <ajm> yep
[3:00] <sjust> that should do it
[3:00] <ajm> now it goes back into _temp
[3:00] <sjust> again?
[3:01] <ajm> i can try again
[3:01] <sjust> yeah
[3:01] <ajm> did
[3:01] <ajm> 371 mv /data/osd.10/current/_temp/10000ae7aa4.0000008f__head_748F2FBD /data/osd.10/current/0.3bd_head/DIR_D/DIR_B/DIR_F/DIR_2/DIR_F/10000ae7aa4.0000008f__head_748F2FBD
[3:01] <sjust> and the snapshot directories are gone?
[3:02] <ajm> and again
[3:02] <ajm> its back :)
[3:02] <ajm> i moved them to broken_snap_234234
[3:02] <sjust> could you post a few hundred lines of log?
[3:03] <sjust> ajm: actually, I need to run
[3:03] <sjust> ajm: we need to prevent it from replaying those journal entries
[3:03] <ajm> sure, i'll clear log and get the full log
[3:03] <ajm> np
[3:03] <sjust> ajm: I'll be back on in the morning, sorry for the trouble :)
[3:04] <ajm> np
[3:27] <ajm> point of interest: 264890/51040396 degraded (0.519%)
[3:27] <ajm> what are those called ?
[3:29] <joshd> degraded objects - those that aren't replicated as many times as they should be
[3:29] <joshd> it should be fixed by recovery
[3:32] <ajm> objects was the question, the degraded part I got :)
[4:11] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[4:48] * tawanda22 (~tawanda22@83TAAAPM8.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:48] * n0de (~ilyanabut@c-24-127-204-190.hsd1.fl.comcast.net) has joined #ceph
[4:49] * willenti (~willenti@28IAAAN24.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:58] * willenti (~willenti@28IAAAN24.tor-irc.dnsbl.oftc.net) has left #ceph
[4:58] * tawanda22 (~tawanda22@83TAAAPM8.tor-irc.dnsbl.oftc.net) has left #ceph
[5:08] * n0de (~ilyanabut@c-24-127-204-190.hsd1.fl.comcast.net) Quit (Quit: This computer has gone to sleep)
[5:32] * phil_ (~quassel@chello080109010223.16.14.vie.surfer.at) Quit (Remote host closed the connection)
[6:53] * DanielFriesen (~dantman@S0106001731dfdb56.vs.shawcable.net) has joined #ceph
[7:00] * Nadir_Seen_Fire (~dantman@S0106001731dfdb56.vs.shawcable.net) Quit (Ping timeout: 480 seconds)
[7:28] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[7:55] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:00] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[8:12] * adjohn is now known as Guest15020
[8:12] * Guest15020 (~adjohn@70-36-139-78.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[8:12] * adjohn (~adjohn@70-36-139-78.dsl.dynamic.sonic.net) has joined #ceph
[8:36] * Nadir_Seen_Fire (~dantman@S0106001731dfdb56.vs.shawcable.net) has joined #ceph
[8:43] * DanielFriesen (~dantman@S0106001731dfdb56.vs.shawcable.net) Quit (Ping timeout: 480 seconds)
[8:53] * yoshi (~yoshi@p9224-ipngn1601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[8:58] * DanielFriesen (~dantman@S0106001731dfdb56.vs.shawcable.net) has joined #ceph
[9:05] * Nadir_Seen_Fire (~dantman@S0106001731dfdb56.vs.shawcable.net) Quit (Ping timeout: 480 seconds)
[9:31] * fronlius (~Adium@f054105239.adsl.alicedsl.de) has joined #ceph
[9:34] * fronlius (~Adium@f054105239.adsl.alicedsl.de) Quit ()
[9:38] * adjohn (~adjohn@70-36-139-78.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[10:13] * fronlius (~Adium@testing78.jimdo-server.com) has joined #ceph
[10:42] * FoxMURDER (~fox@ip-89-176-11-254.net.upcbroadband.cz) has joined #ceph
[11:35] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Quit: Ex-Chat)
[11:40] * yoshi (~yoshi@p9224-ipngn1601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:56] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[11:57] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:39] * tserong (~tserong@58-6-101-93.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[12:48] * tserong (~tserong@58-6-129-110.dyn.iinet.net.au) has joined #ceph
[13:20] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[14:33] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[15:31] * phil_ (~quassel@chello080109010223.16.14.vie.surfer.at) has joined #ceph
[15:32] * phil_ (~quassel@chello080109010223.16.14.vie.surfer.at) Quit (Remote host closed the connection)
[15:53] * wido (~wido@rockbox.widodh.nl) Quit (Remote host closed the connection)
[15:53] * wido (~wido@rockbox.widodh.nl) has joined #ceph
[16:05] * gregorg (~Greg@78.155.152.6) has joined #ceph
[16:08] * fronlius (~Adium@testing78.jimdo-server.com) Quit (Quit: Leaving.)
[16:14] * ognatortcele (~ognatortc@66.246.173.34) has joined #ceph
[16:20] * verwilst (~verwilst@dD576F053.access.telenet.be) has joined #ceph
[16:39] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:49] * ognatortcele (~ognatortc@66.246.173.34) Quit (Quit: ognatortcele)
[16:52] * adjohn (~adjohn@70-36-139-78.dsl.dynamic.sonic.net) has joined #ceph
[16:53] * fronlius (~Adium@testing78.jimdo-server.com) has joined #ceph
[17:36] * Iribaar (~Iribaar@200.111.172.138) Quit (Ping timeout: 480 seconds)
[17:39] * Iribaar (~Iribaar@200.111.172.138) has joined #ceph
[17:45] * adjohn (~adjohn@70-36-139-78.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[17:45] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:51] * in__ (~n0de@64.111.193.166) Quit (Quit: This computer has gone to sleep)
[18:39] * cclien (~cclien@ec2-175-41-146-71.ap-southeast-1.compute.amazonaws.com) Quit (Ping timeout: 480 seconds)
[18:39] * cclien (~cclien@ec2-175-41-146-71.ap-southeast-1.compute.amazonaws.com) has joined #ceph
[18:49] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[18:54] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[19:00] * adjohn (~adjohn@50.0.103.34) has joined #ceph
[19:05] * Iribaar (~Iribaar@200.111.172.138) Quit (Ping timeout: 480 seconds)
[19:09] * alexxy (~alexxy@79.173.81.171) Quit (Remote host closed the connection)
[19:13] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:14] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[19:15] * Iribaar (~Iribaar@200.111.172.138) has joined #ceph
[19:19] * verwilst (~verwilst@dD576F053.access.telenet.be) Quit (Quit: Ex-Chat)
[19:46] * fronlius (~Adium@testing78.jimdo-server.com) Quit (Quit: Leaving.)
[19:46] * grape (~grape@c-76-17-80-143.hsd1.ga.comcast.net) has joined #ceph
[19:53] <grape> I have read a few things about Ceph and am exploring it as an alternative to GlusterFS to store both virtual machine images and regular files. Can anyone give me some sort of idea of the current Ceph stability and suitability for my use case?
[19:55] <gregaf> grape: for VM images you'd presumably be using rbd (the rados block device), which joshd can talk more about but is certainly stable for basic usage
[19:56] <gregaf> regular file storage stability depends on the usage patterns; it does fine for things like write-once read sometimes, but it's a much larger system that needs more testing for more active scenarios and the cooler features
[19:57] <grape> My primary concern is for block storage, so that is certainly good news
[19:58] <grape> how is VM image performance by Ceph?
[19:58] <grape> vague question, I know :-)
[19:59] <grape> I suppose a better way to put it would be does it make sense to use it for block storage from a performance perspective
[19:59] <gregaf> it varies a lot depending on which interfaces and versions you're using, plus of course the workload
[20:01] <joshd> grape: you might be interested in these benchmarks: http://learnitwithme.com/?p=303
[20:01] * Hugh (~hughmacdo@soho-94-143-249-50.sohonet.co.uk) Quit (Quit: Ex-Chat)
[20:02] <gregaf> I've heard one or two stories about people accidentally receiving an RBD-backed VM and not knowing that it was different from their production local-disk machines
[20:02] <gregaf> but that's countered by other people running dd and getting 2MB/s writes :/
[20:02] <joshd> they're old, so they didn't include the more recent changes that improve write performance
[20:02] <grape> ok thanks
[20:03] <joshd> if you're using rbd with qemu, you can get better write performance from having a writeback window
[20:03] <joshd> we haven't implemented that for the kernel rbd module yet though
[20:04] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Quit: Ex-Chat)
[20:04] <grape> yeah, kvm/qemu is what we are working with
[20:06] <joshd> the only caveat with that is that you have to build your own qemu, since the patch that makes the writeback window safe hasn't been included in a qemu release yet
[20:06] <joshd> I think someone had a package that included it on top of 0.15.1
[20:07] <grape> that is reasonable
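A hedged example of the setup joshd describes: with a qemu built to include that patch, the rbd driver of this era took a writeback window (in bytes) in the drive spec. The pool/image name and window size are placeholders, and the option spelling should be checked against the qemu rbd driver actually built:

    # boot a KVM guest from an rbd image, allowing ~8 MB of in-flight writeback
    qemu-system-x86_64 -enable-kvm -m 1024 \
        -drive file=rbd:rbd/myvm:rbd_writeback_window=8000000,if=virtio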
[20:09] <joshd> in case you do have a problem, you can take snapshots of the rbd images and export them for backup
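And a sketch of that snapshot-and-export backup; image and snapshot names are placeholders, and early rbd releases varied in CLI details, so check rbd help on the version in use:

    # freeze a point-in-time view of the image, then dump it to a flat file
    rbd snap create --snap=backup-20111028 myvm
    rbd export --snap=backup-20111028 myvm /backups/myvm-20111028.img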
[20:09] <grape> how much ram does each osd daemon generally consume
[20:10] <grape> or is that even an issue
[20:13] <joshd> it can be an issue during recovery (which we're working to fix), but during normal usage it's not too bad - a couple hundred megs
[20:13] <grape> oh nice
[20:13] <joshd> also note that more pools increases memory usage
[20:13] <gregaf> grape: in normal usage it's maybe 200MB? usually less, but sometimes more (during recovery, like joshd said)
[20:14] <gregaf> the pools aren't actually a problem as long as you're using them sanely
[20:15] <grape> I saw something about btrfs support, and I haven't played with that much. How is that going with Ceph?
[20:16] <gregaf> it's the default recommended file system backend for the OSDs
[20:16] <grape> ooh :-)
[20:17] <gregaf> there've been some issues lately with churn in 3.0 and 3.1 but it's generally pretty good
[20:17] <gregaf> you can also run on other filesystems with minimal issues; I don't think rbd should expose any of them (joshd?)
[20:18] <joshd> gregaf: snapshots cause clones
[20:18] <gregaf> yeah, but there's not the rapid deletion of them we get with rgw, so isn't it just handled properly, or did we not fix that yet?
[20:19] <joshd> I don't think we fixed the non-idempotent replay yet, did we?
[20:19] <gregaf> oh, maybe not :/
[20:19] <joshd> nope: http://tracker.newdream.net/issues/213
[20:20] <gregaf> anyway, an obscure bug if you have to restart your OSDs on a non-btrfs filesystem, which is unlikely to cause you trouble
[20:20] <gregaf> (interpret that with whatever amount of humor or warning you like :P)
[20:21] <psomas> about the pools, is there any point in using different pools for rbd images?
[20:21] <grape> This looks very promising. So glad to have an alternative to gluster/hekafs.
[20:22] <grape> I can interpret with humor :-) No problem there.
[20:23] <gregaf> psomas: you have to configure replication and data placement settings on the pool level
[20:24] <gregaf> so probably not, but depending on the range of your VM requirements it can be useful
[20:24] <psomas> right
[20:24] <grape> we have pretty simple reqs. just need them to stay online :-)
[20:25] <joshd> also authentication - if you wanted to use different rados users for different sets of rbd images (although this would be better handled by your vm management software)
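A sketch of how those per-pool settings are typically applied from the command line; the pool name is made up and the exact subcommands depend on the release in use, so treat this as illustrative rather than exact:

    # create a dedicated pool for VM images and raise its replication level
    rados mkpool vm-images
    ceph osd pool set vm-images size 3
    # placement is governed by which CRUSH rule the pool points at, which can
    # likewise be changed per pool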
[20:25] * fronlius (~Adium@f054105239.adsl.alicedsl.de) has joined #ceph
[20:26] <grape> I should dig into the docs more before asking, but are all of the network connections managed internally?
[20:27] <grape> assuming that 1GbE is going to become an issue sooner than later
[20:27] <joshd> not sure what you mean by managed internally
[20:27] <grape> should there be 1 nic per daemon, etc
[20:28] <joshd> oh, there's no need for 1 nic per daemon - they bind to different ports
[20:28] <grape> how does ceph allocate the network resources
[20:29] <joshd> you can configure a separate internal address for osds, that they use for osd to osd traffic
[20:30] <grape> ok great. I wanted to keep the storage network on its own switch/nics.
[20:30] <joshd> ideally on osds you'd have a nic for this cluster traffic, and a different one for talking to clients
[20:30] <grape> yeah exactly
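A hypothetical ceph.conf fragment for the split joshd describes, giving each osd a client-facing address plus a separate cluster address for osd-to-osd traffic (addresses are placeholders; option names per the osd network settings of this era):

    [osd.0]
        public addr  = 192.168.1.10    ; NIC clients connect to
        cluster addr = 10.0.0.10       ; NIC reserved for replication and recovery traffic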
[20:33] <grape> awesome! I'm looking forward to this! You guys have been really helpful. Thanks so much!
[20:34] <joshd> you're welcome!
[20:35] <grape> the community attitude is the key to good software
[20:39] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[20:41] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Read error: Connection reset by peer)
[20:47] * adjohn (~adjohn@50.0.103.34) Quit (Quit: adjohn)
[20:52] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[20:53] * efoster (~efoster@76.227.70.210) Quit (Quit: Leaving)
[20:54] <grape> is the 0.34-1 ubuntu(11.10) package a decent place to start playing?
[20:55] <joshd> there was an on-disk format change in 0.35, so it's better to start with the latest release (0.37)
[20:58] <grape> gotcha. i forgot I had read about that. Thanks.
[21:15] * adjohn (~adjohn@50.0.103.34) has joined #ceph
[21:16] * WesleyS (~WesleyS@12.248.40.138) has joined #ceph
[21:19] * phil_ (~quassel@chello080109010223.16.14.vie.surfer.at) has joined #ceph
[21:22] * cp (~cp@206.15.24.21) has joined #ceph
[21:22] * WesleyS (~WesleyS@12.248.40.138) has left #ceph
[21:47] * fronlius1 (~Adium@f054113228.adsl.alicedsl.de) has joined #ceph
[21:49] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[21:51] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[21:53] * fronlius (~Adium@f054105239.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[21:54] * nwatkins (~nwatkins@kyoto.soe.ucsc.edu) has joined #ceph
[22:33] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:43] * phil_ (~quassel@chello080109010223.16.14.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[22:44] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:46] * MK_FG (~MK_FG@188.226.51.71) Quit (Remote host closed the connection)
[22:47] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[22:48] * phil_ (~quassel@chello080109010223.16.14.vie.surfer.at) has joined #ceph
[22:52] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[23:04] * fronlius1 (~Adium@f054113228.adsl.alicedsl.de) Quit (Quit: Leaving.)
[23:12] * verwilst (~verwilst@dD576F053.access.telenet.be) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.