#ceph IRC Log

IRC Log for 2012-01-24

Timestamps are in GMT/BST.

[0:04] <Tv|work> daemonik: on paper, yes
[0:04] <Tv|work> daemonik: then there's the part that says "budget 8GB of RAM for zfs if you don't want to reboot often, more if you actually run apps"
[0:05] <Tv|work> but really, the biggest issue is the damn licensing
[0:05] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[0:05] <daemonik> Tv|work: ZFS does aggressive caching, and needs that RAM for the deduplication tables and probably also the compression stuff. They made creating a subfilesystem/subvolume/whatever-it's-called as inexpensive as creating a new directory.
[0:05] <Tv|work> the only way zfs will be popular in the linux world is via things taking inspiration from it -- such as btrfs
[0:06] <Tv|work> oracle could change that by relicensing it, but i just don't see that happening
[0:06] <daemonik> Tv|work: FreeBSD worked around the licensing issue. btrfs is years off. ZFS is stable here and now and if I can have Ceph on two FreeBSD boxes I'll happily spend 32 GB of RAM on them.
[0:06] <Tv|work> FreeBSD ain't GPL.
[0:07] <Tv|work> the license is explicitly designed to prevent integrating with GPL code
[0:07] <daemonik> I've been using it a lot more lately (at home, because of ZFS; Linux softRAID doesn't play nice on crappy hardware) and its weak licensing shows.
[0:07] <Tv|work> because Sun designed it as a competitive edge over various Linux vendors, probably mostly Red Hat
[0:08] <daemonik> Yeup, and now Solaris is borderline irrelevant. Illumos isn't very useful to the people who aren't directly involved in it for now. Thanks Sun.
[0:17] <dwm__> Regarding software-RAID and rebuilding, mdadm has a mode where you can add a write-intent bitmap to a RAID set.
[0:18] <dwm__> This allows for just resynchronising those sections which were known to be potentially dirty after an unclean shutdown.
[0:18] <dwm__> Can kick write performance quite badly with all the seeks, however, unless you stick it on another device.
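    A minimal sketch of the write-intent bitmap setup dwm__ describes, assuming an existing array at /dev/md0 (device and file paths are illustrative, not from the log):

        # Add an internal write-intent bitmap so only dirty regions are
        # resynchronised after an unclean shutdown
        mdadm --grow /dev/md0 --bitmap=internal

        # Or keep the bitmap on a separate filesystem (not on the array itself)
        # to avoid the extra seeks on every write
        mdadm --grow /dev/md0 --bitmap=/var/lib/md0-bitmap

        # Remove the bitmap again if the write penalty is too high
        mdadm --grow /dev/md0 --bitmap=none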
[0:20] <daemonik> dwm__: There are other reasons we use ZFS. Linux's softraid can't grow. With ZFS I can casually add another mirror.
[0:20] <dwm__> daemonik: Uh, you can do that with mdadm, too.
[0:21] <daemonik> I don't know Ceph well enough to know, but perhaps it would make more sense to have many OSDs, each backed by its own ZFS mirror, rather than two large OSDs on two large zpools.
[0:21] <daemonik> mdadm can grow raid10?
[0:21] <dwm__> Sure.
[0:22] <dwm__> Hell, it can even up- and down-convert between a whole host of different RAID levels non-destructively.
[0:22] <daemonik> dwm__: Ah I looked this up and I see
[0:24] <dwm__> Ah, I stand corrected -- it doesn't seem to support RAID10 at present.
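    For reference, a sketch of the kind of mdadm reshape operations being discussed; device names are illustrative, and the exact set of supported conversions depends on the mdadm and kernel versions:

        # Grow an existing RAID5 onto an additional disk
        mdadm --add /dev/md0 /dev/sde1
        mdadm --grow /dev/md0 --raid-devices=5

        # Convert RAID5 -> RAID6 in place, using a freshly added spare
        # (the backup file must live outside the array being reshaped)
        mdadm --add /dev/md0 /dev/sdf1
        mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file=/root/md0-reshape.backup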
[0:27] <Tv|work> daemonik: yes, as in, you don't want a single PG to grow too big, and you probably don't want >>100 PGs per OSD either
[0:27] <daemonik> PG?
[0:27] <Tv|work> placement group
[0:27] <Tv|work> http://ceph.newdream.net/docs/latest/dev/placement-group/
[0:28] <dwm__> Hmm, that implies a practical upper bound on how much space you want a single OSD to serve.
[0:28] <Tv|work> yes; just have more OSDs
[0:28] <dwm__> Oh, sure. What do the numbers work out as for that bound?
[0:29] <Tv|work> i don't think anyone's figured out how big a PG can become, before it's too much
[0:29] <Tv|work> but a PG is the unit of recovery, and you don't want that to be too big, for best recovery speed
[0:29] <daemonik> Does this mean that Ceph would perform noticeably better with one OSD per ZFS two-disk mirror than with two large OSDs on two large zpools?
[0:29] <Tv|work> eventually, number of PGs will auto-tune based on the amount of data stored
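    A rough sketch of how that ~100-PGs-per-OSD rule of thumb translates into practice before any auto-tuning exists; the pool name and numbers are illustrative:

        # e.g. ~12 OSDs at roughly 100 PGs each -> pick a PG count around 1024
        ceph osd pool create testpool 1024

        # or set a default for newly created pools in ceph.conf:
        #   [global]
        #       osd pool default pg num = 1024
        #       osd pool default pgp num = 1024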
[0:30] <Tv|work> daemonik: i have only one answer to that: benchmark ;)
[0:31] <daemonik> Tv|work: If I were to set up such systems, what benchmarks should I use?
[0:31] <dwm__> daemonik: What applications do you care about?
[0:31] <Tv|work> daemonik: depends on the production workload you need to support.. sorry to be circumspect, but that's the reality
[0:32] <daemonik> Tv|work: Are there any benchmarks I could run that would be helpful? I'll post on the list.
[0:32] <Tv|work> daemonik: we want to provide more "on hardware X, workload Y performs at Z speed", but we're not ready for that quite yet
[0:32] <daemonik> Are there any outstanding issues that should deter someone from using Ceph in production?
[0:32] <Tv|work> daemonik: what would you use to measure the performance of your local zfs fs?
[0:33] <Tv|work> daemonik: rados, rbd and radosgw are pretty good; the distributed filesystem is not really production ready yet; pay attention to backing filesystem choice
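    For raw RADOS throughput, the bench mode built into the rados tool is one obvious starting point; for the filesystem or RBD layers, ordinary tools such as fio or bonnie++ against a mount are the usual candidates. A minimal sketch, assuming an existing pool named data:

        # 60-second write benchmark against the pool
        rados -p data bench 60 write

        # sequential read-back of the benchmark objects
        # (depending on the rados version, the write phase may need to be told
        # to leave its objects in place so this has something to read)
        rados -p data bench 60 seq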
[0:56] * adjohn is now known as Guest298
[0:56] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[0:56] * Guest298 (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Read error: No route to host)
[0:59] * adjohn is now known as Guest299
[0:59] * Guest299 (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Read error: Connection reset by peer)
[0:59] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[1:04] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[1:04] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Read error: No route to host)
[1:06] * The_Bishop (~bishop@e179006070.adsl.alicedsl.de) has joined #ceph
[1:08] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[1:46] * yoshi (~yoshi@p9224-ipngn1601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[1:46] * Tv|work (~Tv|work@aon.hq.newdream.net) has left #ceph
[2:10] * The_Bishop (~bishop@e179006070.adsl.alicedsl.de) Quit (Quit: Who the hell is this Peer? When I catch him, I'll reset his connection!)
[2:34] * adjohn is now known as Guest305
[2:34] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[2:40] * Guest305 (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Ping timeout: 480 seconds)
[2:51] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[2:56] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Quit: adjohn)
[3:02] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[3:15] * daemonik (~Adium@static-173-55-114-2.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[3:23] * vodka (~paper@179.Red-88-11-190.dynamicIP.rima-tde.net) Quit (Quit: Leaving)
[3:48] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[3:58] * chutzpah (~chutz@216.174.109.254) Quit (Quit: Leaving)
[4:03] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[4:50] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[5:11] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[5:55] * sage (~sage@76.89.180.250) has joined #ceph
[6:30] * sage (~sage@76.89.180.250) Quit (Ping timeout: 480 seconds)
[7:21] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[7:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:44] * Kioob (~kioob@luuna.daevel.fr) Quit (Quit: Leaving.)
[8:41] * weetabeex (~jecluis@89.181.150.162) has joined #ceph
[8:42] <weetabeex> hi
[8:58] * meyer (meyer@c64.org) has joined #ceph
[8:58] * meyer is now known as Meyer
[9:03] * mtk (~mtk@ool-44c35967.dyn.optonline.net) Quit (synthon.oftc.net graviton.oftc.net)
[9:03] * gohko (~gohko@natter.interq.or.jp) Quit (synthon.oftc.net graviton.oftc.net)
[9:03] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (synthon.oftc.net graviton.oftc.net)
[9:03] * sjust (~sam@aon.hq.newdream.net) Quit (synthon.oftc.net graviton.oftc.net)
[9:03] * gregaf1 (~Adium@aon.hq.newdream.net) Quit (synthon.oftc.net graviton.oftc.net)
[9:03] * yehudasa (~yehudasa@aon.hq.newdream.net) Quit (synthon.oftc.net graviton.oftc.net)
[9:03] * svenx_ (92744@diamant.ifi.uio.no) Quit (synthon.oftc.net graviton.oftc.net)
[9:04] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[9:21] * weetabeex (~jecluis@89.181.150.162) Quit (Quit: weetabeex)
[9:22] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[9:23] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[9:32] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:40] * malachhe1 (~malachhe@salhoob.inria.fr) has joined #ceph
[9:40] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[9:41] <malachhe1> hi
[9:43] <malachhe1> i get an error when starting ceph monitor
[9:43] <malachhe1> starting mon.0 rank 0 at 172.16.240.2:6789/0 mon_data /tmp/mon0 fsid 7e0b9410-7e6a-4390-9075-b5b2cc6568ea
[9:43] <malachhe1> accepter.bind unable to bind to 172.16.240.2:6789: Cannot assign requested address
[9:43] <malachhe1> failed: 'ssh itruc-1 /usr/bin/ceph-mon -i 0 -c /tmp/ceph.conf.23700 '
[9:44] <malachhe1> someone has an idea ?
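    That bind error usually means 172.16.240.2 is not actually configured on the host ceph-mon is being started on (itruc-1, via ssh), or the monitor is being launched on the wrong host. A quick check, plus the ceph.conf lines involved; addresses are copied from the log and the stanza layout is illustrative:

        # does the monitor host actually own that address?
        ssh itruc-1 ip addr show | grep 172.16.240.2

        # the relevant ceph.conf stanza looks roughly like:
        #   [mon.0]
        #       host = itruc-1
        #       mon addr = 172.16.240.2:6789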
[9:47] * weetabeex (~joao@193.136.122.17) has joined #ceph
[9:47] * weetabeex (~joao@193.136.122.17) Quit ()
[9:48] * weetabeex (~joao@193.136.122.17) has joined #ceph
[10:15] * gregaf1 (~Adium@aon.hq.newdream.net) has joined #ceph
[10:15] * mtk (~mtk@ool-44c35967.dyn.optonline.net) has joined #ceph
[10:15] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[10:15] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[10:15] * sjust (~sam@aon.hq.newdream.net) has joined #ceph
[10:15] * yehudasa (~yehudasa@aon.hq.newdream.net) has joined #ceph
[10:15] * svenx_ (92744@diamant.ifi.uio.no) has joined #ceph
[10:16] * yoshi (~yoshi@p9224-ipngn1601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[10:30] * malachhe1 (~malachhe@salhoob.inria.fr) has left #ceph
[11:12] * Kioob`Taff1 (~plug-oliv@local.plusdinfo.com) has joined #ceph
[11:41] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[11:42] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[12:20] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[12:20] * stass (stas@ssh.deglitch.com) has joined #ceph
[13:38] * gregaf1 (~Adium@aon.hq.newdream.net) Quit (Read error: Connection reset by peer)
[13:38] * gregaf (~Adium@aon.hq.newdream.net) has joined #ceph
[14:03] * The_Bishop (~bishop@cable-89-16-138-109.cust.telecolumbus.net) has joined #ceph
[14:05] * lollercaust (~paper@179.Red-88-11-190.dynamicIP.rima-tde.net) has joined #ceph
[14:06] * lx0 is now known as lxo
[15:51] * lollercaust (~paper@179.Red-88-11-190.dynamicIP.rima-tde.net) Quit (Quit: Leaving)
[15:59] * lollercaust (~paper@179.Red-88-11-190.dynamicIP.rima-tde.net) has joined #ceph
[16:48] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:53] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:18] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[17:19] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:45] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:03] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[18:34] * weetabeex (~joao@193.136.122.17) Quit (Ping timeout: 480 seconds)
[18:38] * lollercaust (~paper@179.Red-88-11-190.dynamicIP.rima-tde.net) Quit (Quit: Leaving)
[18:38] * jojy (~jvarghese@108.60.121.114) has joined #ceph
[18:53] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[19:02] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[19:08] * Kioob`Taff1 (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[19:09] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[19:10] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[19:11] * adjohn is now known as Guest372
[19:11] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[19:17] * Guest372 (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Ping timeout: 480 seconds)
[19:23] * fronlius (~fronlius@f054112216.adsl.alicedsl.de) has joined #ceph
[19:26] * weetabeex (~joao@89.181.150.162) has joined #ceph
[19:36] * ceph (~hylick@32.97.110.63) has joined #ceph
[19:37] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[19:40] <yehudasa> fred_: are you there?
[19:57] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Remote host closed the connection)
[19:57] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[20:12] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[20:14] * fronlius_ (~fronlius@testing78.jimdo-server.com) has joined #ceph
[20:17] * fronlius (~fronlius@f054112216.adsl.alicedsl.de) Quit (Read error: Connection reset by peer)
[20:17] * fronlius (~fronlius@f054112216.adsl.alicedsl.de) has joined #ceph
[20:23] * fronlius_ (~fronlius@testing78.jimdo-server.com) Quit (Read error: Operation timed out)
[20:44] <dwm__> Hmm, does XFS have any ceph-significant limits on xattr size?
[20:46] <gregaf> dwm_: I don't think so…it might max out at 96k or something but I can't imagine Ceph reaching that
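    One way to probe the per-attribute limit empirically on a scratch XFS mount, assuming the attr tools (setfattr/getfattr) are installed; the mount point and attribute name are illustrative:

        # build a 64 KiB ASCII value and try to store it as a user xattr
        touch /mnt/xfs/xattr-test
        val=$(head -c 65536 /dev/zero | tr '\0' 'a')
        setfattr -n user.test -v "$val" /mnt/xfs/xattr-test && echo "64 KiB value accepted"
        getfattr -n user.test --only-values /mnt/xfs/xattr-test | wc -c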
[20:49] * adjohn is now known as Guest379
[20:49] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[20:50] <dwm__> Hmm, http://marc.info/?l=ceph-devel&m=131942130322957&w=2 indicates that Sage at least _used_ to believe that XFS had no restriction. :)
[20:50] <nhm> dwm__: http://ceph.newdream.net/wiki/Backend_filesystem_requirements
[20:52] <dwm__> nhm: Hmm, that sentence beginning 'XFS' seems to be missing a 'not' in there.
[20:52] <nhm> dwm__: agreed. :)
[20:53] <dwm__> Hmm, given we've been happily using XFS in production for some years now, I suspect that -- unless btrfs stabilizes substantially soon -- I might move our testing cluster over to that.
[20:53] <dwm__> (Though I think Oracle have committed to switching to btrfs for their next release? In which case, I wish Chris Mason luck..)
[20:54] <nhm> dwm__: it'd be interesting to hear how it goes for you if you go the XFS route.
[20:55] <dwm__> Given my current workload, I probably won't have time to poke it that hard for a short while..
[20:55] * Guest379 (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Ping timeout: 480 seconds)
[21:19] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Quit: adjohn)
[21:19] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[21:51] * ceph (~hylick@32.97.110.63) Quit (Quit: Leaving.)
[21:52] * ceph (~hylick@32.97.110.65) has joined #ceph
[22:24] * fronlius_ (~fronlius@f054098205.adsl.alicedsl.de) has joined #ceph
[22:26] * adjohn is now known as Guest388
[22:26] * Guest388 (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Read error: Connection reset by peer)
[22:26] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[22:27] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Remote host closed the connection)
[22:27] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[22:29] * fronlius (~fronlius@f054112216.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[22:29] * fronlius_ is now known as fronlius
[22:32] * fronlius (~fronlius@f054098205.adsl.alicedsl.de) Quit (Quit: fronlius)
[22:50] * ceph (~hylick@32.97.110.65) Quit (Ping timeout: 480 seconds)
[22:53] * ceph (~hylick@32.97.110.63) has joined #ceph
[22:55] * adjohn is now known as Guest392
[22:55] * adjohn (~adjohn@rackspacesf.static.monkeybrains.net) has joined #ceph
[23:03] * Guest392 (~adjohn@rackspacesf.static.monkeybrains.net) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.