#ceph IRC Log

IRC Log for 2012-07-09

Timestamps are in GMT/BST.

[0:06] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:09] * loicd (~loic@magenta.dachary.org) has joined #ceph
[0:19] * LarsFronius (~LarsFroni@95-91-243-249-dynip.superkabel.de) Quit (Quit: LarsFronius)
[0:38] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[0:54] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[1:44] * lofejndif (~lsqavnbok@9KCAAGMFO.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[3:30] * renzhi (~renzhi@69.163.36.54) has joined #ceph
[4:05] <renzhi> Hi, what kind of management software do you guys use for managing your list of (potentially very large) disks in your ceph cluster?
[4:06] <renzhi> For example, adding/removing osd, changing crush map, etc.
[4:07] <renzhi> Let's say you have to manage 2000 OSDs; just thinking about the ceph.conf scares me
[4:19] <lurbs> renzhi: I haven't done it myself, but it looks like people have been using chef: http://ceph.com/docs/master/config-cluster/chef/
[4:20] <lurbs> We're a puppet shop, so I'd end up using an equivalent sort of thing.
[4:25] <renzhi> ok, need to look at that
[4:25] <renzhi> what are you using then?
[4:26] <lurbs> Nothing yet. Only have a toy cluster at the moment.
[4:29] <renzhi> OK :)
[4:29] <renzhi> we have been doing testing, but are preparing to go live.
[4:29] <renzhi> But thinking about managing ceph.conf and the crush map manually is scary enough for me
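
For a sense of scale: in this era each OSD typically gets its own stanza in ceph.conf, so a 2000-OSD cluster means thousands of near-identical sections. A minimal sketch of generating them with a shell loop instead of hand-editing; the hostnames, paths, and 12-OSDs-per-host layout are all hypothetical:

    # Sketch only: emit per-OSD stanzas rather than typing 2000 by hand.
    # Hostnames, data paths, and OSDs-per-host count are made up.
    for i in $(seq 0 1999); do
        host="storage$(( i / 12 ))"   # assume 12 OSDs per host
        echo "[osd.$i]"
        echo "    host = $host"
        echo "    osd data = /var/lib/ceph/osd/ceph-$i"
        echo ""
    done >> ceph.conf

Configuration management tools like Chef or Puppet do essentially this, but driven from a node inventory instead of a bare loop.
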
[7:44] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[8:16] * jtang (~jtang@31.200.133.203) has joined #ceph
[9:02] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[9:14] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[9:20] * jtang1 (~jtang@92.251.198.132.threembb.ie) has joined #ceph
[9:21] * LarsFronius (~LarsFroni@95-91-243-249-dynip.superkabel.de) has joined #ceph
[9:27] * loicd (~loic@83.167.43.235) has joined #ceph
[9:27] * jtang (~jtang@31.200.133.203) Quit (Ping timeout: 480 seconds)
[9:35] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:36] * LarsFronius (~LarsFroni@95-91-243-249-dynip.superkabel.de) Quit (Quit: LarsFronius)
[9:53] * jtang1 (~jtang@92.251.198.132.threembb.ie) Quit (Ping timeout: 480 seconds)
[9:56] * _benoit_ (~benoit@paradis.irqsave.net) has joined #ceph
[10:02] * Dr_O_ (~owen@host-78-145-29-241.as13285.net) Quit (Remote host closed the connection)
[10:04] * Dr_O (~owen@host-78-145-29-241.as13285.net) has joined #ceph
[10:05] <_benoit_> Hi
[10:06] * Dr_O is now known as Dr_O_
[10:07] <_benoit_> Is combining qemu + rbd + aio=native a good practice?
[10:11] <Fruit> I think qemu can talk with ceph directly though
[10:12] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:16] * loicd (~loic@83.167.43.235) Quit (Quit: Leaving.)
[10:23] <_benoit_> Fruit: I am thinking about using the block/rbd.c qemu module
[10:23] <_benoit_> Fruit: I am wondering if combining it with aio=native on the qemu command line is ok
[10:24] <Fruit> the aio setting only has meaning when qemu uses files or devices of the operating system
[10:26] <_benoit_> ok
[10:26] <_benoit_> Thanks for the answer
[10:28] <Fruit> hrm
[10:29] <Fruit> I get the idea that block/rbd.c does use the OS devices
[10:29] <Fruit> I was kinda hoping that it'd communicate over the network itself
[10:29] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (Read error: Connection reset by peer)
[10:29] <Fruit> oh wait, it just uses librados
[10:32] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:33] <_benoit_> yes, it uses librados
[10:41] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[10:41] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:47] <_benoit_> Fruit: you are right, the Linux aio implementation is known to work only with O_DIRECT. That's a bad candidate for librbd :(
[10:48] <Fruit> well as I see it, the linux aio implementation isn't used, as it is only relevant when using files and kernel block devices
[10:48] <Fruit> and in this case you're using neither
[10:48] <_benoit_> yes
[10:50] * loicd (~loic@90.84.144.11) has joined #ceph
[10:51] <Fruit> linux-aio is an optimization for file access. you're not doing file access. so no worries. :)
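
To make Fruit's point concrete, a hedged sketch of a qemu invocation using the rbd block driver; the pool and image names are placeholders. Because block/rbd.c talks to the cluster through librados, there is no host file or kernel block device for aio=native to act on, so the flag is simply omitted:

    # qemu with the built-in rbd driver (librados); no aio= setting needed.
    # "rbd" pool and "vm1" image are hypothetical names.
    qemu-system-x86_64 \
        -m 1024 \
        -drive file=rbd:rbd/vm1,format=raw,if=virtio,cache=none
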
[11:06] * jtang (~jtang@cpat001.wlan.net.ed.ac.uk) has joined #ceph
[11:12] * jtang (~jtang@cpat001.wlan.net.ed.ac.uk) Quit (Quit: Leaving.)
[11:24] * loicd (~loic@90.84.144.11) Quit (Ping timeout: 480 seconds)
[11:33] * loicd (~loic@90.84.144.119) has joined #ceph
[11:36] * Dr_O__ is now known as Dr_O
[11:40] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:42] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[11:47] * jtang (~jtang@cpat001.wlan.net.ed.ac.uk) has joined #ceph
[11:51] * loicd (~loic@90.84.144.119) Quit (Ping timeout: 480 seconds)
[11:51] * loicd1 (~loic@90.84.144.90) has joined #ceph
[12:00] * jtang (~jtang@cpat001.wlan.net.ed.ac.uk) Quit (Quit: Leaving.)
[12:18] * jtang (~jtang@cpat001.wlan.net.ed.ac.uk) has joined #ceph
[12:22] * loicd1 (~loic@90.84.144.90) Quit (Ping timeout: 480 seconds)
[12:23] * jtang (~jtang@cpat001.wlan.net.ed.ac.uk) Quit ()
[12:37] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[12:45] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[12:45] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[12:50] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[12:51] * loicd (~loic@90.84.144.120) has joined #ceph
[12:57] * loicd1 (~loic@90.84.144.32) has joined #ceph
[12:59] * jtang (~jtang@cpat001.wlan.net.ed.ac.uk) has joined #ceph
[12:59] * loicd (~loic@90.84.144.120) Quit (Ping timeout: 480 seconds)
[13:02] * jtang (~jtang@cpat001.wlan.net.ed.ac.uk) Quit ()
[13:04] * jtang (~jtang@cpat001.wlan.net.ed.ac.uk) has joined #ceph
[13:05] * jtang (~jtang@cpat001.wlan.net.ed.ac.uk) has left #ceph
[13:09] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[13:16] * widodh_ (~widodh@minotaur.apache.org) has joined #ceph
[13:17] * widodh (~widodh@minotaur.apache.org) Quit (Ping timeout: 480 seconds)
[13:21] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[13:25] * loicd1 (~loic@90.84.144.32) Quit (Ping timeout: 480 seconds)
[13:42] * loicd (~loic@90.84.144.88) has joined #ceph
[13:52] * loicd1 (~loic@90.84.144.91) has joined #ceph
[13:52] * loicd (~loic@90.84.144.88) Quit (Ping timeout: 480 seconds)
[14:03] * loicd1 (~loic@90.84.144.91) Quit (Ping timeout: 480 seconds)
[14:05] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) Quit (Ping timeout: 480 seconds)
[14:11] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[14:14] * johnl (~johnl@2a02:1348:14c:1720:6d1c:ccf:522a:ba70) Quit (Ping timeout: 480 seconds)
[14:21] <_benoit_> Fruit: ok thanks again for the responses :)
[14:44] * nhmlap (~Adium@199.106.165.33) has joined #ceph
[14:45] <nhmlap> heh, crazy. Internet on a plane.
[14:49] <liiwi> don't let the snakes in :)
[14:51] <nhmlap> lol
[15:07] <joao> nhmlap, is it worth it?
[15:08] <nhmlap> Sort of. remote ssh is pretty laggy. It's doable, but not particularly pleasant.
[15:09] <darkfader> nhmlap: try with -C, i do that when on the train
[15:09] <darkfader> the compression generally raises the latency, but when it's high it doesn't matter any more
[15:09] <darkfader> actually it gets faster since the packets become smaller
[15:09] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[15:10] <Fruit> there's a program called mosh that helps with high-latency connections
[15:10] <nhmlap> darkfader: seems like it might be marginally better
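
A hedged sketch of the -C tip; the hostname is a placeholder. Compression can be turned on per session or made persistent in ~/.ssh/config:

    # One-off: compress the session (pays off on high-latency, low-bandwidth links)
    ssh -C user@devbox

    # Persistent: add an entry to ~/.ssh/config for that host
    printf 'Host devbox\n    Compression yes\n' >> ~/.ssh/config
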
[15:13] <nhmlap> So I'm looking at network statistics during various tests on OSD nodes and per-second incoming packet size is typically around 12KB while outgoing packet size is typically around 1.2KB (this is with 4MB rados bench writes). that seems odd. I'm sure there's a bunch of pings and random other stuff happening that is dragging the numbers down, but that seems surprising.
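
A sketch of how numbers like those can be reproduced; the pool name, duration, and eth0 interface are arbitrary assumptions. Average packet size falls out of the byte and packet counter deltas in /proc/net/dev:

    # From a client: drive 4MB writes at the cluster (-b is in bytes)
    rados -p data bench 60 write -b 4194304

    # On an OSD node: sample interface counters one second apart;
    # fields after "eth0:" are rx_bytes rx_packets ... tx_bytes tx_packets
    awk '/eth0/ {print $2, $3, $10, $11}' /proc/net/dev
    sleep 1
    awk '/eth0/ {print $2, $3, $10, $11}' /proc/net/dev
    # avg packet size per direction = delta(bytes) / delta(packets)
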
[15:15] * loicd (~loic@2001:67c:28dc:850:1d6e:df2f:f13e:5e62) has joined #ceph
[15:17] * loicd (~loic@2001:67c:28dc:850:1d6e:df2f:f13e:5e62) Quit ()
[15:25] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[16:12] * The_Bishop (~bishop@2a01:198:2ee:0:6c68:520f:8ff2:30d6) Quit (Quit: Who the hell is this peer? If I catch him I'll reset his connection!)
[16:15] * Ryan_Lane (~Adium@c-98-210-205-93.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[16:32] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[16:34] * fghaas (~florian@91.119.194.98) has joined #ceph
[16:57] * Dr_O (~owen@heppc028.ph.qmul.ac.uk) Quit (Remote host closed the connection)
[17:07] * adjohn (~adjohn@50-0-133-101.dsl.static.sonic.net) has joined #ceph
[17:17] * deepsa (~deepsa@122.172.0.114) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[17:18] * deepsa (~deepsa@122.172.0.114) has joined #ceph
[17:18] * edwardw`away (~edward@ec2-50-19-100-56.compute-1.amazonaws.com) Quit (Quit: Coyote finally caught me)
[17:25] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:26] <dspano> I've got a stupid question. I'm used to seeing my mirrored data as half the actual amount. I have two OSDs with 1.5T each, and ceph -w is showing 3T. Is there something wrong with my config, or is this just a convention I'm not used to?
[17:27] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:29] <nhmlap> alright, see you guys in about 90mins!
[17:32] * nhmlap (~Adium@199.106.165.33) Quit (Quit: Leaving.)
[17:32] <fghaas> dspano: that's quite expected. Recall that the replica set size is a configurable option per pool; you may very well have more than 2 replicas, or no redundancy at all for that matter.
[17:33] <fghaas> (at least if I understand you correctly, that is)
[17:33] <gregaf> so, a convention that you're not used to :)
[17:33] <dspano> fghaas: My replication level is set to two at the moment.
[17:33] <dspano> gregaf: I thought so.
[17:33] <fghaas> yeah, but ceph -w doesn't take that into account, and you could change it for, say, your "data" pool to, say, 5 at any time
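
The arithmetic, for the record: ceph -w reports raw capacity summed over all OSDs, so two 1.5T OSDs show as 3T regardless of replication; with a pool's size at 2, usable space is roughly 3T / 2 = 1.5T. A hedged sketch of inspecting and changing the replica count, using the stock "data" pool; command spellings are from memory of that era:

    # Raw: 2 OSDs x 1.5T = 3T shown by ceph -w.
    # Usable at size=2: about 3T / 2 = 1.5T.
    ceph osd pool get data size    # current replica count
    ceph osd pool set data size 2  # change it, per pool, at any time
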
[17:35] <fghaas> gregaf: I did file those bugs you asked for over the weekend; sorry about the delay
[17:35] <gregaf> I saw, thanks
[17:36] <gregaf> "Principle of Least Astonishment"? I've always heard "Least Surprise" before
[17:36] <fghaas> also, there's something fishy about the default keyring locations in 0.48: strangely, if you have no ceph.conf, then no clients find their keyring in /etc/ceph/keyring. if you do have a ceph.conf, even though it doesn't define a keyring location, it works just fine. is that expected?
[17:36] <gregaf> google says you came out of FreeBSD :p
[17:37] <fghaas> no, I used to use "Least Surprise", then found out it's wikipedia page is a redirect to "Least Astonishment", and wikipedia is always right???
[17:37] <fghaas> s/it's/its/
[17:37] <gregaf> seriously?
[17:37] <gregaf> booooo
[17:38] <fghaas> http://en.wikipedia.org/wiki/Principle_of_least_surprise
[17:38] <gregaf> somebody taught me wrong
[17:38] <gregaf> but I really think surprise is a better word there :/
[17:39] <gregaf> anyway
[17:39] <fghaas> I think that's what Galilei, Copernicus and Kepler said too, so you're in good company
[17:39] <gregaf> we recently (for some value of recently) made stuff work a lot better when you don't have a config file
[17:40] <gregaf> but it wouldn't surprise me if there are some strange artifacts of that, like not looking up keyring files
[17:40] <gregaf> ...actually, yes, I'm almost certain that's one of them
[17:40] <gregaf> the expectation is that if you aren't providing a config file you are specifying everything necessary for the command to complete
[17:40] <fghaas> while we're at it, could we please get ceph-conftool to return a default if no value is set in the config?
[17:40] <gregaf> but apparently that's violating POLA
[17:41] <gregaf> ?
[17:42] * cattelan_away is now known as cattelan
[17:42] <fghaas> huh? "you are specifying everything necessary for the command to complete"? how can I set a keyring location with a ceph command?
[17:43] <fghaas> the "?" was about my ceph-conftool suggestion?
[17:43] <gregaf> yeah
[17:43] <gregaf> not sure what you mean there
[17:44] <fghaas> well is there currently a way to figure out the default for a config option, without consulting src/common/config_opts.h?
[17:44] <gregaf> oh, I see what you mean
[17:44] <fghaas> would be helpful, wouldn't it? :)
[17:45] <fghaas> right now, ceph-conftool returns an empty value if you ask for a config value that's not set in ceph.conf
[17:45] <fghaas> would be nice to add the ability to retrieve a default value instead
[17:45] <fghaas> s/config value/config option/
[17:45] <gregaf> yeah ...make a feature request ;)
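
The behavior being complained about, sketched; the option name is arbitrary and the flag spelling follows the conf tool of that era, from memory:

    # Look up an option that ceph.conf does not set:
    ceph-conftool -c /etc/ceph/ceph.conf --lookup "osd journal size"
    # ...prints an empty value. The compiled-in default exists only in
    # src/common/config_opts.h; returning it here is the feature request.
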
[17:46] <gregaf> you can set a keyring location with -k, if that's what you were asking
[17:46] <dspano> fghaas: It helps to have all your mon.x, mds.x and osd.x keyrings set up. Lol. Now it's replicating.
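
A hedged sketch of that keyring setup, plus gregaf's -k override from above; names and paths follow common conventions of the time rather than anything this log specifies:

    # Create a keyring and key for each daemon (osd.0 shown as an example):
    ceph-authtool --create-keyring /etc/ceph/keyring.osd.0 \
        --gen-key -n osd.0

    # Point a client at an explicit keyring with -k:
    ceph -k /etc/ceph/keyring -n client.admin health
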
[17:46] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[17:47] <fghaas> dspano: that did throw a few people off guard, myself included
[17:47] <fghaas> another POLA issue there, gregaf :)
[17:47] <gregaf> keyrings?
[17:48] <gregaf> yeah...
[17:49] <dspano> So far this works so much smoother than gfs2 on drbd active/active.
[17:50] <fghaas> dspano: oh, amen
[17:50] <dspano> fghaas: Lol.
[17:51] <dspano> fghaas: Does your check my cluster deal include Openstack on ceph? I may take advantage of that once I have this cluster rebuilt clean.
[17:51] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[17:51] <fghaas> yep it does
[17:52] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[17:56] <joao> sjust, around?
[17:58] <fghaas> gregaf: there you go
[17:58] <fghaas> http://tracker.newdream.net/issues/2755
[18:04] * mgalkiewicz (~mgalkiewi@toya.hederanetworks.net) has joined #ceph
[18:07] * deepsa (~deepsa@122.172.0.114) Quit (Ping timeout: 480 seconds)
[18:07] * deepsa (~deepsa@122.172.0.114) has joined #ceph
[18:07] <gregaf> joao: he's not in yet today
[18:08] <joao> okay, thanks
[18:08] <joao> need to run something about the LevelDBStore through him before committing to that approach
[18:08] <gregaf> heh, yeah
[18:09] <joao> btw, gregaf, been updating the synchronization task in the tracker
[18:09] <joao> and creating more tasks
[18:09] <gregaf> coolio
[18:17] * Tv_ (~tv@38.122.20.226) has joined #ceph
[18:24] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[18:24] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[18:26] * Meths (rift@2.25.189.76) Quit (Read error: Connection reset by peer)
[18:26] * Meths (rift@2.25.189.76) has joined #ceph
[18:40] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[18:40] * Meths (rift@2.25.189.76) Quit (Read error: Connection reset by peer)
[18:45] * Meths (rift@2.25.189.76) has joined #ceph
[18:47] <joao> just a bit of off-topic, mind boggling trivia: http://en.wikipedia.org/wiki/HCESAR
[18:51] <fghaas> yehudasa: regarding your comment on the ML post, arguably I guess you could also create an SSL cert from your own CA, which your clients trust, but that's just even more horrible hackery ... and yes the point of the exercise was to work with clients that have s3.amazonaws.com hard-coded in
[18:52] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[18:53] <yehudasa> fghaas: yeah, I wouldn't go the self-cert route. I do think that most commonly used clients nowadays can configure the URL
[18:54] <fghaas> well the net::amazon::s3::tools apparently can't, and s3tools also requires rgw dns name
[18:57] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:57] <yehudasa> fghaas: iirc I had a patch for the net::amazon::s3::tools, but that was a really long time ago; don't remember whatever happened to it
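
The hack under discussion, sketched with placeholder values: steer the client's DNS for the hard-coded endpoint at the gateway, and tell radosgw to treat that domain as its own so bucket-in-hostname requests resolve ("rgw dns name" is the option fghaas mentions above). As noted, SSL clients will still balk without certificate games:

    # On the client, in /etc/hosts (192.0.2.10 is a placeholder IP; each
    # virtual-host-style bucket needs its own alias):
    192.0.2.10  s3.amazonaws.com bucket1.s3.amazonaws.com

    # On the gateway, in ceph.conf (section name per convention):
    [client.radosgw.gateway]
        rgw dns name = s3.amazonaws.com
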
[19:00] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[19:00] * Meths (rift@2.25.189.76) Quit (Read error: Connection reset by peer)
[19:01] * Meths (rift@2.25.189.76) has joined #ceph
[19:03] * lofejndif (~lsqavnbok@19NAAAUGU.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:03] * Meths (rift@2.25.189.76) Quit (Read error: Connection reset by peer)
[19:03] * chutzpah (~chutz@100.42.98.5) has joined #ceph
[19:04] * fghaas (~florian@91.119.194.98) Quit (Ping timeout: 480 seconds)
[19:04] * adjohn (~adjohn@50-0-133-101.dsl.static.sonic.net) Quit (Quit: adjohn)
[19:06] * Meths (rift@2.25.189.76) has joined #ceph
[19:12] * lofejndif (~lsqavnbok@19NAAAUGU.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[19:22] * Meths (rift@2.25.189.76) Quit (Read error: Connection reset by peer)
[19:22] * Meths (rift@2.25.189.76) has joined #ceph
[19:38] * lofejndif (~lsqavnbok@659AABJ72.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:38] * Meths (rift@2.25.189.76) Quit (Read error: Connection reset by peer)
[19:38] * Meths (rift@2.25.189.76) has joined #ceph
[19:42] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[19:42] * Meths (rift@2.25.189.76) Quit (Read error: Connection reset by peer)
[19:43] * Meths (rift@2.25.189.76) has joined #ceph
[19:47] * Meths (rift@2.25.189.76) Quit (Read error: Connection reset by peer)
[19:49] * joshd (~joshd@38.122.20.226) has joined #ceph
[19:49] * The_Bishop (~bishop@f052102096.adsl.alicedsl.de) has joined #ceph
[19:52] <sagewk> the dev machines at aon need their resolv.conf updated
[19:52] <sagewk> nameserver 192.168.106.2
[19:52] <sagewk> nameserver 192.168.107.5
[19:53] * Meths (rift@2.25.189.76) has joined #ceph
[19:58] * dmick (~dmick@38.122.20.226) has joined #ceph
[20:05] * nhmlap (~Adium@38.122.20.226) has joined #ceph
[20:08] * lofejndif (~lsqavnbok@659AABJ72.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[20:08] * Meths (rift@2.25.189.76) Quit (Read error: Connection reset by peer)
[20:08] * Meths (rift@2.25.189.76) has joined #ceph
[20:10] * lofejndif (~lsqavnbok@19NAAAUI8.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:10] * Meths (rift@2.25.189.76) Quit (Read error: Connection reset by peer)
[20:13] * Meths (rift@2.25.189.76) has joined #ceph
[20:13] * Meths (rift@2.25.189.76) Quit ()
[20:15] * yehudasa_ (~yehudasa@38.122.20.226) has joined #ceph
[20:22] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[20:25] * lofejndif (~lsqavnbok@19NAAAUI8.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[20:37] * LarsFronius_ (~LarsFroni@p578b21b6.dip0.t-ipconnect.de) has joined #ceph
[20:37] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Read error: Connection reset by peer)
[20:38] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[20:40] <mgalkiewicz> hi guys, I have a problem with adding a new osd: https://gist.github.com/3078131
[20:45] * LarsFronius_ (~LarsFroni@p578b21b6.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[20:48] * fghaas (~florian@91.119.194.98) has joined #ceph
[20:56] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Quit: Terminated with extreme prejudice - dircproxy 1.2.0)
[20:56] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[20:58] * Meths (rift@2.25.193.55) has joined #ceph
[21:00] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[21:09] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[21:11] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[21:12] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[21:14] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[21:14] * LarsFronius_ is now known as LarsFronius
[21:18] * loicd (~loic@151.216.22.26) has joined #ceph
[21:19] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[21:24] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[21:35] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[21:51] <yehudasa> sagewk: for some reason skinny is really slow to resolve with these nameservers
[21:52] <sagewk> (10:52:38 AM) sagewk: nameserver 192.168.106.2
[21:52] <sagewk> (10:52:38 AM) sagewk: nameserver 192.168.107.5
[21:52] <sagewk> those ones?
[21:52] <yehudasa> yeah
[21:52] <yehudasa> oh, wait a second
[21:52] <yehudasa> 106.1
[21:53] <yehudasa> let's see if that's the problem
[21:53] <yehudasa> yeah
[21:53] <yehudasa> thanks
[21:54] * fghaas (~florian@91.119.194.98) Quit (Quit: Leaving.)
[21:59] <gregaf> mgalkiewicz: that's the same output as http://www.spinics.net/lists/ceph-devel/msg07316.html
[21:59] <gregaf> can you go through that and see if you get different results anywhere?
[21:59] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[22:00] <mgalkiewicz> gregaf: checking
[22:01] <mgalkiewicz> gregaf: there are different commands executed
[22:07] * loicd (~loic@151.216.22.26) Quit (Quit: Leaving.)
[22:08] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[22:09] <gregaf> oh, right, sorry
[22:09] <gregaf> mgalkiewicz: I assume this is with 0.48, right?
[22:14] <mgalkiewicz> well, the new osd is 0.48 and the other one is 0.44
[22:33] <sjust> mgalkiewicz: my best guess is that the mkfs process got interrupted
[22:34] <sjust> can you blow away the osd data directory on that node and try again?
[22:34] <sjust> on the osd that failed to mkfs, that is
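
A hedged sketch of that recovery, with placeholder id and path; flag spellings follow ceph-osd of that era:

    # On the node whose mkfs failed (osd.2 and its path are placeholders):
    /etc/init.d/ceph stop osd.2              # make sure it isn't running
    rm -rf /var/lib/ceph/osd/ceph-2/*        # blow away the data directory
    ceph-osd -i 2 --mkfs --mkkey             # redo the mkfs from scratch
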
[22:35] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[23:18] * yehudasa_ (~yehudasa@38.122.20.226) Quit (Ping timeout: 480 seconds)
[23:19] * The_Bishop (~bishop@f052102096.adsl.alicedsl.de) Quit (Quit: Who the hell is this peer? If I catch him I'll reset his connection!)
[23:47] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[23:51] * goedi (goedi@195.26.5.166) Quit (Ping timeout: 480 seconds)
[23:52] * goedi (goedi@195.26.5.166) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.