#ceph IRC Log

IRC Log for 2014-07-28

Timestamps are in GMT/BST.

[0:03] * sz0_ (~sz0@94.55.197.185) has joined #ceph
[0:05] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[0:05] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[0:07] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[0:09] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Read error: Operation timed out)
[0:10] * vbellur (~vijay@c-76-19-134-77.hsd1.ma.comcast.net) Quit (Quit: Leaving.)
[0:18] * ghartz_ (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) Quit (Remote host closed the connection)
[0:24] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[0:25] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[0:26] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[0:29] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Operation timed out)
[0:35] * sz0_ (~sz0@94.55.197.185) Quit (Quit: My iMac has gone to sleep. ZZZzzz…)
[0:37] * sz0_ (~sz0@94.55.197.185) has joined #ceph
[0:41] <erice> jobewan: If you are still on, you may want to try ceph-deploy disk zap ceph-node1:sdb to clear out any current partitioning
[0:57] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[0:59] * dlan (~dennis@116.228.88.131) Quit (Ping timeout: 480 seconds)
[1:05] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:07] * sz0_ (~sz0@94.55.197.185) Quit (Quit: My iMac has gone to sleep. ZZZzzz…)
[1:10] <jobewan> yea, I've tried zapping a few times
[1:10] <jobewan> This is occurring on 3 virtualbox nodes...
[1:11] <jobewan> using the 1st node as the deployer
[1:12] <jobewan> although, zapping the disks yields odd results in a few cases too. Let me show, 1 sec
[1:13] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[1:15] <jobewan> 5 disks being zapped, output: http://pastebin.com/SvVbmJhQ
[1:19] * LeaChim (~LeaChim@host86-162-79-167.range86-162.btcentralplus.com) Quit (Read error: Operation timed out)
[1:22] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[1:22] * oms101 (~oms101@p20030057EA6EF300EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:31] * oms101 (~oms101@p20030057EA268600EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:36] * xarses (~andreww@173-164-194-206-SFBA.hfc.comcastbusiness.net) has joined #ceph
[1:36] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[1:47] <erice> jobewan: Sorry, I don't have an answer on why partx is failing. All of my testing has been done on Ubuntu
[1:51] <jobewan> it's not even partx that's failing it seems. This is what doesn't make sense to me: Information: Moved requested sector from 34 to 2048 in
[1:51] <jobewan> order to align on 2048-sector boundaries.
[1:51] <jobewan> Could not create partition 2 from 34 to 10485760
[1:54] <erice> what is your cache size in your ceph.conf file, and what size are your virtual disks sdb, sdc, ...
[1:55] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[1:55] <jobewan> the virt disks are only 1 Gb
[1:55] <jobewan> oh...
[1:55] <erice> That would be what size is your osd_journal_size
[1:56] <jobewan> I'm setting to 10Gb
[1:56] <jobewan> according to that setting eh
[1:56] <jobewan> I didn't even think of that until you pointed it out...
[1:56] <jobewan> need to bump up my disk sizes I suppose
[1:57] <jobewan> or just bump my journal size down to much smaller
[1:58] <erice> I am using 20 GB drives on my VMs, using VMware Fusion on a Mac, with 1024 for my journal size
[1:59] * xarses (~andreww@173-164-194-206-SFBA.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[2:01] <jobewan> this is just for a test vagrant setup I'm building really... wasn't wanting to go crazy w/ the sizing
[2:01] <jobewan> I dropped the journals down to 250, and it created the drives
[2:02] <jobewan> hell, 1 gb shouldn't need much journal at all... could prob drop to 64 even :)
[2:02] <erice> I have never tried to see how small I can make it.
[2:04] <erice> I also never tried anything that small for an object store drive. I am not sure how pg mapping is going to work on a 1GB drive
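The sizing problem jobewan hit above comes down to a single ceph.conf setting: ceph-disk carves a journal partition of `osd journal size` (specified in MB), so a ~10 GB journal can never fit on a 1 GB virtual disk, which is why sgdisk refused to create partition 2. A hedged sketch of the fragment involved — 250 is the value jobewan reports working; the rest is illustrative, not taken from the log:

```ini
; ceph.conf -- illustrative fragment for a tiny VirtualBox/vagrant test cluster
[osd]
; value is in MB; the journal partition must fit on the OSD disk,
; so on 1 GB virtual disks it has to stay well under 1024
osd journal size = 250
```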
[2:06] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[2:15] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[2:23] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[2:25] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) has joined #ceph
[2:25] * KaZeR (~kazer@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[2:34] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Read error: Operation timed out)
[2:37] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[2:55] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[2:57] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (Read error: Operation timed out)
[2:57] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[3:02] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:11] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[3:12] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:12] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[3:12] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has left #ceph
[3:39] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[3:53] * lucas1 (~Thunderbi@222.240.148.130) Quit (Ping timeout: 480 seconds)
[3:56] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[3:57] * haomaiwa_ (~haomaiwan@223.223.183.116) has joined #ceph
[4:02] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:02] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Operation timed out)
[4:04] * mtl2 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[4:04] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[4:15] * tserong (~tserong@203-57-208-132.dyn.iinet.net.au) Quit (Quit: Leaving)
[4:15] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[4:18] * zhaochao (~zhaochao@124.200.223.7) has joined #ceph
[4:32] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:33] * Francis (~Francis@221.222.215.109) Quit (Ping timeout: 480 seconds)
[4:44] * joef (~Adium@2601:9:2a00:690:20ed:4390:fa31:9adc) has joined #ceph
[4:44] * joef (~Adium@2601:9:2a00:690:20ed:4390:fa31:9adc) has left #ceph
[4:46] * haomaiwa_ (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[4:47] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[4:48] * bkopilov (~bkopilov@213.57.17.89) Quit (Ping timeout: 480 seconds)
[4:59] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[5:02] * haomaiwa_ (~haomaiwan@223.223.183.116) has joined #ceph
[5:08] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[5:11] * jobewan (~jobewan@c-75-65-191-17.hsd1.la.comcast.net) Quit (Remote host closed the connection)
[5:12] * burley (~khemicals@cpe-98-28-233-158.woh.res.rr.com) has joined #ceph
[5:23] * tserong (~tserong@203-57-209-186.dyn.iinet.net.au) has joined #ceph
[5:26] * sjm (~sjm@108.53.250.33) has joined #ceph
[5:27] * Jakey (uid1475@id-1475.uxbridge.irccloud.com) has joined #ceph
[5:28] * haomaiwa_ (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[5:29] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[5:32] * Vacum (~vovo@88.130.219.140) has joined #ceph
[5:36] * zhangdongmao (~zhangdong@203.192.156.9) has joined #ceph
[5:38] * haomaiwa_ (~haomaiwan@223.223.183.116) has joined #ceph
[5:39] * Vacum_ (~vovo@88.130.196.92) Quit (Ping timeout: 480 seconds)
[5:40] * haomaiwa_ (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[5:40] * haomaiwa_ (~haomaiwan@203.69.59.199) has joined #ceph
[5:40] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[5:44] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[5:50] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[5:57] * haomaiwang (~haomaiwan@223.223.183.116) has joined #ceph
[5:59] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[6:02] * haomaiwa_ (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[6:06] * haomaiwang (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[6:06] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[6:08] * sjm (~sjm@108.53.250.33) has left #ceph
[6:10] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[6:15] * theanalyst (~abhi@49.32.3.75) has joined #ceph
[6:21] * haomaiwa_ (~haomaiwan@223.223.183.116) has joined #ceph
[6:23] * haomaiwa_ (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[6:23] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[6:23] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[6:27] * haomaiwa_ (~haomaiwan@223.223.183.116) has joined #ceph
[6:31] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Operation timed out)
[6:34] * Cube (~Cube@66.87.130.206) has joined #ceph
[6:47] * Cube (~Cube@66.87.130.206) Quit (Quit: Leaving.)
[6:49] * Cube1 (~Cube@66-87-130-206.pools.spcsdns.net) has joined #ceph
[6:50] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[6:57] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:16] * Cube (~Cube@66-87-130-206.pools.spcsdns.net) has joined #ceph
[7:16] * Cube1 (~Cube@66-87-130-206.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[7:21] * Cube1 (~Cube@66.87.130.206) has joined #ceph
[7:21] * Cube (~Cube@66-87-130-206.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[7:21] * Cube1 (~Cube@66.87.130.206) Quit ()
[7:22] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[7:24] * rdas (~rdas@110.227.47.172) has joined #ceph
[7:28] * michalefty (~micha@p20030071CE4EF716A52EE91D616C06E3.dip0.t-ipconnect.de) has joined #ceph
[7:30] * thb (~me@2a02:2028:282:3040:285c:3dd1:710a:6f4c) has joined #ceph
[7:33] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[7:34] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[7:37] * haomaiwa_ (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[7:37] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[7:39] * Cube (~Cube@66.87.130.206) has joined #ceph
[7:44] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[7:53] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[7:53] * haomaiwa_ (~haomaiwan@223.223.183.116) has joined #ceph
[7:59] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[8:01] * haomaiwang (~haomaiwan@223.223.183.116) has joined #ceph
[8:01] * haomaiwa_ (~haomaiwan@223.223.183.116) Quit (Read error: Connection reset by peer)
[8:02] * Cube (~Cube@66.87.130.206) Quit (Quit: Leaving.)
[8:04] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[8:04] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:09] * pvh_sa (~pvh@197.87.135.205) Quit (Ping timeout: 480 seconds)
[8:11] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[8:13] * haomaiwang (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[8:13] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[8:15] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[8:17] * Nacer (~Nacer@2001:41d0:fe82:7200:cd6d:1012:72a9:8653) has joined #ceph
[8:19] * Cube (~Cube@66-87-130-206.pools.spcsdns.net) has joined #ceph
[8:23] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[8:23] * haomaiwa_ (~haomaiwan@223.223.183.116) has joined #ceph
[8:27] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[8:31] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[8:40] * Cube (~Cube@66-87-130-206.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[8:45] * rdas (~rdas@110.227.47.172) Quit (Quit: Leaving)
[8:50] * cok (~chk@2a02:2350:18:1012:4cb0:be5b:cfa4:58b8) has joined #ceph
[8:52] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[8:56] * hyperbaba (~hyperbaba@private.neobee.net) has joined #ceph
[8:57] * leseb (~leseb@81-64-215-19.rev.numericable.fr) Quit (Quit: ZNC - http://znc.in)
[8:57] * Cube (~Cube@66.87.66.195) has joined #ceph
[8:59] <ghartz> sage, I'm glad to read that. CephFS is really awesome and there are a lot of places to use it
[9:01] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:01] * rdas (~rdas@121.244.87.115) has joined #ceph
[9:02] * madkiss (~madkiss@2001:6f8:12c3:f00f:e0e4:2283:fffd:8187) has joined #ceph
[9:03] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[9:05] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[9:06] * Nacer (~Nacer@2001:41d0:fe82:7200:cd6d:1012:72a9:8653) Quit (Remote host closed the connection)
[9:09] <ghartz> sage, John Spray seems to be very active on cephfs/mds part
[9:11] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:14] * steki (~steki@91.195.39.5) has joined #ceph
[9:18] * pvh_sa (~pvh@41.164.8.114) has joined #ceph
[9:25] * Cube (~Cube@66.87.66.195) Quit (Quit: Leaving.)
[9:25] * Cube (~Cube@66-87-66-195.pools.spcsdns.net) has joined #ceph
[9:25] * Cube (~Cube@66-87-66-195.pools.spcsdns.net) Quit ()
[9:26] * Cube (~Cube@66.87.66.195) has joined #ceph
[9:26] * Cube (~Cube@66.87.66.195) Quit ()
[9:26] * Cube (~Cube@66.87.66.195) has joined #ceph
[9:27] * Cube (~Cube@66.87.66.195) Quit ()
[9:27] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[9:28] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:32] * fsimonce (~simon@host133-25-dynamic.250-95-r.retail.telecomitalia.it) has joined #ceph
[9:34] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[9:36] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[9:40] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[9:47] * jordanP (~jordan@185.23.92.11) has joined #ceph
[9:47] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[9:51] * leseb (~leseb@81-64-215-19.rev.numericable.fr) has joined #ceph
[9:54] * garphy`aw is now known as garphy
[10:01] * rendar (~I@87.19.176.30) has joined #ceph
[10:01] * garphy is now known as garphy`aw
[10:02] * garphy`aw is now known as garphy
[10:03] * stephan (~stephan@62.217.45.26) has joined #ceph
[10:03] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[10:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[10:15] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[10:16] * boichev (~boichev@213.169.56.130) has joined #ceph
[10:17] * haomaiwa_ (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[10:17] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[10:24] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[10:30] * haomaiwa_ (~haomaiwan@223.223.183.116) has joined #ceph
[10:31] * haomaiwa_ (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[10:31] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[10:31] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[10:34] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[10:36] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:37] * dgautam (~oftc-webi@116.197.184.11) has joined #ceph
[10:38] * dgautam (~oftc-webi@116.197.184.11) Quit ()
[10:43] * lucas1 (~Thunderbi@222.240.148.154) Quit (Quit: lucas1)
[10:44] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[10:46] * haomaiwa_ (~haomaiwan@203.69.59.199) has joined #ceph
[10:46] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[10:47] * rdas (~rdas@121.244.87.115) has joined #ceph
[10:47] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) has joined #ceph
[10:47] * dgautam (~oftc-webi@116.197.184.11) has joined #ceph
[10:48] <dgautam> Hi All
[10:49] <dgautam> I am trying to set up a ceph cluster using ceph commands (not ceph-deploy). I am facing an error while activating
[10:49] <dgautam> root@ubuntu-1204-n20:~# ceph-osd -f --debug_ms 10 --id=0
[10:49] <dgautam> starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
[10:49] <dgautam> SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[10:49] <dgautam> SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[10:50] * haomaiwang (~haomaiwan@223.223.183.116) has joined #ceph
[10:50] * Sysadmin88 (~IceChat77@94.4.20.0) Quit (Quit: The early bird may get the worm, but the second mouse gets the cheese)
[10:51] <dgautam> I am not able to figure out what is wrong. Could you please help me out
[10:54] <tnt_> looks to me like a hw failure.
[10:56] <dgautam> I am now trying on virtual disks. If I try a different set of commands, it comes up correctly.
[10:56] <dgautam> I am trying on VM/V-Disks
[10:56] * drankis (~drankis__@89.111.13.198) has joined #ceph
[10:56] * haomaiwa_ (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[10:58] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[10:59] <tnt_> huh ?
[11:00] * haomaiwang (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[11:00] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[11:01] <dgautam> I am trying ceph.. so used virtual disks..
[11:08] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (Quit: ERC Version 5.3 (IRC client for Emacs))
[11:11] * mabj (~SilverWol@130.226.133.114) has joined #ceph
[11:16] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:17] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[11:17] * mabj (~SilverWol@130.226.133.114) Quit (Read error: Operation timed out)
[11:17] * haomaiwa_ (~haomaiwan@223.223.183.116) has joined #ceph
[11:19] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Read error: Operation timed out)
[11:21] * AbyssOne is now known as a1-away
[11:21] * a1-away is now known as AbyssOne
[11:22] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[11:24] <dgautam> I just tried on physical system.
[11:24] <dgautam> root@cmbu-ixs1-5:~# ceph-disk -v activate /dev/sdb
[11:24] <dgautam> INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb
[11:24] <dgautam> ceph-disk: Cannot discover filesystem type: device /dev/sdb: Line is truncated:
[11:24] <dgautam> any pointer on why blkid doesn't return a value?
[11:30] * zhangdongmao (~zhangdong@203.192.156.9) Quit (Quit: Konversation terminated!)
[11:30] * zhangdongmao (~zhangdong@203.192.156.9) has joined #ceph
[11:31] * mabj (~SilverWol@130.226.133.111) has joined #ceph
[11:33] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[11:37] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:41] * theanalyst (~abhi@0001c1e3.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:47] * haomaiwa_ (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[11:47] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[11:51] * sz0_ (~sz0@94.55.197.185) has joined #ceph
[12:02] * capri (~capri@212.218.127.222) has joined #ceph
[12:07] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Operation timed out)
[12:08] * sz0_ (~sz0@94.55.197.185) Quit (Quit: My iMac has gone to sleep. ZZZzzz…)
[12:12] * theanalyst (~abhi@117.96.13.143) has joined #ceph
[12:21] * haomaiwang (~haomaiwan@223.223.183.116) has joined #ceph
[12:28] * fdmanana (~fdmanana@bl9-170-214.dsl.telepac.pt) has joined #ceph
[12:29] * allsystemsarego (~allsystem@79.115.170.35) has joined #ceph
[12:30] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) has joined #ceph
[12:32] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[12:34] * zack_dol_ (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[12:38] * zack_dolby (~textual@e0109-114-22-0-42.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[12:38] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[12:43] * saurabh (~saurabh@121.244.87.117) Quit (Read error: Operation timed out)
[12:43] * cok (~chk@2a02:2350:18:1012:4cb0:be5b:cfa4:58b8) Quit (Quit: Leaving.)
[12:46] * haomaiwang (~haomaiwan@223.223.183.116) Quit (Remote host closed the connection)
[12:47] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[12:49] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[12:51] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:52] * dgautam (~oftc-webi@116.197.184.11) Quit (Remote host closed the connection)
[13:02] * thb (~me@2a02:2028:282:3040:285c:3dd1:710a:6f4c) has joined #ceph
[13:02] * zhaochao (~zhaochao@124.200.223.7) has left #ceph
[13:03] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Read error: Connection reset by peer)
[13:05] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[13:07] * haomaiwa_ (~haomaiwan@223.223.183.116) has joined #ceph
[13:08] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[13:08] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Operation timed out)
[13:11] <djh-work> Clients have to know the number of pgs in order to calculate the placement. But am I able to get this information (num_pgs), using the public librados API?
[13:14] * boichev (~boichev@213.169.56.130) Quit (Quit: Nettalk6 - www.ntalk.de)
[13:27] * sleinen (~Adium@2001:620:0:68::103) has joined #ceph
[13:31] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[13:31] * dgautam (~oftc-webi@116.197.184.11) has joined #ceph
[13:38] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[13:45] * hostranger (~oftc-webi@pcroland.nine.ch) has joined #ceph
[13:46] * hostranger (~oftc-webi@pcroland.nine.ch) Quit ()
[13:46] * hostranger (~rulrich@2a02:41a:3999::85) has joined #ceph
[13:49] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[13:54] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[13:54] * lightspeed (~lightspee@81.187.0.153) has joined #ceph
[13:54] <Xiol> Hi guys. I've got a problem with our cluster, got a few PGs that are stuck unclean and I can't figure out why - all the stuff I've found on the 'net doesn't appear to apply here unless I've missed something. We had a disk failure the other day, which we were alerted to by inconsistent PGs, so I marked that OSD as down/out and let stuff migrate off, but we've got these stuck PGs that just aren't doing anything. ceph pg [pg] query doesn't show any unfound objects
[13:56] <tnt_> what does a pg_dump show for those pgs ?
[13:58] <Xiol> tnt_: I'm not familiar with that command?
[14:01] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[14:02] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[14:02] * ChanServ sets mode +o elder
[14:02] <tnt_> ceph pg dump. But query should work too.
[14:02] <tnt_> pastebin the query results
[14:02] * rdas (~rdas@121.244.87.115) has joined #ceph
[14:04] <Xiol> tnt_: query output for one of the stuck PGs http://p.rig.gr/view/raw/30197f17
[14:04] * cok (~chk@2a02:2350:1:1203:5de1:5793:fbc2:3d99) has joined #ceph
[14:05] * dmsimard_away is now known as dmsimard
[14:05] <Xiol> ceph health detail -> http://p.rig.gr/view/raw/fabf01ba
[14:05] * diegows (~diegows@190.190.5.238) has joined #ceph
[14:05] <Xiol> OSD that was marked down/out was 177
[14:06] <ghartz> Xiol, did you remove the osd from crush map ?
[14:08] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Remote host closed the connection)
[14:08] <Xiol> ghartz: not yet, as none of the stuck PGs were relating to the removed OSD it didn't seem necessary. can do it now though
[14:09] <tnt_> ceph osd tree ?
[14:10] <ghartz> Xiol, somebody correct me if I'm wrong, but best practice is to down/out/rm the OSD for a failed hdd
[14:10] <tnt_> ghartz: yes. However it should recover in a healthy state without doing anything.
[14:11] <ghartz> ho
[14:11] <ghartz> ok
[14:11] <Xiol> Removing the offending OSD has kicked off some recovery/backfilling (which despite setting 'osd max backfills' to 1 has yet again increased load significantly on the cluster -_-)
[14:11] <Xiol> Removing it from the CRUSH map* for clarity
[14:12] <Xiol> I was under the assumption that setting it down/out and stopping the OSD process would kick off recovery anyway
[14:13] <tnt_> it should
[14:13] <Xiol> yeah, and it did, at least until it got to the stuck PGs. guess I'll have to wait and see if this resolves the problem now. could be a while! thanks for the help tnt_, ghartz
[14:15] * theanalyst (~abhi@0001c1e3.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:15] <ghartz> Xiol, tell us if it helps
[14:16] * Kioob`Taff2 (~plug-oliv@89-156-97-235.rev.numericable.fr) Quit (Quit: Leaving.)
[14:16] * Kioob`Taff (~plug-oliv@89-156-97-235.rev.numericable.fr) has joined #ceph
[14:17] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[14:19] * DV (~veillard@veillard.com) Quit (Ping timeout: 480 seconds)
[14:19] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) Quit (Quit: Leaving.)
[14:19] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit ()
[14:22] * sz0_ (~sz0@94.55.197.185) has joined #ceph
[14:23] * dgautam (~oftc-webi@116.197.184.11) Quit (Quit: Page closed)
[14:23] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:25] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[14:30] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[14:31] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has joined #ceph
[14:31] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit ()
[14:31] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[14:34] * hyperbaba (~hyperbaba@private.neobee.net) Quit ()
[14:34] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[14:35] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[14:39] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:40] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[14:41] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[14:41] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[14:44] <ganders> hi everyone
[14:44] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[14:44] <ganders> i'm having some issues while trying to map a rbd on a ubuntu server with kernel 3.2.0-23
[14:45] * sjm (~sjm@108.53.250.33) has joined #ceph
[14:45] <ganders> issuing the following cmd "sudo rbd map cephtest --pool rbd --name client.admin -m cephmon01,cephmon02,cephmon03 -k /etc/ceph/ceph.client.admin.keyring"
[14:45] <ganders> is getting me the following error msg: "rbd: add failed: (5) Input/output error"
[14:47] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[14:49] * ctd_ (~root@00011932.user.oftc.net) Quit (Quit: END OF LINE)
[14:50] <ganders> anyone know what could be the issue here? i tried other commands, for example an ls on the pool, and it works fine; getting some info from the object works fine too
[14:50] * michalefty (~micha@p20030071CE4EF716A52EE91D616C06E3.dip0.t-ipconnect.de) has left #ceph
[14:50] * ctd (~root@00011932.user.oftc.net) has joined #ceph
[14:50] <tnt_> does dmesg say anything ?
[14:51] <ghartz> ganders, your ceph.conf is correct? monitors are up and reachable?
[14:51] * ctd (~root@00011932.user.oftc.net) Quit ()
[14:52] * ctd (~root@00011932.user.oftc.net) has joined #ceph
[14:53] <ganders> yes
[14:53] <ganders> all of them
[14:54] <ganders> and the ceph.conf file is the same on all the nodes
[14:55] <tnt_> check dmesg. My guess is that you're using features not supported in 3.2
[14:55] <ganders> from dmesg i only see:
[14:55] <ganders> [57945.118149] libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6)
[14:55] <ganders> [57945.120258] rbd: loaded rbd (rados block device)
[14:55] <ganders> [57945.123500] libceph: mon1 10.90.10.2:6789 feature set mismatch, my 2 < server's 42040002, missing 42040000
[14:55] <ganders> [57945.124246] libceph: mon1 10.90.10.2:6789 missing required protocol features
[14:55] <ganders> [57955.137721] libceph: mon1 10.90.10.2:6789 feature set mismatch, my 2 < server's 42040002, missing 42040000
[14:55] <ganders> [57955.138235] libceph: mon1 10.90.10.2:6789 missing required protocol features
[14:55] <ghartz> tnt_, nice catch
[14:55] <tnt_> there you go
[14:56] <tnt_> http://ceph.com/docs/master/rados/operations/crush-map/#tunables
[14:57] <ganders> oh ok, so this kern version does not support crush_tunables
[14:58] <ganders> but supports crush_tunables3, since im running firefly and kern version is 3.2 > 3.15
[14:59] <ganders> so how can i change the crush tunables to 3? is there any command, like ceph osd crush tunables = 3.. or something like that?
[15:05] <ganders> i tried changing it with "ceph osd crush tunables legacy" but with no effect :(
[15:05] <ganders> still receiving the err msg from the map
[15:07] <tnt_> in what world is 2 larger than 15 ?!?
[15:07] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[15:08] <janos> 2 gallons is bigger than 15 cups?
[15:08] <janos> ;)
[15:08] <ganders> oh sorry i mean <, my mistake
[15:08] <ganders> i read 3.20 from the cmd line :P
[15:09] <tnt_> yeah ... so 3.2 doesn't support _any_ of the crush tunables.
[15:09] <tnt_> janos: nice people use SI units :p
[15:09] <ganders> either legacy?
[15:10] <janos> dangit! it's what came to mind quickest. started a batch of 3 gallons of pickles last night from the garden. i have imperial measurements on the mind
[15:10] <tnt_> yes, legacy is basically not using any of the crush_tunables extension.
[15:10] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[15:10] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has left #ceph
[15:10] * longguang_ (~chatzilla@123.126.33.253) has joined #ceph
[15:10] * markbby (~Adium@168.94.245.3) has joined #ceph
[15:11] <ganders> ok, so changing it to legacy should resolve the issue; once that's changed, is there a need to restart the mons? or only if trying to remove the alert msgs?
[15:11] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:12] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[15:12] * sz0_ (~sz0@94.55.197.185) Quit (Quit: My iMac has gone to sleep. ZZZzzz…)
[15:12] <tnt_> No need. But it may cause some data movement.
[15:12] <tnt_> to remove the warning you need some option in ceph.conf
[15:12] <ganders> ok, the mon warn legacy crush tunables = false and then restart the mons
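For the record, the ceph.conf option tnt_ alludes to is, to the best of my recollection, `mon warn on legacy crush tunables` (ganders's spelling above drops the "on"). A hedged sketch — verify the exact name against your release's documentation:

```ini
; ceph.conf -- silences the HEALTH_WARN about legacy CRUSH tunables
; (option name is my best recollection for the firefly era; check your release)
[mon]
mon warn on legacy crush tunables = false
```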
[15:14] * longguang__ (~chatzilla@123.126.33.253) has joined #ceph
[15:15] * sz0_ (~sz0@94.55.197.185) has joined #ceph
[15:16] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[15:16] * longguang__ is now known as longguang
[15:17] <ganders> mmmm something wrong... still can't run the rbd map
[15:18] <tnt_> what does dmesg say now ?
[15:19] <ganders> same err mesg: "[319133.377461] libceph: mon1 10.90.10.2:6789 feature set mismatch, my 2 < server's 40000002, missing 40000000
[15:19] <ganders> [319133.378044] libceph: mon1 10.90.10.2:6789 missing required protocol features
[15:19] <ganders> "
[15:19] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[15:19] <tnt_> Ok, so ... need to find which feature is 1 << 30
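[editor's note] The missing-feature mask from dmesg can be decoded with a few lines of Python (a minimal sketch; the hex values are the ones from the log above):

```python
# dmesg said: "feature set mismatch, my 2 < server's 40000002, missing 40000000"
mine = 0x2
server = 0x40000002

missing = server & ~mine                       # bits the client kernel lacks
bits = [i for i in range(64) if missing >> i & 1]

print(bits)       # [30] -> the missing feature is 1 << 30 (OSDHASHPSPOOL)
```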
[15:21] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:21] <tnt_> ganders: It's FEATURE_OSDHASHPSPOOL
[15:21] * longguang_ (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[15:22] <tnt_> you need to remote the HASHPSPOOL flag from your rbd pool.
[15:22] <tnt_> s/remote/remove/
[15:22] <kraken> tnt_ meant to say: you need to remove the HASHPSPOOL flag from your rbd pool.
[15:22] * cok (~chk@2a02:2350:1:1203:5de1:5793:fbc2:3d99) Quit (Quit: Leaving.)
[15:24] * ade (~abradshaw@193.202.255.218) has joined #ceph
[15:25] * sz0_ (~sz0@94.55.197.185) Quit ()
[15:27] <ganders> oh ok, thanks i will try to remove that and see if it works
[15:29] <ganders> ceph osd pool set rbd hashpspool false, right?
[15:32] <tnt_> I'm not sure if it's false or 0
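[editor's note] The flag removal being discussed would look roughly like this; as tnt_ notes, whether the value is spelled `false` or `0` varies by release, so this is an unverified sketch:

```console
$ ceph osd pool set rbd hashpspool false
# on releases that expect a numeric value:
$ ceph osd pool set rbd hashpspool 0
```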
[15:33] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[15:41] * fabioFVZ (~fabiofvz@213.187.20.119) has joined #ceph
[15:41] * JayJ (~jayj@157.130.21.226) has joined #ceph
[15:43] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[15:45] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[15:46] * zack_dol_ (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit (Ping timeout: 480 seconds)
[15:52] * ircolle (~Adium@2601:1:a580:145a:9d39:e507:a6c1:e032) has joined #ceph
[15:53] * zack_dol_ (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[15:55] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[15:55] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit (Ping timeout: 480 seconds)
[15:56] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[15:56] * theanalyst (~abhi@117.96.13.143) has joined #ceph
[15:56] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[15:57] <ganders> same error :(
[16:01] <djh-work> Is it possible to get the number of pgs in a pool using the public librados API?
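[editor's note] djh-work's question goes unanswered in the log. One plausible route (an assumption, not confirmed here) is librados' generic `mon_command` interface, issuing `osd pool get pg_num` as JSON; a sketch with the Python rados bindings:

```python
import json

# "rbd" is just an example pool name
cmd = json.dumps({"prefix": "osd pool get", "pool": "rbd",
                  "var": "pg_num", "format": "json"})

# against a live cluster (requires the python-rados bindings), roughly:
#   import rados
#   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
#   cluster.connect()
#   ret, out, err = cluster.mon_command(cmd, b'')
#   pg_num = json.loads(out)["pg_num"]
print(cmd)
```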
[16:02] * JayJ (~jayj@157.130.21.226) Quit (Remote host closed the connection)
[16:02] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[16:03] * JayJ (~jayj@157.130.21.226) has joined #ceph
[16:03] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit ()
[16:04] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[16:10] * zack_dol_ (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit (Ping timeout: 480 seconds)
[16:15] <ganders> ok so i upgraded the kernel to 3.13.0-32 and now it's working fine...
[16:18] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[16:21] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[16:26] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:28] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[16:30] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[16:34] * theanalyst (~abhi@117.96.13.143) Quit (Remote host closed the connection)
[16:37] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[16:37] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[16:37] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[16:38] * pvh_sa (~pvh@41.164.8.114) Quit (Ping timeout: 480 seconds)
[16:45] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[16:46] <kitz> I'm getting a strange error where when I put my cluster under load for a while it falters and all my OSDs start marking each other down. If I stop writing to the cluster then it recovers gracefully. If I keep writing it sometimes recovers (only to falter again) but eventually throws an I/O error up to my RBD client.
[16:47] <kitz> I've got a lot of "heartbeat_check: no reply from osd.XX" in my logs
[16:47] * markbby (~Adium@168.94.245.2) has joined #ceph
[16:48] <kitz> Running 0.80.4 on Ubuntu 14.04
[16:49] * markbby1 (~Adium@168.94.245.4) has joined #ceph
[16:49] * markbby1 (~Adium@168.94.245.4) Quit ()
[16:49] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[16:51] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[16:55] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:57] <Vacum> kitz: monitor your cpu0 for softirq load. do you have 10GbE interfaces? it could be due to all eth interrupts ending up on core 0 of cpu 0
[16:57] <Vacum> kitz: which would result in lost packets.
[16:59] * ahmett (~horasan@88.244.86.163) has joined #ceph
[16:59] * Kioob`Taff (~plug-oliv@89-156-97-235.rev.numericable.fr) Quit (Quit: Leaving.)
[16:59] * Kioob`Taff (~plug-oliv@89-156-97-235.rev.numericable.fr) has joined #ceph
[16:59] * ahmett (~horasan@88.244.86.163) Quit (autokilled: Do not spam other people. Mail support@oftc.net if you feel this is in error. (2014-07-28 14:59:38))
[16:59] * gregsfortytwo1 (~Adium@126-206-207-216.dsl.mi.winntel.net) has joined #ceph
[17:00] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[17:01] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[17:02] * fdmanana (~fdmanana@bl9-170-214.dsl.telepac.pt) Quit (Quit: Leaving)
[17:03] * Kioob`Taff (~plug-oliv@89-156-97-235.rev.numericable.fr) Quit (Remote host closed the connection)
[17:06] <kitz> Vacum: I am on 10GbE. Top shows similar to this on all 3 of my nodes: %Cpu(s): 11.2 us, 8.5 sy, 0.0 ni, 78.6 id, 0.7 wa, 1.0 hi, 0.0 si, 0.0 st
[17:07] <Vacum> kitz: hit the 1 key to see it per CPU
[17:07] <Vacum> per core that is
[17:07] * markbby (~Adium@168.94.245.3) has joined #ceph
[17:07] <Vacum> kitz: and monitor the "si" value on all osd nodes when the cluster is under load and starts marking out
[17:08] <Vacum> monitor the si value per core
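[editor's note] Checking Vacum's theory amounts to watching where the NIC interrupts land and the per-core softirq load; a couple of standard commands (mpstat assumes the sysstat package; the eth grep pattern depends on your interface names):

```console
$ mpstat -P ALL 1              # watch the per-core %soft column
$ grep -i eth /proc/interrupts # which cores service the NIC interrupts
```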
[17:08] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[17:08] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[17:09] * joef (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) has joined #ceph
[17:10] * joef (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) has left #ceph
[17:11] * steki (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:11] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[17:16] <kitz> Vacum: si is 0.0 across the board. Each of my 3 nodes are dual 6-core E5-2620v2 and all si values are 0.0. I never even see a 0.1.
[17:16] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Ping timeout: 480 seconds)
[17:17] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[17:17] <Vacum> kitz: then you don't have much traffic on those interfaces :) and you can of course also discard my theory
[17:17] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[17:18] * xdeller (~xdeller@109.188.124.39) has joined #ceph
[17:20] <kitz> Vacum: hah! thanks. My client is 10GbE so each of my three nodes sees ~3GbE on the front and (with size=3) ~6GbE on the back when things are working at top speed. This only lasts a little while and then it kinda stumbles and slows down from 900MBps to 150MBps and then it drops to < 1Mbps and then I see the markdowns.
[17:20] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[17:22] <kitz> That said, I created a new cluster and wrote about 8TiB before it started doing this. Last time I deployed on this hardware I got the same problem, but also had no issues through the first ~5TiB of writes.
[17:24] <kitz> I don't see anything obvious in the logs to explain the slowdown prior to the markdowns; it just kinda runs out of gas
[17:24] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[17:25] * gregsfortytwo (~Adium@2607:f298:a:607:ed2a:2cba:bb2:b820) Quit (Quit: Leaving.)
[17:25] * gregsfortytwo (~Adium@38.122.20.226) has joined #ceph
[17:25] <jobewan> is there a way to have ceph-deploy write out all of the config directives for the osds/mons/mds?
[17:29] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Read error: Operation timed out)
[17:29] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[17:32] * ircolle is now known as ircolle-afk
[17:43] * fdmanana (~fdmanana@bl9-170-214.dsl.telepac.pt) has joined #ceph
[17:43] * xdeller (~xdeller@109.188.124.39) Quit (Ping timeout: 480 seconds)
[17:44] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:44] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[17:48] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:48] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[17:51] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[17:53] * ade (~abradshaw@193.202.255.218) Quit (Ping timeout: 480 seconds)
[17:53] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[17:58] * fabioFVZ (~fabiofvz@213.187.20.119) Quit ()
[17:58] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:04] * sleinen (~Adium@2001:620:0:68::103) Quit (Ping timeout: 480 seconds)
[18:06] * dmsimard is now known as dmsimard_away
[18:08] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:09] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[18:17] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:22] <devicenull> is there a way I can suppress the log spam that mon servers do? they seem to output a whole bunch of 'paxos is_readable' lines every second or two
[18:22] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[18:27] * ircolle-afk is now known as ircolle
[18:31] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[18:32] * sleinen1 (~Adium@2001:620:0:68::102) has joined #ceph
[18:37] * ade (~abradshaw@80-72-52-29.cmts.powersurf.li) has joined #ceph
[18:39] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:39] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[18:41] * diegows (~diegows@host131.181-1-236.telecom.net.ar) has joined #ceph
[18:43] <gregsfortytwo1> devicenull: you've probably got some "debug_*" lines set in your ceph.conf; you can turn those values closer to zero
[18:43] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[18:43] * ade (~abradshaw@80-72-52-29.cmts.powersurf.li) Quit (Quit: Too sexy for his shirt)
[18:43] <gregsfortytwo1> or you can add them with their values set closer to zero than the defaults; I think you can find a listing of debug options in ceph.com/docs
[18:43] <devicenull> I don't seem to have any, but I'll see if maybe they're set some other way
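[editor's note] The 'paxos is_readable' lines are monitor debug output; if elevated debug levels are the cause, they can be lowered at runtime without a restart (mon.a is a placeholder monitor name and the levels shown are illustrative):

```console
$ ceph tell mon.a injectargs '--debug-mon 1/5 --debug-paxos 0/5'
# or persistently, in ceph.conf under [mon]:
#   debug mon = 1/5
#   debug paxos = 0/5
```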
[18:45] * dmsimard_away is now known as dmsimard
[18:50] * alram (~alram@38.122.20.226) has joined #ceph
[18:51] * bkopilov (~bkopilov@213.57.17.210) has joined #ceph
[18:51] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[18:52] * blinky_ghost_ (~psousa@213.228.167.67) has joined #ceph
[18:53] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[18:54] <blinky_ghost_> Hi all, I've implemented ceph with my openstack cloud, everything is working fine but when I remove a volume, "rbd rm" command takes so long...Any hint how to improve this? I'm using firefly ceph-0.80.1-0.el6.x86_64. Thanks.
[18:54] * dmsimard is now known as dmsimard_away
[18:55] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[18:55] * thb (~me@2a02:2028:282:3040:285c:3dd1:710a:6f4c) has joined #ceph
[18:56] * thb is now known as Guest4090
[18:57] * joef1 (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) has joined #ceph
[18:59] * Guest4090 is now known as thb
[18:59] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[18:59] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[18:59] * JayJ (~jayj@157.130.21.226) has joined #ceph
[19:00] * sarob (~sarob@mobile-166-137-184-037.mycingular.net) has joined #ceph
[19:01] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Quit: Leaving.)
[19:01] * Cube (~Cube@66-87-66-195.pools.spcsdns.net) has joined #ceph
[19:01] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:03] * rweeks (~rweeks@pat.hitachigst.com) has joined #ceph
[19:03] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[19:08] * JC (~JC@AMontpellier-651-1-298-68.w92-143.abo.wanadoo.fr) has joined #ceph
[19:10] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[19:13] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Quit: Leaving.)
[19:20] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[19:23] * diegows (~diegows@host131.181-1-236.telecom.net.ar) Quit (Ping timeout: 480 seconds)
[19:26] * adamcrume (~quassel@2601:9:6680:47:58aa:3802:b133:9b22) has joined #ceph
[19:27] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[19:27] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:30] * mtl2 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[19:30] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Operation timed out)
[19:31] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[19:33] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:39] <bens> Hi!
[19:39] <bens> Calamari uses diamond, which is generating 20g of logs a week
[19:39] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[19:39] * rturk|afk is now known as rturk
[19:40] <bens> What's the best way to tone that down?
[19:40] <jobewan> is there a way to have ceph-deploy write out all of the config directives for the osds/mons/mds to my ceph.conf ?
[19:40] <bens> i am looking at diamond docs, but I'm not sure what I need to adjust.
[19:43] * thomnico (~thomnico@37.162.202.170) has joined #ceph
[19:44] <ganders> anyone have a home-made procedure to install calamari?
[19:45] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[19:46] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit ()
[19:46] * LeaChim (~LeaChim@host86-162-79-167.range86-162.btcentralplus.com) has joined #ceph
[19:49] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[19:51] <bens> i can't parse that.
[19:51] <bens> can you please restate your question
[19:52] <t0rn> sounds like hes asking for a script to install calamari for you
[19:53] * mtl1 (~Adium@66.35.47.125) has joined #ceph
[19:54] <jiffe> so I followed the steps in http://ceph.com/docs/master/start/ to setup the deploy node and create a cluster with 1 mon node and 2 osd nodes and after this ceph health returns HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
[19:54] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) has joined #ceph
[19:56] * dmsimard_away is now known as dmsimard
[19:57] * ircolle is now known as ircolle-lunch
[19:58] <ganders> i tried to follow the one provided by inktank but it needs a user/pass to retrieve the files
[19:58] * baylight (~tbayly@74-220-196-40.unifiedlayer.com) has joined #ceph
[20:00] * sarob (~sarob@mobile-166-137-184-037.mycingular.net) Quit (Remote host closed the connection)
[20:00] * sarob (~sarob@mobile-166-137-184-037.mycingular.net) has joined #ceph
[20:02] * sarob (~sarob@mobile-166-137-184-037.mycingular.net) Quit (Read error: Connection reset by peer)
[20:06] * thomnico (~thomnico@37.162.202.170) Quit (Ping timeout: 480 seconds)
[20:08] * gregsfortytwo1 (~Adium@126-206-207-216.dsl.mi.winntel.net) Quit (Quit: Leaving.)
[20:08] * joef1 (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:09] <dmick> jiffe: the default pool replication size is now 3; your pgs will be stuck unless you have at least 3 osds
[20:10] <dmick> ganders: the packages for calamari are indeed customer-only, but the source workspaces have directions for build and install
[20:10] <dmick> bens: what kind of logs where? regardless it'll be /etc/diamond/diamond.conf
[20:13] * wabat (~wbatterso@65.182.109.4) has joined #ceph
[20:15] <wabat> Hello all...I'm a bit new to Ceph and looking for some guidance in regards to an error I am seeing: health HEALTH_WARN 9 pgs stuck unclean; recovery 63/13635 degraded (0.462%). Any help would be appreciated.
[20:17] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[20:17] * Nacer (~Nacer@2001:41d0:fe82:7200:ddaa:13ec:c8bd:edf8) has joined #ceph
[20:17] * alram (~alram@38.122.20.226) has joined #ceph
[20:17] * joef (~Adium@2601:9:2a00:690:9485:e18c:9da:3857) has joined #ceph
[20:18] * joef (~Adium@2601:9:2a00:690:9485:e18c:9da:3857) Quit ()
[20:19] <jiffe> dmick: part of the setup was to add 'osd pool default size = 2' under the [default] section although there was no [default] in the config file created so I added it
[20:19] <jiffe> perhaps what I did was not correct
[20:20] <dmick> [default] doesn't sound correct, no
[20:20] <dmick> I see that's what it says
[20:22] <jiffe> looks like its under [global] in other configs I'm finding
[20:26] <dmick> I'd try that; I'm not certain just what happens with [default] to be honest
[20:26] <dmick> meanwhile, you can just add another OSD and it should clear up
[20:26] <dmick> (which is just another daemon with a filesystem, so doesn't need to involve another host)
[20:26] * sarob (~sarob@2001:4998:effd:600:1cf1:d12:fb9b:4ae) has joined #ceph
[20:29] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[20:30] * ircolle-lunch is now known as ircolle
[20:30] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[20:31] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[20:36] <jiffe> just reverted the vms I'm using for testing and used the [global] section and that seemed to work
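[editor's note] For the record, the setting jiffe moved into [global] looks like this (the min size line is an optional extra I've added for illustration, not something from the log):

```ini
[global]
# replicate each object to 2 OSDs instead of the default 3
osd pool default size = 2
# optionally allow I/O with a single healthy replica
osd pool default min size = 1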
[20:37] * blinky_ghost_ (~psousa@213.228.167.67) Quit (Quit: Ex-Chat)
[20:38] * meis3_ (~meise@oglarun.3st.be) Quit (Quit: leaving)
[20:38] * diegows (~diegows@190.190.5.238) has joined #ceph
[20:38] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[20:39] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[20:43] * rturk is now known as rturk|afk
[20:45] * rendar (~I@87.19.176.30) Quit (Read error: Operation timed out)
[20:45] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[20:49] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[20:49] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[20:51] <bens> dmick: diamond.log. And the .conf file is just default. Diamond handles its own log rotation (rendering logrotate sort of useless) but it doesn't compress anything
[20:51] <bens> it also is stupid verbose.
[20:52] * rendar (~I@87.19.176.30) has joined #ceph
[20:57] * Cube1 (~Cube@66-87-66-195.pools.spcsdns.net) has joined #ceph
[20:57] * Cube (~Cube@66-87-66-195.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[21:05] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[21:06] * baylight (~tbayly@74-220-196-40.unifiedlayer.com) Quit (Read error: Operation timed out)
[21:06] <jcsp> bens: calamari is supposed to set the diamond verbosity to WARN when it installs its diamond config file.
[21:07] <jcsp> but that only happens if everything runs according to plan, i.e. the config file is installed after the diamond package
[21:07] * Cube1 (~Cube@66-87-66-195.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[21:07] <jcsp> if you've installed diamond manually then you can update the log verbosity manually
[21:07] * Cube (~Cube@66-87-66-195.pools.spcsdns.net) has joined #ceph
[21:07] <jcsp> the setting is just logger_root.level in /etc/diamond/diamond.conf
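[editor's note] Per jcsp's pointer, the relevant fragment of /etc/diamond/diamond.conf would be something like (section name per diamond's standard Python logging config layout; a sketch):

```ini
[logger_root]
# quiet diamond's default chatty logging
level = WARN
```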
[21:08] <bens> thanks - checking now
[21:08] <bens> mine was installed via inktank's repo - was this fixed when it was opensourced?
[21:09] <jcsp> what version of calamari are you talking about?
[21:11] <bens> 1.1
[21:11] <jcsp> ok, I was talking about open source and later (including what goes into ICE 1.2)
[21:12] <bens> some day
[21:14] * Cube1 (~Cube@66.87.66.195) has joined #ceph
[21:14] * Cube (~Cube@66-87-66-195.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[21:14] * mtl1 (~Adium@66.35.47.125) Quit (Quit: Leaving.)
[21:16] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:18] * baylight (~tbayly@69.169.150.21.provo.static.broadweavenetworks.net) has joined #ceph
[21:20] * Nacer (~Nacer@2001:41d0:fe82:7200:ddaa:13ec:c8bd:edf8) Quit (Ping timeout: 480 seconds)
[21:20] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[21:21] * Kupo1 (~tyler.wil@wsip-68-14-231-140.ph.ph.cox.net) has joined #ceph
[21:21] <Kupo1> Hey All, Is it possible to delete extra replicas from when downsizing min_size and pool size?
[21:24] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[21:25] <dmick> Kupo1: what exactly do you mean by "delete extra replicas"? Remove an OSD?
[21:26] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[21:28] <Kupo1> dmick: Moving min_size up to 4 from 2 would create extra copies of those objects correct?
[21:28] <Kupo1> wondering if its possible to do the reverse, and have it clean the extra copies of objects
[21:33] * pvh_sa (~pvh@197.87.135.205) has joined #ceph
[21:34] * rturk|afk is now known as rturk
[21:38] <dmick> well, the extra copies become unmaintained; I'm not 100% certain when they're actually deleted, although I'd guess it's lazily
[21:38] <dmick> are you concerned because of data security, or just wondering how exactly the cluster carries out its promise?
[21:39] <Kupo1> Making sure the space isnt wasted primarily
[21:39] * fghaas (~florian@91-119-223-7.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[21:39] <Kupo1> eg dead objects with no cleanup is bad news :)
[21:39] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[21:40] <bens> I'm totally jammed up on calamari.
[21:40] <bens> not it won't even start.
[21:40] <bens> *now it
[21:40] <bens> well, the api won't start.
[21:40] <bens> rados.ObjectNotFound: error calling connect
[21:40] <bens> unable to load app 0 (mountpoint='') (callable not found or import error)
[21:43] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[21:44] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[21:45] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[21:46] <dmick> Kupo1: yeah, not sure when things are cleaned, but I'm pretty sure they are eventually. Perhaps at scrub time.
[21:46] <dmick> bens: sounds like auth problems
[21:47] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) has joined #ceph
[21:47] <bens> i got 99 problems but my auth ain't one.
[21:47] <bens> err, maybe.
[21:47] <bens> so where is the key that the rest api uses listed?
[21:47] <dmick> depends on what you mean by "listed"
[21:48] <bens> configured?
[21:48] <bens> I have a cold, and i am stupid today, please excuse me.
[21:48] <dmick> clients have a default name; many of them are client.admin
[21:48] <dmick> man ceph-rest-api shows that the default here is client.restapi
[21:48] <dmick> I do not remember if Calamari 1.1 runs ceph-rest-api with the default or not
[21:48] <bens> so the api is a client - it needs to know the key
[21:49] <dmick> it'd be in the service script if it changed
[21:49] * BManojlovic (~steki@95.180.4.243) Quit (Ping timeout: 480 seconds)
[21:49] <bens> CLIENTNAME = 'client.restapi' # normal name
[21:49] <bens> i assume that is it
[21:49] <Kupo1> dmick: how can i force a scrub to test this?
[21:50] <dmick> yeah
[21:50] <bens> dang
[21:50] <dmick> Kupo1: "ceph -h | grep scrub" will help
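[editor's note] The scrub subcommands that grep turns up include, roughly:

```console
$ ceph pg scrub <pgid>          # scrub one placement group
$ ceph pg deep-scrub <pgid>     # deeper, checksumming scrub of one PG
$ ceph osd scrub <osd-id>       # scrub all PGs on one OSD
$ ceph osd deep-scrub <osd-id>
```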
[21:50] <bens> the client.admin key and keyring all line up
[21:51] <dmick> but why would you be looking at client.admin?
[21:51] <bens> like i said, cold medicine
[21:51] <bens> thanks for punching sense into me
[21:51] <dmick> ;)
[21:52] <bens> so i should have a key called client.restapi in 'ceph auth list' right?
[21:53] <dmick> yep
[21:53] <Kupo1> dmick: looks like it cleaned it up thanks
[21:53] <bens> someone nuked it.
[21:53] <bens> grumble.
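[editor's note] Recreating the nuked key might look like this; the capability set here is a guess on my part, so check the ceph-rest-api man page before using it:

```console
$ ceph auth get-or-create client.restapi \
      mon 'allow *' osd 'allow *' mds 'allow' \
      -o /etc/ceph/ceph.client.restapi.keyring
```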
[21:54] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[21:59] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[22:02] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[22:04] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[22:07] * gregsfortytwo1 (~Adium@126-206-207-216.dsl.mi.winntel.net) has joined #ceph
[22:08] * markbby (~Adium@168.94.245.3) has joined #ceph
[22:24] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:29] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Ping timeout: 480 seconds)
[22:30] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[22:32] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit ()
[22:35] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[22:40] * Sysadmin88 (~IceChat77@94.4.20.0) has joined #ceph
[22:41] <bens> thanks dmick - that was what was wrong.
[22:41] <dmick> \o/
[22:42] <bens> how was oscon? sorry I missed you guys
[22:43] <dmick> it was good. Big, well-run, a whole lotta speakers
[22:45] <bens> good swag?!
[22:45] <dmick> I'm kinda over conf swag
[22:45] <dmick> so I'm no judge.
[22:45] <bens> haha
[22:46] <bens> my ceph shirt is the only vendor swag i wear
[22:46] <bens> because octopus
[22:46] <dmick> HP gave us all a little foam cloud. I left it for the maid's child relatives
[22:46] <rturk> :)
[22:46] <sage> bens: i got (yet another) mini flashlight for my daughter to read with in bed, so i'm happy.
[22:46] <dmick> I did get a 10th anniversary Ceph shirt, although I assume I didn't really have to go to the conf to do that :)
[22:47] <bens> "turn the light on for opensource enterprise platform saas infrastructure deliverables!"
[22:47] * dmick imagines a three-flashlight array under Sage's daughter's blanket to protect against SPOFs
[22:48] <Serbitar> 2 active with one warm spare?
[22:48] <dmick> warm spare? what is this, raid?
[22:48] <bens> no, each one runs at 1/3 power to maximize efficiency
[22:49] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[22:49] <sage> more like 9 flashlights on the window sill, 7 of which have dead batteries
[22:49] * houkouonchi (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) has joined #ceph
[22:49] <dmick> heh
[22:49] <sage> (and one with a broken hand charger crank handle)
[22:51] <gregsfortytwo1> just as long as they're flashlights that don't get warm, rather than lamps that you can hide under the covers
[22:51] * houkouonchi-home (~linux@pool-71-189-160-82.lsanca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[22:52] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[22:53] <gregsfortytwo1> I think the scars from when I fell asleep doing that in middle school are gone now
[22:53] <bens> when i was a boy, we read under the covers with a book of matches!
[22:53] <bens> kids these days
[22:54] <dmick> covers? we 'ad to 'uddle by fire
[22:54] * markbby (~Adium@168.94.245.3) has joined #ceph
[22:55] <ircolle> fire!?! We were waiting for fire to be invented! - kids
[22:55] <bens> we didn't have anything to read - the internet hadn't been invented yet
[22:57] <gregsfortytwo1> see, now you're just showing how young you are
[22:57] <janos> haha
[22:57] <gregsfortytwo1> once upon a time, we had "books" instead of "ebooks"
[22:58] <gregsfortytwo1> made out of ground-up bits of trees, that you moved around physically between stores and libraries
[23:00] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[23:00] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[23:03] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[23:03] * sarob (~sarob@2001:4998:effd:600:1cf1:d12:fb9b:4ae) Quit (Remote host closed the connection)
[23:03] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[23:03] * sarob (~sarob@2001:4998:effd:600:1cf1:d12:fb9b:4ae) has joined #ceph
[23:04] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:d414:e5f2:468f:2f3a) has joined #ceph
[23:06] * JC (~JC@AMontpellier-651-1-298-68.w92-143.abo.wanadoo.fr) Quit (Quit: Leaving.)
[23:06] * JC (~JC@AMontpellier-651-1-298-68.w92-143.abo.wanadoo.fr) has joined #ceph
[23:06] * houkouonchi is now known as houkouonchi-home
[23:07] * JC (~JC@AMontpellier-651-1-298-68.w92-143.abo.wanadoo.fr) Quit ()
[23:07] * JC (~JC@AMontpellier-651-1-298-68.w92-143.abo.wanadoo.fr) has joined #ceph
[23:10] * madkiss (~madkiss@2001:6f8:12c3:f00f:e0e4:2283:fffd:8187) Quit (Ping timeout: 480 seconds)
[23:11] * allsystemsarego (~allsystem@79.115.170.35) Quit (Quit: Leaving)
[23:12] * sarob (~sarob@2001:4998:effd:600:1cf1:d12:fb9b:4ae) Quit (Ping timeout: 480 seconds)
[23:14] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[23:18] * wabat (~wbatterso@65.182.109.4) has left #ceph
[23:18] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[23:18] <rweeks> ircolle, did you ask them if they wanted fire nasally fitted?
[23:19] * rweeks waits to see if anyone gets that reference
[23:22] * JC (~JC@AMontpellier-651-1-298-68.w92-143.abo.wanadoo.fr) Quit (Quit: Leaving.)
[23:22] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[23:22] * JC (~JC@AMontpellier-651-1-298-68.w92-143.abo.wanadoo.fr) has joined #ceph
[23:33] * baylight (~tbayly@69.169.150.21.provo.static.broadweavenetworks.net) Quit (Read error: Operation timed out)
[23:34] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[23:38] * thb (~me@port-31106.pppoe.wtnet.de) has joined #ceph
[23:38] * thb is now known as Guest4118
[23:39] <jobewan> is there a way to have ceph-deploy write out all of the config directives for the osds/mons/mds to my ceph.conf ?
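[editor's note] jobewan's question never gets an answer in the log. ceph-deploy itself mostly just pushes/pulls ceph.conf, but the effective per-daemon settings can be dumped with standard tools; a sketch (osd.0 and the hostname are placeholders):

```console
$ ceph daemon osd.0 config show      # effective config via the admin socket
$ ceph --show-config                 # built-in defaults merged with ceph.conf
$ ceph-deploy config pull <hostname> # fetch a node's current ceph.conf
```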
[23:39] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[23:41] * joef (~Adium@2601:9:2a00:690:8cd4:580c:7305:8b87) has joined #ceph
[23:41] * joef (~Adium@2601:9:2a00:690:8cd4:580c:7305:8b87) has left #ceph
[23:42] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[23:43] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[23:45] * baylight (~tbayly@74-220-196-40.unifiedlayer.com) has joined #ceph
[23:52] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[23:53] * rmoe (~quassel@12.164.168.117) Quit (Remote host closed the connection)
[23:54] * fdmanana (~fdmanana@bl9-170-214.dsl.telepac.pt) Quit (Quit: Leaving)
[23:56] * Kupo1 (~tyler.wil@wsip-68-14-231-140.ph.ph.cox.net) Quit (Read error: Connection reset by peer)
[23:59] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.