#ceph IRC Log


IRC Log for 2014-03-31

Timestamps are in GMT/BST.

[0:00] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[0:00] * leseb (~leseb@185.21.172.77) has joined #ceph
[0:03] * diegows_ (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[0:24] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[0:38] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[0:38] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[0:57] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[0:58] * The_Bishop_ (~bishop@2001:470:50b6:0:c59d:46b8:673b:e0b7) Quit (Ping timeout: 480 seconds)
[1:09] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[1:10] * yguang11_ (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[1:21] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:35] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[1:44] * diegows_ (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[1:44] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) Quit (Ping timeout: 480 seconds)
[1:46] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[1:48] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) has joined #ceph
[1:53] <kiwigeraint> is there any way to make scrubbing less painful ?
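For context on kiwigeraint's question: scrubbing is usually tamed with the OSD scrub throttles. A minimal ceph.conf sketch, using option names from this era's documentation; the values are illustrative, not recommendations:

    [osd]
        ; at most one concurrent scrub per OSD (the default)
        osd max scrubs = 1
        ; skip scheduled scrubs while the host load average exceeds this
        osd scrub load threshold = 0.5
        ; stretch deep scrubs out to once a week (seconds)
        osd deep scrub interval = 604800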
[2:00] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) Quit (Ping timeout: 480 seconds)
[2:14] * sjm (~sjm@70.42.157.29) has joined #ceph
[2:25] * zack_dolby (~textual@e0109-114-22-14-183.uqwimax.jp) has joined #ceph
[2:27] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Read error: Operation timed out)
[2:27] * LeaChim (~LeaChim@host86-162-2-97.range86-162.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:35] * mattt (~textual@S010690724001c795.vc.shawcable.net) has joined #ceph
[2:38] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) has joined #ceph
[2:39] <classicsnail> ...789723/72417372 objects degraded (1.091%); 12425 MB/s, 7237 objects/s recovering
[2:39] <classicsnail> mischan, whoops, sorry
[2:42] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[2:46] * sjm (~sjm@70.42.157.29) Quit (Quit: Leaving.)
[2:51] <aarontc> I have an interesting problem... with 5 OSDs down, if I bring more than 1 of them up at a time, all the ones I brought up crash saying they got into an invalid state :(
[2:52] <aarontc> osd/PG.cc: 5255: FAILED assert(0 == "we got a bad state machine event")
[2:52] <aarontc> (version 0.78)
[2:55] * mattt (~textual@S010690724001c795.vc.shawcable.net) Quit (Quit: Computer has gone to sleep.)
[2:57] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) Quit (Read error: Operation timed out)
[2:58] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[2:59] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[3:05] * diegows_ (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:06] * Karcaw (~evan@96-41-200-66.dhcp.elbg.wa.charter.com) has joined #ceph
[3:49] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:52] * yeled (~yeled@spodder.com) Quit (Quit: meh..)
[3:57] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:00] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (Remote host closed the connection)
[4:03] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[4:31] * yuriw1 (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) has joined #ceph
[4:31] * yuriw (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[4:47] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[4:59] * The_Bishop_ (~bishop@f055071121.adsl.alicedsl.de) has joined #ceph
[5:05] * Vacum (~vovo@i59F79E46.versanet.de) has joined #ceph
[5:06] * markbby (~Adium@168.94.245.4) has joined #ceph
[5:08] * shang (~ShangWu@175.41.48.77) has joined #ceph
[5:12] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:12] * Vacum_ (~vovo@88.130.192.126) Quit (Ping timeout: 480 seconds)
[5:17] * haomaiwang (~haomaiwan@117.79.232.210) Quit (Remote host closed the connection)
[5:17] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[5:21] * Cube (~Cube@12.248.40.138) has joined #ceph
[5:40] * mattt (~textual@S010690724001c795.vc.shawcable.net) has joined #ceph
[5:41] * fdmanana_ (~fdmanana@bl9-168-27.dsl.telepac.pt) has joined #ceph
[5:45] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[5:46] * mattt_ (~textual@92.52.76.140) has joined #ceph
[5:46] * mattt (~textual@S010690724001c795.vc.shawcable.net) Quit (Remote host closed the connection)
[5:46] * mattt_ is now known as mattt
[5:48] * fdmanana (~fdmanana@bl5-6-132.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[5:50] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[5:56] * haomaiwa_ (~haomaiwan@118.186.133.129) has joined #ceph
[5:56] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[6:02] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:02] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[6:02] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[6:10] * yanzheng (~zhyan@134.134.137.75) has joined #ceph
[6:10] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:13] * MACscr1 (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[6:23] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[6:27] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[6:28] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:29] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[6:37] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:38] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:49] * yeled (~yeled@spodder.com) has joined #ceph
[7:12] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:16] * yeled (~yeled@spodder.com) Quit (Quit: meh..)
[7:16] * mattt (~textual@92.52.76.140) Quit (Read error: Connection reset by peer)
[7:16] * yeled (~yeled@spodder.com) has joined #ceph
[7:16] * yeled (~yeled@spodder.com) Quit ()
[7:16] * yeled (~yeled@spodder.com) has joined #ceph
[7:56] * zack_dolby (~textual@e0109-114-22-14-183.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[8:10] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[8:33] * thanhtran (~thanhtran@123.30.135.76) has joined #ceph
[8:35] <thanhtran> is there any way to downgrade ceph from version firefly (0.78-367-gd9a2dea) to emperor (0.72.2)?
[8:46] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:51] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[8:51] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) has joined #ceph
[8:52] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[8:58] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit (Read error: Connection reset by peer)
[8:58] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: If you can't laugh at yourself, make fun of other people.)
[8:58] * HansDeLeenheer (~hansdelee@78-23-180-114.access.telenet.be) has joined #ceph
[9:02] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[9:02] * analbeard (~shw@141.0.32.125) has joined #ceph
[9:02] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[9:02] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:07] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (Quit: shimo)
[9:07] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[9:10] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:12] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[9:13] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:16] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[9:16] * ChanServ sets mode +v andreask
[9:19] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[9:24] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[9:25] * garphy`aw is now known as garphy
[9:25] * mkoderer (uid11949@id-11949.ealing.irccloud.com) has joined #ceph
[9:27] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[9:27] * hybrid512 (~walid@195.200.167.70) Quit ()
[9:27] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[9:29] * root__ (~root@176.28.50.139) Quit (Ping timeout: 480 seconds)
[9:37] * ksingh (~Adium@2001:708:10:10:4c02:70a5:6430:9880) has joined #ceph
[9:40] * thb (~me@2a02:2028:2e5:ca80:6267:20ff:fec9:4e40) has joined #ceph
[9:46] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[9:58] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has left #ceph
[10:01] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[10:07] * HansDeLeenheer (~hansdelee@78-23-180-114.access.telenet.be) Quit (Quit: HansDeLeenheer)
[10:11] * garphy is now known as garphy`aw
[10:13] * HansDeLeenheer (~hansdelee@78-23-180-114.access.telenet.be) has joined #ceph
[10:14] * garphy`aw is now known as garphy
[10:22] * LeaChim (~LeaChim@host86-162-2-97.range86-162.btcentralplus.com) has joined #ceph
[10:25] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:25] * vlad_ (~oftc-webi@124.105.60.242) has joined #ceph
[10:25] <vlad_> anybody online
[10:27] <vlad_> i got errors when trying to add monitors via ceph-deploy on centos 6.5 http://pastie.org/private/2ti7pv853g6rrnguxma8oa
[10:29] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[10:29] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[10:31] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[10:34] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[10:37] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:37] * allsystemsarego (~allsystem@5-12-37-194.residential.rdsnet.ro) has joined #ceph
[10:41] <jerker> vlad_: First time, just when starting cluster? I did this: "ceph-deploy new n1 n2 n3 n4 n5" "ceph-deploy install n1 n2 n3 n4 n5" "ceph-deploy create-initial"
[10:42] <jerker> vlad_: that worked. I am running SL6.5, almost the same as CentOS6.5 (also based on RHEL6)
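For reference, jerker's sequence is the standard quick-start flow; in ceph-deploy's own spelling the third step is "mon create-initial". A sketch with placeholder hostnames n1-n3:

    ceph-deploy new n1 n2 n3            # write ceph.conf and the initial mon keyring
    ceph-deploy install n1 n2 n3        # install the ceph packages on each node
    ceph-deploy mon create-initial      # create the monitors and gather keys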
[10:48] * jtangwk (~Adium@gateway.tchpc.tcd.ie) Quit (Quit: Leaving.)
[10:52] * jtangwk (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[11:03] * yanzheng (~zhyan@134.134.137.75) Quit (Quit: Leaving)
[11:05] * HansDeLeenheer (~hansdelee@78-23-180-114.access.telenet.be) Quit (Quit: HansDeLeenheer)
[11:11] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[11:13] * haomaiwa_ (~haomaiwan@118.186.133.129) Quit (Remote host closed the connection)
[11:13] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[11:14] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[11:20] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[11:29] * zack_dolby (~textual@e0109-114-22-12-245.uqwimax.jp) has joined #ceph
[11:34] <isodude> Hi, Is there any tool that monitors the same information as iostat, with a central tool? i.e. logging in to servers via ssh and fetching all the stats.
[11:34] * zack_dolby (~textual@e0109-114-22-12-245.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:36] * haomaiwa_ (~haomaiwan@117.79.232.177) has joined #ceph
[11:38] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[11:40] <Tene> Has anyone written much about migrating relational data into ceph? Maintaining your own indexes, denormalization, etc? I'm not looking for anything specific, just the general topic, to consider what types of data at my company would involve how much work to migrate.
[11:40] * jtangwk (~Adium@gateway.tchpc.tcd.ie) Quit (Remote host closed the connection)
[11:41] * zack_dolby (~textual@e0109-114-22-12-245.uqwimax.jp) has joined #ceph
[11:44] * zack_dolby (~textual@e0109-114-22-12-245.uqwimax.jp) Quit ()
[11:45] <vlad_> thanks jerker, ill try that
[11:49] <Tene> Also, is there any documentation on bringing up a ceph cluster without using ceph-deploy? I'm trying to figure out how to work it into our current cluster management system, and it would be nice to not have to reverse-engineer ceph-deploy.
[11:50] <Fruit> Tene: http://ceph.com/docs/master/install/manual-deployment/
[11:50] <Tene> Fruit: Thanks. :)
[11:52] * allsystemsarego (~allsystem@50c25c2.test.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[11:53] <ksingh> Anyone? How do I destroy an MDS service? I don't want to use MDS in my cluster
[11:58] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) has joined #ceph
[11:59] <Fruit> ksingh: remove from configfile and kill the daemon?
[12:04] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[12:11] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:13] <ksingh> MDS is not in ceph.conf file and daemon is not running
[12:13] <ksingh> ceph status says
[12:13] <ksingh> mdsmap e5: 1/1/1 up {0=storage0106-ib=up:active(laggy or crashed)}
[12:14] <ksingh> i need to remove MDS completely so that it should not appear in ceph status
[12:16] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[12:16] * vlad_ (~oftc-webi@124.105.60.242) Quit (Remote host closed the connection)
[12:17] * leseb (~leseb@185.21.172.77) has joined #ceph
[12:23] <Fruit> ksingh: you may have to use ceph mds getmap/setmap
[12:24] <Fruit> the documentation seems to be rather incomplete on this subject
[12:25] <ksingh> Thanks Fruit :-)
[12:27] <Fruit> ksingh: http://www.sebastien-han.fr/blog/2012/07/04/remove-a-mds-server-from-a-ceph-cluster/
[12:30] <ksingh> Oops, I missed this blog, thanks a ton
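A minimal sketch of what the linked post walks through, assuming a single MDS at rank 0 (sysvinit, as on ksingh's hosts); exact commands vary by release, so treat this as illustrative and follow the post for the full procedure:

    # on the host that ran the MDS daemon
    service ceph stop mds
    # then mark rank 0 failed so the mdsmap stops reporting it as laggy/crashed
    ceph mds fail 0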
[12:30] * allsystemsarego (~allsystem@188.25.131.129) has joined #ceph
[12:32] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[12:36] * thanhtran (~thanhtran@123.30.135.76) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[12:37] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[12:39] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[12:41] * i_m (~ivan.miro@gbibp9ph1--blueice1n2.emea.ibm.com) has joined #ceph
[12:43] * i_m (~ivan.miro@gbibp9ph1--blueice1n2.emea.ibm.com) Quit ()
[12:43] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[12:44] * neonDrag1n (~ndragon@host86-156-131-11.range86-156.btcentralplus.com) has joined #ceph
[12:46] <ksingh> Anyone: can we create a pool in which we can add OSDs from 10 different hosts?
[12:47] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[12:48] * neonDragon (~ndragon@host31-51-87-179.range31-51.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[12:56] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[12:56] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[13:00] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:05] * iggy (~iggy@theiggy.com) Quit (Quit: No Ping reply in 180 seconds.)
[13:05] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Read error: Operation timed out)
[13:06] * BillK (~BillK-OFT@58-7-59-126.dyn.iinet.net.au) Quit (Quit: ZNC - http://znc.in)
[13:07] * BillK (~BillK-OFT@58-7-59-126.dyn.iinet.net.au) has joined #ceph
[13:07] * neonDrag1n (~ndragon@host86-156-131-11.range86-156.btcentralplus.com) Quit (Read error: Operation timed out)
[13:08] * neonDragon (~ndragon@host86-168-87-82.range86-168.btcentralplus.com) has joined #ceph
[13:08] * iggy (~iggy@theiggy.com) has joined #ceph
[13:13] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) has joined #ceph
[13:13] * neonDrag1n (~ndragon@host86-134-161-205.range86-134.btcentralplus.com) has joined #ceph
[13:16] * neonDragon (~ndragon@host86-168-87-82.range86-168.btcentralplus.com) Quit (Read error: Operation timed out)
[13:20] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[13:25] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[13:27] * zidarsk8 (~zidar@2001:1470:fffe:fe01:e2ca:94ff:fe34:7822) has joined #ceph
[13:29] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[13:35] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[13:42] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[13:45] * ksingh1 (~Adium@2001:708:10:10:4c02:70a5:6430:9880) has joined #ceph
[13:45] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[13:46] * ksingh (~Adium@2001:708:10:10:4c02:70a5:6430:9880) Quit (Read error: Connection reset by peer)
[13:53] * zidarsk8 (~zidar@2001:1470:fffe:fe01:e2ca:94ff:fe34:7822) has left #ceph
[13:54] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[14:25] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[14:27] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[14:28] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[14:31] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:34] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[14:44] <jks> anyone using the leveldb rpms from ceph-extras on Fedora with emperor? (with success?) :)
[14:45] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[14:49] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[14:52] * glambert (~glambert@37.157.50.80) Quit (Quit: <?php exit(); ?>)
[15:00] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[15:03] * HansDeLeenheer (~hansdelee@78-23-180-114.access.telenet.be) has joined #ceph
[15:03] * glambert (~glambert@37.157.50.80) has joined #ceph
[15:07] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[15:12] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[15:21] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[15:31] * yanzheng (~zhyan@134.134.139.74) has joined #ceph
[15:40] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:42] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[15:43] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:48] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[15:51] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[15:53] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:54] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[15:57] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[15:58] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:02] * ircolle (~Adium@2601:1:8380:2d9:2d3c:c5ab:b417:3e55) has joined #ceph
[16:03] <jerker> ksingh: I do not have any custom crush rules for my pools, I just store in the default, using the OSDs from all the hosts for everything. It works. :)
[16:05] * ircolle (~Adium@2601:1:8380:2d9:2d3c:c5ab:b417:3e55) Quit ()
[16:07] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[16:10] <ksingh1> jerker: I was asking just out of curiosity
[16:11] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[16:13] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[16:14] * leseb (~leseb@185.21.172.77) has joined #ceph
[16:14] <jerker> ksingh1: you do that with a custom crush map http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
[16:15] <jerker> ksingh1: but if I understand you correctly, using the OSDs from all the different hosts is the default behaviour (weighted on size)
[16:17] * sjm (~sjm@cpe-67-248-135-198.nycap.res.rr.com) has joined #ceph
[16:20] <ksingh1> thanks for the link jerker and yes you are correct
[16:21] <aarontc> Most of my OSDs are crashing with "we got a bad state machine event" - here's a clean log from one: http://www.aarontc.com/logs/ceph-osd.4.log
[16:21] <ksingh1> but I want to specify the OSD names for a pool. I don't want CRUSH to select just any OSD.
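On ksingh1's follow-up: pinning a pool to specific OSDs is done with a custom CRUSH hierarchy plus a rule, then pointing the pool at that rule. A sketch with hypothetical names (root/rule "ssd", osd.0/osd.1, pool "mypool"):

    # fragment of a decompiled crush map
    root ssd {
        id -10
        alg straw
        hash 0
        item osd.0 weight 1.000
        item osd.1 weight 1.000
    }
    rule ssd {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step choose firstn 0 type osd
        step emit
    }

    # compile, inject, and point the pool at the rule
    crushtool -c map.txt -o map.bin
    ceph osd setcrushmap -i map.bin
    ceph osd pool set mypool crush_ruleset 4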
[16:21] <aarontc> I see several closed bugs with that message, but I don't know enough about this problem to know if it's the same thing or not
[16:25] <aarontc> should I file a bug? :)
[16:25] * via (~via@smtp2.matthewvia.info) Quit (Quit: bbl)
[16:25] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[16:26] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:27] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) has joined #ceph
[16:28] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) Quit ()
[16:29] <Svedrin> is there an equivalent for "ceph osd dump" that outputs json?
[16:31] <Svedrin> heh, --format json. ok then. :P
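For readers skimming later: the same flag works on most ceph subcommands, and piping through a formatter makes the output readable, e.g.:

    ceph osd dump --format json | python -m json.tool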
[16:32] * ksingh1 (~Adium@2001:708:10:10:4c02:70a5:6430:9880) Quit (Ping timeout: 480 seconds)
[16:38] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) has joined #ceph
[16:46] * yanzheng (~zhyan@134.134.139.74) Quit (Ping timeout: 480 seconds)
[16:50] * vata (~vata@2607:fad8:4:6:81f2:1b16:4789:7354) has joined #ceph
[16:53] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) Quit (Ping timeout: 480 seconds)
[16:53] * erwan_taf (~erwan@83.167.43.235) has joined #ceph
[16:53] <erwan_taf> yo
[16:53] * ismell (~ismell@host-64-17-89-79.beyondbb.com) has joined #ceph
[16:54] * zoltan (~zoltan@2001:620:20:222:97:e7c3:c8ef:3976) has joined #ceph
[16:54] <zoltan> hey guys
[16:54] <erwan_taf> while looking at ceph's code, I found that a lot of the code uses gettimeofday, which isn't that safe.
[16:54] <zoltan> I'm setting up a new system and since 14.04 is so close, I'm thinking of installing the current 14.04 there
[16:54] <erwan_taf> what do you think about switching to clock_gettime monotonic ?
[16:54] <zoltan> I did find the trusty packages on gitbuilder, but cannot find them in emperor's deb repo
[16:54] <erwan_taf> which is much safer
[16:55] <zoltan> should I just go ahead, install 12.04 with emperor, and once 14.04 is out do a rolling upgrade?
[16:55] <zoltan> to 14.04 and firefly :)
[16:59] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[16:59] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[17:02] * valeech (~valeech@64.191.222.117) has joined #ceph
[17:02] * sroy (~sroy@207.96.182.162) has joined #ceph
[17:05] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) has joined #ceph
[17:08] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[17:09] <glambert> zoltan, wouldn't it make more sense to wait until some patches have come out for 14.04 first? there inevitably will be some, and I'd rather stick on 12.04 until then
[17:09] <glambert> in production anyway
[17:13] * analbeard (~shw@141.0.32.125) Quit (Quit: Leaving.)
[17:19] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[17:21] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) Quit (Ping timeout: 480 seconds)
[17:22] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[17:22] <zoltan> glambert, if firefly will be supported on 12.04, then sure
[17:22] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:23] <glambert> zoltan, don't know, I would certainly hope so given 12.04 itself is being supported for another 3 years
[17:27] * sprachgenerator (~sprachgen@130.202.135.204) has joined #ceph
[17:30] <zoltan> yes, but for caching you're going to need a new kernel anyway
[17:33] <jerker> erwan_taf: I have no idea (i am not a developer) but do write about it in #ceph-devel (or the mailinglist)
[17:33] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Ping timeout: 480 seconds)
[17:34] <zoltan> is anybody exporting rbd via iSCSI for vSphere? :)
[17:35] <zoltan> in the past I used normal ietd, but it doesn't support SCSI-3 persistent reservation which will be needed for a normal DRS cluster
[17:36] * oms101 (~oms101@2620:113:80c0:5::2222) has joined #ceph
[17:37] <stewiem20001> zoltan: Yup; currently using the built-in LIO/TCM/targetcli stuff. Seems to be working :)
[17:41] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[17:44] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:44] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[17:46] * fedgoatbah (~fedgoat@cpe-24-28-22-21.austin.res.rr.com) has joined #ceph
[17:47] <zoltan> and you're using the export from multiple hypervisors?
[17:51] * fedgoat (~fedgoat@cpe-24-28-22-21.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:52] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[17:55] <zoltan> stewiem20001, what's TCM anyway?
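As an aside on zoltan's question: TCM is LIO's in-kernel target core that the fabric modules (iSCSI among them) plug into, and targetcli is the userspace shell for configuring it; LIO also implements the SCSI-3 persistent reservations zoltan found missing in ietd. A sketch of the kind of export stewiem2000 describes, with a hypothetical pool/image and IQNs:

    rbd map data/vmstore        # kernel RBD client; the image appears as /dev/rbd0

    # then, inside the targetcli shell:
    /backstores/block create name=vmstore dev=/dev/rbd0
    /iscsi create iqn.2014-03.com.example:vmstore
    /iscsi/iqn.2014-03.com.example:vmstore/tpg1/luns create /backstores/block/vmstore
    /iscsi/iqn.2014-03.com.example:vmstore/tpg1/acls create iqn.1998-01.com.vmware:esx1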
[17:57] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) has joined #ceph
[17:59] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Quit: jlogan)
[18:00] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[18:00] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit ()
[18:01] * via (~via@smtp2.matthewvia.info) has joined #ceph
[18:01] * mkoderer (uid11949@id-11949.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[18:01] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[18:03] * TheBittern (~thebitter@195.10.250.233) Quit (Remote host closed the connection)
[18:03] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[18:04] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) Quit (Quit: Ex-Chat)
[18:04] * HansDeLeenheer (~hansdelee@78-23-180-114.access.telenet.be) Quit (Quit: HansDeLeenheer)
[18:04] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[18:05] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:06] * ircolle (~Adium@2601:1:8380:2d9:2d3c:c5ab:b417:3e55) has joined #ceph
[18:07] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) has joined #ceph
[18:11] * garphy is now known as garphy`aw
[18:11] * mdjp (~mdjp@213.229.87.114) Quit (Ping timeout: 480 seconds)
[18:12] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[18:12] * garphy`aw is now known as garphy
[18:12] * mdjp (~mdjp@213.229.87.114) has joined #ceph
[18:18] * danieagle (~Daniel@186.214.53.19) has joined #ceph
[18:21] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[18:21] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[18:23] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[18:24] * TMM (~hp@c97185.upc-c.chello.nl) has joined #ceph
[18:25] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[18:26] * Cube (~Cube@66-87-130-209.pools.spcsdns.net) has joined #ceph
[18:28] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:29] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[18:30] * Gamekiller77 (~Gamekille@2001:420:28c:1007:f889:d84:a530:e871) has joined #ceph
[18:31] <Gamekiller77> hello all, I'm having a problem with the documentation for stopping a single OSD on CentOS. stop ceph-osd id=4 is not working
[18:31] <jks> have you tried something similar to: service ceph stop osd.4
[18:32] <Gamekiller77> no i have not
[18:32] <Gamekiller77> good idea
[18:32] <Gamekiller77> stop is there
[18:32] <Gamekiller77> but it does not see ceph-osd as a job
[18:32] * Underbyte (~jerrad@pat-global.macpractice.net) Quit (Quit: Linkinus - http://linkinus.com)
[18:32] <Gamekiller77> need to see how stop list jobs
[18:33] <Gamekiller77> but that worked
[18:33] <Gamekiller77> I'll file a bug with Inktank to fix that document
[18:34] <zoltan> service ceph-osd restart id=12
[18:34] <zoltan> try like this
[18:34] <zoltan> s/restart/stop/ :)
[18:36] <Gamekiller77> no the service ceph stop osd.X worked fine
[18:36] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[18:36] <Gamekiller77> now to find out why the journal partition is not where it should be
[18:37] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:38] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit (Quit: Leaving.)
[18:40] * Underbyte (~jerrad@pat-global.macpractice.net) has joined #ceph
[18:41] * joef (~Adium@2620:79:0:131:976:f4aa:48a1:4c1a) has joined #ceph
[18:41] * mattt (~textual@S010690724001c795.vc.shawcable.net) has joined #ceph
[18:45] * carif (~mcarifio@ip-37-25.sn1.eutelia.it) has joined #ceph
[18:45] * carif (~mcarifio@ip-37-25.sn1.eutelia.it) Quit ()
[18:47] * gregsfortytwo (~Adium@2607:f298:a:607:99f3:4afe:a112:d22b) Quit (Quit: Leaving.)
[18:47] * mattt_ (~textual@92.52.76.140) has joined #ceph
[18:47] * gregsfortytwo (~Adium@2607:f298:a:607:99f3:4afe:a112:d22b) has joined #ceph
[18:47] * mattt (~textual@S010690724001c795.vc.shawcable.net) Quit (Read error: Connection reset by peer)
[18:47] * mattt_ is now known as mattt
[18:48] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[18:50] * KaZeR (~KaZeR@64.201.252.132) has joined #ceph
[18:50] <KaZeR> hi there
[18:52] * ghartz (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) has joined #ceph
[18:52] * garphy is now known as garphy`aw
[18:53] <KaZeR> i'm setting up ceph for the first time, and after spending a few hours on it i have the monitor working, but cannot get the OSD to start
[18:54] <KaZeR> (i have to do a manual setup, because i'm running gentoo)
[18:56] <KaZeR> ceph status gives me : 2014-03-31 09:55:54.223562 7fa7a41aa700 0 -- :/1024739 >> 172.16.103.2:6789/0 pipe(0x7fa7a0010d90 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa7a000e050).fault
[18:56] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[18:56] <KaZeR> my ceph.conf : http://bpaste.net/show/Cxb0mGMFQXyrKG9enkyU/
[18:57] <KaZeR> my monmap: http://bpaste.net/show/o4u8dYHRXzhE8KQzdtn0/
[18:57] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[18:58] <KaZeR> startup of ceph using the init script : http://bpaste.net/show/edV531J7U2PPV1WKhQLS/ here i'm getting a weird awk error
[18:59] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit ()
[18:59] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[19:03] * The_Bishop__ (~bishop@f055083171.adsl.alicedsl.de) has joined #ceph
[19:07] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[19:07] * zoltan (~zoltan@2001:620:20:222:97:e7c3:c8ef:3976) Quit (Quit: Leaving)
[19:09] * The_Bishop_ (~bishop@f055071121.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[19:11] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:14] * fdmanana_ is now known as fdmanana
[19:17] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:17] * sjustwork (~sam@2607:f298:a:607:b97d:45ab:c245:7891) has joined #ceph
[19:25] <ponyofdeath> hi, after changing value of filestore_xattr_use_omap in ceph.conf what services do i have to restart?
[19:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[19:26] * sarob (~sarob@2601:9:7080:13a:3415:f76e:6568:c7da) has joined #ceph
[19:32] * sjm (~sjm@cpe-67-248-135-198.nycap.res.rr.com) Quit (Ping timeout: 480 seconds)
[19:32] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:33] <mjevans> KaZeR: your mon may be /loaded/ but the cluster is not configured correctly and/or you don't have authentication setup properly. You are literally not connecting to the expected monitor correctly.
[19:34] <mjevans> If your monitor(s) are loaded and communicating, even without OSDs, you'll be able to get a cluster status (which will indicate various health problems... but still you'll get a valid /result/)
[19:34] * mattt (~textual@92.52.76.140) Quit (Read error: Connection reset by peer)
[19:34] * sarob (~sarob@2601:9:7080:13a:3415:f76e:6568:c7da) Quit (Ping timeout: 480 seconds)
[19:35] <KaZeR> thanks mjevans
[19:35] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[19:36] <KaZeR> in my mon log i have things like 014-03-31 10:36:00.820921 7ff55f7fe700 1 mon.storage-01a@0(leader).paxos(paxos active c 1..109) is_readable now=2014-03-31 10:36:00.820921 lease_expire=2014-03-31 10:36:02.869785 has v0 lc 109
[19:37] <KaZeR> how can i find if it's a config or authentication issue ?
[19:37] <mjevans> You probably don't have the keyrings setup properly... ceph-deploy really helps with that.
[19:37] * sjustwork (~sam@2607:f298:a:607:b97d:45ab:c245:7891) has left #ceph
[19:38] <mjevans> KaZeR: well, you could try disabling cephx and reloading the mons to see if things start working.
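For concreteness, "disabling cephx" means setting the auth options to none on every node and restarting the monitors; a sketch using the standard option names (for debugging only):

    [global]
        auth cluster required = none
        auth service required = none
        auth client required = none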
[19:39] <KaZeR> yeah, but ceph-deploy does not work if you're not on ubuntu/suse. I tried to use it, and even started writing a gentoo module, but on the other hand I'd like to do this first setup manually to understand all the steps
[19:39] <KaZeR> thanks mjevans trying right now
[19:39] <alfredodeza> KaZeR: wait what
[19:39] <alfredodeza> how it doesn't work in Ubuntu?
[19:39] <alfredodeza> :/
[19:39] <mjevans> alfredodeza: read again carefully
[19:39] <KaZeR> :)
[19:39] <alfredodeza> ah
[19:39] <alfredodeza> *read that the other way*
[19:40] <alfredodeza> KaZeR: what OS are you on?
[19:40] <KaZeR> alfredodeza: gentoo
[19:40] <alfredodeza> oh I see
[19:40] <alfredodeza> oh and that uses Python 3 too right?
[19:40] <alfredodeza> because I would suggest just to pip install
[19:40] <KaZeR> no, you can use 2.7 or 3. you can have both installed
[19:40] <alfredodeza> ah ok
[19:40] <alfredodeza> then you could just pip install it
[19:40] <alfredodeza> `pip install ceph-deploy` with python 2.7 should work
[19:40] <KaZeR> the issue is not to get ceph-deploy installed, the issue is that there's no gentoo module currently
[19:41] <mjevans> KaZeR: you don't need to use it for /everything/
[19:41] <alfredodeza> why would you need a gentoo module if you can get it installed via a Python package manager?
[19:41] <KaZeR> alfredodeza: i'm talking about this module : /usr/lib64/python2.7/site-packages/ceph_deploy/hosts/gentoo/mon/create.py
[19:41] <KaZeR> ceph-deploy module, not python module
[19:41] <alfredodeza> I see
[19:42] <alfredodeza> KaZeR: however, those are for *remote* nodes
[19:42] <alfredodeza> not the local/admin one
[19:42] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[19:42] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) has joined #ceph
[19:42] <alfredodeza> so you could use ceph-deploy, just as long as you are not deploying to remote nodes that are also gentoo
[19:43] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[19:43] <alfredodeza> if that is what you want, I am sure ceph-deploy is going to be the last of your troubles as Ceph does not test/support Gentoo
[19:43] <KaZeR> mjevans: as a bonus, i'm using fusionIO cards and the ceph-disk utility does not find them : http://bpaste.net/show/DssBKp1AovDjApTmzZQE/
[19:43] <mjevans> alfredodeza: it should be able to work for the other things too... as long as ceph is already installed...
[19:43] * Pedras (~Adium@64.191.206.83) has joined #ceph
[19:44] <KaZeR> alfredodeza: but all my nodes are gentoo. and also i am comfortable with the idea of doing this setup manually, because it helps understanding how it works / what's needed imo
[19:44] <mjevans> KaZeR: you should search the ceph bugtracker and/or file a bug about that.
[19:45] <alfredodeza> mjevans: what is the bug?
[19:45] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit (Ping timeout: 480 seconds)
[19:45] <mjevans> If you look a little harder there is a guide for manual deployment...
[19:45] <KaZeR> i guess it's just an issue with the pattern used to search for the disks
[19:45] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[19:46] <KaZeR> mjevans: i'm currently following this guide : https://github.com/gc3-uzh-ch/ansible-playbooks/blob/master/roles/ceph.notes-on-deployment.rst which i find quite comprehensive
[19:46] <mjevans> alfredodeza: KaZeR has observed that ceph-disk list does not check non-generic block devices, such as /dev/fioa (for his fusion io card). Of course this does not prevent him from manually specifying such devices...
[19:46] <KaZeR> devs = /dev/fioa :)
[19:46] <mjevans> KaZeR: http://ceph.com/docs/master/install/manual-deployment/
[19:46] <KaZeR> should i use fioa or fioa2 ?
[19:46] <KaZeR> thanks mjevans
[19:47] <mjevans> Without the context I do not know for sure, however in MOST cases if you're specifying a partition table you don't want to specify the raw device in ceph
[19:48] <KaZeR> what's the best way to start from scratch? is removing the monmap file sufficient? where is the other data kept? (how can I do a ceph-deploy purge :) ?)
[19:48] <mjevans> I /do/ happen to use partition tables, just so that it is clear the device has been initialized.
[19:48] <mjevans> DO NOT DO THAT
[19:48] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[19:48] <mjevans> ceph-deploy purge will uninstall packages/etc as well.
[19:48] <KaZeR> yeah i know i wrote the gentoo counterpart :D
[19:48] <mjevans> There is a different, proper, command.
[19:48] <KaZeR> ok thanks
[19:48] <mjevans> I can't recall it offhand
[19:49] <KaZeR> i'm restarting from scratch using your link. expect more questions :D
[19:50] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[19:51] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[19:53] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:56] <KaZeR> mjevans: do you see any obvious mistake in my previous ceph.conf or can i reuse it for this attempt ? http://bpaste.net/show/Cxb0mGMFQXyrKG9enkyU/
[19:56] <mjevans> KaZeR: Don't expect answers... I'm still very new to this my self.
[19:57] <KaZeR> ok. would you mind sharing your own ceph.conf file ?
[19:57] <mjevans> A lack of whitespace for readability...
[19:57] <mjevans> I would actually
[19:57] <KaZeR> ok
[19:57] <mjevans> You need mon_initial_members / hosts; otherwise the cluster doesn't know how to bootstrap
[19:58] <mjevans> You also NEED to have 1 or 3+ monitors - an odd number of monitors.
[19:58] * Gamekiller77 (~Gamekille@2001:420:28c:1007:f889:d84:a530:e871) Quit (Quit: This computer has gone to sleep)
[19:59] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[19:59] <KaZeR> oh, I didn't know that, thanks
[19:59] <mjevans> Your hosts are confusing me a bit... mostly that you specified the address for one for some reason but not the others.
[20:00] <mjevans> The monitors should be able to 'vote' and reach decisions...
[20:00] <mjevans> having 2 means high risk of a tie in the event that one desyncs
[20:00] <mjevans> I suppose if you have an entire DC worth of racks an even number doesn't matter
[20:01] <KaZeR> mmm. my current setup has 4 nodes total. should i use 3 or 4 of them as monitors? (my current plan was to use 2 of them)
[20:01] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) has joined #ceph
[20:01] <mjevans> I honestly don't know... I'd start out with 3
[20:01] <KaZeR> ok thanks
[20:01] <mjevans> I would actually start out with all four setup to be able to be monitors as well, but only 3 active.
[20:02] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[20:02] <KaZeR> so should i put the 4 of them as "mon_initial_members" ?
[20:02] <mjevans> No
[20:02] <KaZeR> ah
[20:02] <mjevans> 'initial members' are the members that must be present for a cold boot of the cluster.
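A minimal [global] fragment of the sort being discussed, with hypothetical names and addresses; mon_initial_members only matters when the cluster first forms a quorum:

    [global]
        fsid = a7f64266-0894-4f1e-a635-d0aeacfe03ea    ; example uuid
        mon_initial_members = node1, node2, node3
        mon_host = 172.16.103.1, 172.16.103.2, 172.16.103.3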
[20:04] <mjevans> If you have anything other than one OSD per host, I'd also prepare a custom crush map that describes your failure domains
[20:04] <KaZeR> ok makes sense
[20:12] * bitblt (~don@128-107-239-233.cisco.com) has joined #ceph
[20:15] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[20:17] <mjevans> I'm actually a bit stuck with the 'new' way of describing ceph osds... ceph-deploy can prepare and add them to the cluster nicely enough... but it doesn't actually seem to add them to the config files. This is quite frustrating as I'd prefer as automatic a recovery from unexpected cold-start of the cluster as possible... (presuming it 'shutdown' cleanly on say, a UPS)
[20:17] * alram (~alram@38.122.20.226) has joined #ceph
[20:18] <mjevans> Previously I was manually mounting the block devices in the OS, but now this should be handled by ceph, along with dedicated journal partitions.
[20:18] <mjevans> I just can't see where to actually describe that in the ceph.conf
[20:23] * nhm (~nhm@174-20-103-90.mpls.qwest.net) has joined #ceph
[20:23] * ChanServ sets mode +o nhm
[20:23] <dwm> mjevans: My strategy is going to be to put as little state in the ceph.conf as possible.
[20:24] <dwm> Given it's strictly redundant with the cluster-wide state maintained by the MON nodes for many things, I'm anticipating only putting global configuration assertions there.
[20:27] <mjevans> dwm: that's great if you've got the ability to keep mons up all the time. I'm far more pragmatic given that I have an 'unfixable' (with allocatable resources) failure domain resting on a single (if large) UPS.
[20:27] * gNetLabs (~gnetlabs@188.84.22.23) Quit (Read error: Connection reset by peer)
[20:28] * mschiff (~mschiff@mx10.schiffbauer.net) has joined #ceph
[20:28] <mjevans> I would prefer to place configuration data where expected and not in rc.local or the equivalent.
[20:29] <mjevans> I just don't currently see how to describe within the OSD configuration sections that the filesystem is located at (block device) and the journal for the OSD is located on (block device) like ceph-deploy now expects.
[20:30] <dwm> mjevans: I believe you set the configuration rule 'osd journal' with the path to the file / block-device being used.
[20:30] <dwm> See also: http://ceph.com/docs/master/rados/configuration/osd-config-ref/
[20:31] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[20:31] * JCL (~JCL@c-24-23-166-139.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:31] <dwm> (If you are using a standard pattern for such in terms of OSD ids, etc., then you can set this once in the [osd] section and use variable substitution, as the default does.
[20:31] * ircolle (~Adium@2601:1:8380:2d9:2d3c:c5ab:b417:3e55) Quit (Quit: Leaving.)
[20:31] <dwm> ceph-deploy sets up symlinks from inside the Ceph OSD mount-point; you could instead do that.)
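Concretely, the pattern dwm describes looks like this in ceph.conf; the path below is the documented default, the size is illustrative:

    [osd]
        osd journal = /var/lib/ceph/osd/$cluster-$id/journal
        osd journal size = 1024    ; MB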
[20:32] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[20:33] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[20:33] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) Quit (Quit: This computer has gone to sleep)
[20:33] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit (Ping timeout: 480 seconds)
[20:34] * Gamekiller77 (~Gamekille@128-107-239-236.cisco.com) has joined #ceph
[20:36] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[20:37] <Fruit> having the journal on the same disk as the data does affect the performance
[20:40] <KaZeR> i'm confused. with this ceph.conf http://bpaste.net/show/cnTK7FtJuHoo2jEGOGYu/ i built my monmap file. but when starting the monitor on my nodes it complains that it does not match : http://i.imgur.com/YyfXxQ2.png
[20:40] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[20:41] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[20:42] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[20:43] * leseb (~leseb@185.21.172.77) has joined #ceph
[20:43] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit ()
[20:44] * sage (~quassel@2607:f298:a:607:3c97:85a6:2361:88b1) Quit (Remote host closed the connection)
[20:44] * sage (~quassel@2607:f298:a:607:6019:ffec:4474:37ff) has joined #ceph
[20:44] * ChanServ sets mode +o sage
[20:44] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[20:45] * Underbyte (~jerrad@pat-global.macpractice.net) Quit (Quit: Linkinus - http://linkinus.com)
[20:49] <mjevans> dwm: yeah, but what about specifying the block device for the OSD data partition instead of an already mounted path?
[20:52] <Fruit> or identify the filesystems by their labels
[20:53] <Fruit> devs = /dev/disk/by-label/ceph-$id
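A sketch of the label approach end to end, assuming XFS, OSD id 4, and a hypothetical device:

    mkfs.xfs -L ceph-4 /dev/sdb1    # label the data filesystem at mkfs time

    # ceph.conf - the init script mounts whatever devs points at
    [osd.4]
        devs = /dev/disk/by-label/ceph-4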
[20:57] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:57] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[20:57] * ircolle1 (~Adium@170.188.255.191) has joined #ceph
[20:58] * sarob (~sarob@2001:4998:effd:600:3570:bc18:cf36:1b7a) has joined #ceph
[21:01] * ircolle (~Adium@mobile-166-147-081-141.mycingular.net) has joined #ceph
[21:01] * ircolle (~Adium@mobile-166-147-081-141.mycingular.net) Quit (Read error: Connection reset by peer)
[21:04] <KaZeR> http://ceph.com/docs/master/install/manual-deployment/#adding-osds refers to a bootstrap-osd/{cluster}.keyring but it's not defined earlier... I tried to use the ceph.mon.keyring and ceph.client.admin.keyring which are created earlier in this doc, but neither of them works: Error connecting to cluster: PermissionError
[21:06] * sjm (~sjm@cpe-67-248-135-198.nycap.res.rr.com) has joined #ceph
[21:07] * ircolle1 (~Adium@170.188.255.191) Quit (Quit: Leaving.)
[21:14] <Gugge-47527> KaZeR: ceph auth list
[21:15] <Gugge-47527> should give you enough info to create your own file :)
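If the missing piece is the bootstrap-osd keyring the manual-deployment page assumes, a sketch of creating it with the admin key; the caps use the bootstrap-osd profile, which existed by this era, but treat the exact form as an assumption:

    mkdir -p /var/lib/ceph/bootstrap-osd
    ceph auth get-or-create client.bootstrap-osd \
        mon 'allow profile bootstrap-osd' \
        -o /var/lib/ceph/bootstrap-osd/ceph.keyring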
[21:19] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[21:23] * fghaas (~florian@sccc-66-78-236-243.smartcity.com) has joined #ceph
[21:26] * gNetLabs (~gnetlabs@188.84.22.23) has joined #ceph
[21:28] <jks> anyone knows how to tackle dependencies on Fedora 20 with Ceph RPMs from ceph.com and Qemu?
[21:29] <jks> qemu depends on "ceph-libs >= 0.61", but ceph.com does not provide a ceph-libs package
[21:30] * janos is also interested in the answer to jks's question
[21:30] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[21:31] * fghaas (~florian@sccc-66-78-236-243.smartcity.com) Quit (Quit: Leaving.)
[21:33] * ircolle (~Adium@170.188.255.191) has joined #ceph
[21:35] * sjm1 (~sjm@cpe-67-248-135-198.nycap.res.rr.com) has joined #ceph
[21:35] * sjm (~sjm@cpe-67-248-135-198.nycap.res.rr.com) Quit (Quit: Leaving.)
[21:37] <mjevans> Fruit: devs, thanks
[21:38] <mjevans> Yeah, Fruit 'devs' isn't documented http://ceph.com/docs/master/rados/configuration/osd-config-ref/
[21:38] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[21:38] <Fruit> that might be because it's used by the init script and not ceph itself
[21:39] * valeech (~valeech@64.191.222.117) Quit (Quit: valeech)
[21:40] <mjevans> Ah
[21:42] <jks> janos, I installed the qemu rpms manually with --nodeps, and that so far seems to be working.. but not really ideal
[21:42] <Fruit> osd mkfs type = xfs
[21:42] <Fruit> osd mount options xfs = noatime,nobarrier,logbufs=8,logbsize=256k,logdev=/dev/raid/xfs-$id
[21:42] <Fruit> those are other options we use
[21:42] <janos> jks, yeah i'm looking for low overhead as far as me remembering machine config differences like that
[21:42] <janos> too many other things on my mind ;)
[21:44] <ponyofdeath> hey guys, trying to update the permissions of a client with this command http://paste.ubuntu.com/7186511 and I get Error EINVAL: any ideas what I am doing wrong?
[21:44] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[21:45] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) has joined #ceph
[21:46] <mjevans> I might end up with the xfs option... however for the moment I'm trying to setup a btrfs test as it's time to try that again.
[21:46] <jks> janos, it is relatively painless compared to some of the problems I had with earlier versions, where I had to roll my own qemu, etc :)
[21:46] <jks> now if only I could wrap my head around why NIC bonding isn't working on F20 when it works fine on earlier Fedora versions :|
[21:46] <janos> i have yet to venture into rolling my own qemu
[21:46] <janos> i have nic bonding working on f20. round robin i think
[21:47] <janos> i've only tgested mode 0 and mode 1
[21:47] <jks> if you have time, I would be very curious to know how you got that working
[21:47] <jks> I can set it up manually, which works fine... but trying to set it up using the ordinary ifcfg-bond0 , etc. scripts ... not working like it used to
[21:47] <janos> well i never use NetworkManager, so that's step one
[21:47] <janos> oh
[21:47] <jks> me neither
[21:47] <janos> that's all i do - just the usual ifcfg*
[21:48] <jks> janos, hmm.. odd! when I reboot my machine, the bond0 interface is not there - and it hasn't loaded the bonding kernel module
[21:48] <Fruit> ponyofdeath: you're trying to create a key that already exists. perhaps ceph-authtool can change the associated caps
[21:48] <janos> i can pull up the bond0 config for a machine that has ceph on it
[21:48] <janos> it should look pretty typical to you though
[21:48] <jks> janos, if I then do a restart of the network service, it will load the kernel module and up bond0, but seemingly does not ifenslave the slaves
[21:49] <jks> janos, that would be very helpful to me, thanks :)
[21:49] <janos> when i install fedora i always do minimal install, then add things as necessary
[21:49] <ponyofdeath> Fruit: thanks! i guess i should use put
[21:49] <janos> gimme a sec, and i'll see if i can find it
[21:49] * sroy (~sroy@207.96.182.162) Quit (Quit: Quitte)
[21:49] <jks> janos, same principle for me :)
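For reference, a minimal ifcfg pair of the kind janos goes on to share, with hypothetical device names and addressing (mode=1 is active-backup):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=1 miimon=100"
    BOOTPROTO=none
    IPADDR=192.168.1.10
    PREFIX=24
    ONBOOT=yes
    NM_CONTROLLED=no

    # /etc/sysconfig/network-scripts/ifcfg-em1 (one per slave NIC)
    DEVICE=em1
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    NM_CONTROLLED=no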
[21:59] * sarob (~sarob@2001:4998:effd:600:3570:bc18:cf36:1b7a) Quit (Remote host closed the connection)
[21:59] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:05] <mjevans> I'm thinking that /dev/disk/by-partuuid/ entries will be the /most/ stable for an unchanged disk... but might go with /dev/disk/by-id/ (I want to avoid this though in case device naming changes). However I'm a little concerned about by-partuuid as it seems possibly a bit new: apparently it might also just be a Debian thing for GPT-partitioned disks: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=681809
[22:05] * wmat (wmat@wallace.mixdown.ca) has joined #ceph
[22:05] * sarob_ (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:07] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[22:07] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:08] <devicenull> is it possible to centrally manage authorization for multiple different ceph clusters? Based on what I see, I'm thinking not
[22:09] * allsystemsarego (~allsystem@188.25.131.129) Quit (Quit: Leaving)
[22:10] * joao (~joao@a95-92-33-54.cpe.netcabo.pt) Quit (Remote host closed the connection)
[22:10] * valeech (~valeech@64.191.222.117) has joined #ceph
[22:11] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[22:11] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit ()
[22:12] * Pedras (~Adium@64.191.206.83) Quit (Quit: Leaving.)
[22:12] <aarontc> A bunch of my OSDs keep crashing with 'osd/PG.cc: 5255: FAILED assert(0 == "we got a bad state machine event")'... is this a bug? http://www.aarontc.com/logs/ceph-osd.4.log
[22:12] * davidzlap (~Adium@ip68-5-239-214.oc.oc.cox.net) has joined #ceph
[22:13] <aarontc> I have found several similar-looking bugs in redmine already, but I can't tell if this problem is the same based on the line number
[22:14] * valeech (~valeech@64.191.222.117) Quit ()
[22:23] <joshd> aarontc: looks like it might be a new one to me
[22:24] <aarontc> joshd: I was afraid of that. Thanks for looking... is there some logging I should crank up to help track down the issue?
[22:24] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[22:24] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[22:26] <joshd> aarontc: ms = 1, osd = 20, filestore = 20 would help if any logs would
[22:26] <aarontc> joshd: Okay, I'll tweak that now. What could help other than logs?
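Those levels can also be raised on a live daemon, without a restart, e.g. for the hypothetical osd.4:

    ceph tell osd.4 injectargs '--debug-ms 1 --debug-osd 20 --debug-filestore 20'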
[22:26] <KaZeR> thanks Gugge-47527. ceph auth list lists only one entry, client.admin, which match what i have in my keyring
[22:27] * ircolle (~Adium@170.188.255.191) Quit (Quit: Leaving.)
[22:27] <joshd> aarontc: you could try a newer version (maybe the master branch even) to make sure it's still an issue, and not just a slightly different symptom
[22:28] <aarontc> joshd: hmm, that sounds dangerous :)
[22:29] <joshd> aarontc: if you've got the log from the first time it crashed that'll have some move info dumped from the in-memory log, which wouldn't be the same later
[22:30] <aarontc> joshd: I have many gigabytes of log files, I'm not 100% sure where the first crash occurred but I'm sure I have it captured for at least one OSD. The challenge will be tracking down where in the file the issue is
[22:31] <joshd> aarontc: you can grep for 'assert' to find the earliest one
[22:31] * sarob_ (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[22:31] <joshd> since that'll have a date and time
[22:32] * sarob (~sarob@2001:4998:effd:600:a869:8c2f:b813:207b) has joined #ceph
[22:32] * andelhie_ (~Gamekille@128-107-239-235.cisco.com) has joined #ceph
[22:32] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[22:32] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[22:33] * Gamekiller77 (~Gamekille@128-107-239-236.cisco.com) Quit (Read error: Operation timed out)
[22:33] <aarontc> Okay, I found a logfile I didn't remove or replace, it has all the failures since the cluster was created, and it's only 111MiB :)
[22:33] <joshd> excellent!
[22:34] <aarontc> Do you want the whole thing?
[22:34] * sarob_ (~sarob@2001:4998:effd:600:a507:8f4c:387b:a43b) has joined #ceph
[22:34] <joshd> sure, it's small enough as is
[22:35] <aarontc> Cool, sending it to the webserver now
[22:35] <aarontc> The first 'assert' in this logfile was with version 0.72.1 :)
[22:37] <aarontc> joshd: http://www.aarontc.com/logs/ceph-osd.11.log
[22:38] <joshd> aarontc: surprised no one else has reported it if it was happening since emperor
[22:39] <aarontc> joshd: well, I'm not sure those are all the same issue.. I can see just with that grep that the line number is different
[22:39] <aarontc> I could have encountered different problems over the various versions
[22:39] <joshd> the line numbers changed between versions too
[22:40] <joshd> the backtrace looks the same
[22:40] * sarob (~sarob@2001:4998:effd:600:a869:8c2f:b813:207b) Quit (Ping timeout: 480 seconds)
[22:40] <aarontc> joshd: Yeah. Well, I'm happy to do whatever I can to help corral the bug if that's what it is... just let me know :)
[22:41] <aarontc> If it matters, the problem seems to occur less frequently if I restart the crashed OSDs one at a time, and let it fully recover that OSD before starting the next
[22:41] <aarontc> (but not always)
[22:43] <joshd> aarontc: could you file a bug about it? sjust will probably want to look into it more
[22:43] <aarontc> Sure. Should I attach the logfile or leave it a hyperlink?
[22:43] <joshd> link is fine
[22:44] <joshd> attaching a dump of your --format json osdmap would help too
[22:46] <aarontc> joshd: What's the ceph command to get the osdmap?
[22:47] <joshd> aarontc: ceph osd dump --format json
[22:47] <aarontc> joshd: thanks :)
[22:47] <joshd> yw
[22:50] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[22:52] <aarontc> Should osdmap be attached as a file or in <pre>?
[22:54] * sarob_ (~sarob@2001:4998:effd:600:a507:8f4c:387b:a43b) Quit (Remote host closed the connection)
[22:54] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:57] <mjevans> ARG that's got to be a bug in gdisk... I cannot believe that the uuid of /non-edited/ partitions is replaced when you edit other partitions.
[22:57] <mikedawson> joshd: do you need anything more from me besides that coredump?
[22:57] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) Quit (Read error: Connection reset by peer)
[22:58] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) has joined #ceph
[23:02] * sarob (~sarob@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[23:09] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) Quit (Quit: Ex-Chat)
[23:09] <aarontc> lol, filestore=20 sure produces a lot of logging
[23:10] * andelhie_ (~Gamekille@128-107-239-235.cisco.com) Quit (Quit: This computer has gone to sleep)
[23:11] * andelhie_ (~Gamekille@128-107-239-235.cisco.com) has joined #ceph
[23:14] <mjevans> Fruit: FYI, I've decided to use the disk World Wide ID mapping (plus part info): /dev/disk/by-id/wwn-* given that the currently available tools do not appear to preserve GPT partition UUIDs when editing a disk.
[23:15] <joshd> mikedawson: I don't think I'll need anything else right now - need to look into more race conditions. I'll let you know when I have a potential fix
[23:16] <mikedawson> joshd: thanks. If you'd like me to run a custom build to help with debugging, let me know.
[23:17] * thomnico (~thomnico@c-217-115-42-157.cust.bredband2.com) has joined #ceph
[23:19] <Fruit> mjevans: I see. I chose labels because that makes it easier to replace failed disks
[23:19] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[23:23] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[23:23] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[23:25] <Fruit> mjevans: also, (s)gdisk preserves uuids (and you can assign arbitrary ones too)
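A sketch of pinning a partition GUID explicitly with sgdisk, for partition 1 of a hypothetical /dev/sdb:

    sgdisk --partition-guid=1:$(uuidgen) /dev/sdb    # or -u 1:<GUID>
    sgdisk --info=1 /dev/sdb                         # verify the unique GUID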
[23:25] * leochill (~leochill@nyc-333.nycbit.com) has joined #ceph
[23:26] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[23:27] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[23:31] <mjevans> Fruit: might depend on a newer version of gdisk; gdisk /is/ what I just used and the labels did change for my version.
[23:36] * mattt (~textual@S010690724001c795.vc.shawcable.net) has joined #ceph
[23:38] * mattt (~textual@S010690724001c795.vc.shawcable.net) Quit ()
[23:39] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[23:44] <KaZeR> 20633 MB used, 11874 GB / 11895 GB avail : woohoo!
[23:45] <KaZeR> it works LD
[23:45] * vata (~vata@2607:fad8:4:6:81f2:1b16:4789:7354) Quit (Quit: Leaving.)
[23:46] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:47] <vilobhmm> hi
[23:48] <vilobhmm> I was trying to use Ceph RBD but am facing a few issues while creating a raw device using qemu-img
[23:48] <vilobhmm> on the CEPH storage
[23:48] <vilobhmm> need some help
[23:50] <vilobhmm> elder, nhm : you there ?
[23:51] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[23:51] * ChanServ sets mode +v andreask
[23:55] * sarob (~sarob@2001:4998:effd:600:d013:9612:c196:766e) has joined #ceph
[23:57] <mjevans> vilobhmm: more specific
[23:57] <vilobhmm> hi mjevans
[23:57] <vilobhmm> I am using steps mentioned at http://openstack.redhat.com/Using_Ceph_for_Cinder_with_RDO_Havana
[23:57] <mjevans> vilobhmm: what command are you using to create it and what error?
[23:58] <vilobhmm> installed qemu for ceph
[23:58] <mjevans> I've not worked with openstack before; it looked like a data-center scale solution rather than a small-buisness solution.
[23:58] <vilobhmm> but when I try to test whether qemu for ceph works by using qemu-img create, the command just gets stuck
[23:58] <vilobhmm> qemu-img create -f raw rbd:data/foo 1G
[23:58] <vilobhmm> its not openstack problem
[23:59] <vilobhmm> the problem is qemu-img is not recognizing the ceph store
[23:59] <mjevans> vilobhmm: rbd ls -p disk
[23:59] <vilobhmm> qemu-img can create raw images on local storage but not on the CEPH osd pools
[23:59] <mjevans> was foo created?
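A sketch of narrowing down where the hang is, by exercising the same pool with the rbd CLI before involving qemu-img (pool "data" and image "foo" as in the command above):

    rbd -p data ls                      # does plain RBD access to the pool work?
    rbd create data/foo --size 1024     # 1 GB; rbd sizes are given in MB
    rbd info data/foo
    qemu-img info rbd:data/foo          # then retry through qemu's rbd driver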

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.