#ceph IRC Log


IRC Log for 2014-03-26

Timestamps are in GMT/BST.

[0:00] * fedgoat (~fedgoat@cpe-24-28-22-21.austin.res.rr.com) has joined #ceph
[0:01] * ustuehler (~uwe@e179045203.adsl.alicedsl.de) has joined #ceph
[0:01] * nhm_ (~nhm@174-20-103-90.mpls.qwest.net) has joined #ceph
[0:02] <ustuehler> howdy
[0:03] <ustuehler> is it expected that i get "64 pgs degraded" when i change the size of an empty pool from 2 to 3?
[0:03] <lurbs> ponyofdeath: https://ceph.com/docs/master/rbd/rbd-snapshot/
[0:04] * shang (~ShangWu@42-64-43-100.dynamic-ip.hinet.net) Quit (Quit: Ex-Chat)
[0:04] * shang (~ShangWu@42-64-43-100.dynamic-ip.hinet.net) has joined #ceph
[0:04] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[0:05] <dmick> ustuehler: if you didn't also change your crush configuration to map 3 replicas
[0:06] <ustuehler> dmick: i'm using the default map, with min_size 1 and max_size 10
[0:07] <lurbs> Might not be using optimal CRUSH tunables?
[0:07] * fedgoatbah (~fedgoat@cpe-24-28-22-21.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:07] * nhm (~nhm@65-128-159-155.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[0:08] <dmick> ustuehler: that's not what min_size and max_size mean
[0:08] <ustuehler> gaah, it is this: step choose firstn 0 type host
[0:08] <ustuehler> one host is down :)
[0:08] <ustuehler> i only had 3
[0:08] <ustuehler> thanks anyway ;)
[0:09] * shang (~ShangWu@42-64-43-100.dynamic-ip.hinet.net) Quit (Quit: Ex-Chat)
[0:10] <dmick> that'll do it too :)
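For reference, the exchange above reduces to a few commands; a minimal sketch, assuming a hypothetical pool named "data":
    ceph osd pool set data size 3   # raise the replica count from 2 to 3
    ceph -s                         # shows "64 pgs degraded" while the third copy cannot be placed
    ceph osd crush rule dump        # "step choose firstn 0 type host" wants a distinct host per
                                    # replica, so size 3 needs 3 hosts up; min_size/max_size in a
                                    # rule are the pool sizes it applies to, not placement limits
    ceph osd tree                   # confirms which host (and its OSDs) is down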
[0:12] * The_Bishop (~bishop@2001:470:50b6:0:14a8:d133:21b2:88cd) has joined #ceph
[0:13] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[0:24] * markbby (~Adium@168.94.245.4) Quit (Ping timeout: 480 seconds)
[0:28] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[0:29] * fdmanana (~fdmanana@bl5-172-157.dsl.telepac.pt) Quit (Quit: Leaving)
[0:29] * mtanski (~mtanski@69.193.178.202) Quit (Ping timeout: 480 seconds)
[0:32] * AfC (~andrew@2407:7800:400:1011:6e88:14ff:fe33:2a9c) has joined #ceph
[0:35] * Cube (~Cube@66-87-67-4.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[0:44] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[0:51] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[0:52] * sjustwork (~sam@2607:f298:a:607:91fb:e6c2:dd0e:da92) Quit (Quit: Leaving.)
[0:57] * BillK (~BillK-OFT@58-7-115-16.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[0:59] * JoeGruher (~JoeGruher@134.134.139.70) Quit (Remote host closed the connection)
[1:00] * bitblt (~don@128-107-239-233.cisco.com) Quit (Read error: Connection reset by peer)
[1:02] * BillK (~BillK-OFT@124-148-91-138.dyn.iinet.net.au) has joined #ceph
[1:15] * xarses_ (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:28] * h6w (~tudor@254.86.96.58.static.exetel.com.au) has left #ceph
[1:29] * diegows_ (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[1:29] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[1:32] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[1:37] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:42] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[1:45] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[1:50] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[1:50] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[1:58] * joao|lap (~JL@a95-92-33-54.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[1:59] * jtaguinerd (~Adium@112.205.12.151) has joined #ceph
[2:02] * garphy is now known as garphy`aw
[2:02] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[2:02] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:07] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit (Quit: mtanski)
[2:09] * JeffK (~Jeff@38.99.52.10) Quit (Quit: JeffK)
[2:10] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[2:12] * Cube (~Cube@66-87-67-32.pools.spcsdns.net) has joined #ceph
[2:12] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[2:14] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[2:14] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:18] * AfC (~andrew@2407:7800:400:1011:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[2:18] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[2:19] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:21] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[2:31] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Quit: Leaving.)
[2:34] * Lea (~LeaChim@host86-159-235-225.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:46] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit (Quit: Textual IRC Client: www.textualapp.com)
[2:46] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) has joined #ceph
[2:58] * garphy`aw is now known as garphy
[2:59] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:02] * garphy is now known as garphy`aw
[3:03] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[3:03] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[3:03] * Boltsky (~textual@office.deviantart.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[3:09] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[3:11] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:13] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:13] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[3:13] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit ()
[3:19] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[3:27] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[3:27] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[3:28] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:30] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit ()
[3:30] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[3:34] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (Quit: leaving)
[3:36] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:37] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[3:37] * ustuehler (~uwe@e179045203.adsl.alicedsl.de) has left #ceph
[3:39] * wrale (~wrale@cpe-107-9-20-3.woh.res.rr.com) has joined #ceph
[3:51] * `jpg (~josephgla@ppp121-44-146-74.lns20.syd7.internode.on.net) has joined #ceph
[3:53] * mattt (~textual@S010690724001c795.vc.shawcable.net) has joined #ceph
[4:01] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:04] * mattt (~textual@S010690724001c795.vc.shawcable.net) Quit (Quit: Computer has gone to sleep.)
[4:06] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Leaving...)
[4:17] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[4:20] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has left #ceph
[4:20] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[4:20] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[4:24] * The_Bishop (~bishop@2001:470:50b6:0:14a8:d133:21b2:88cd) Quit (Ping timeout: 480 seconds)
[4:25] * The_Bishop (~bishop@2001:470:50b6:0:14a8:d133:21b2:88cd) has joined #ceph
[4:27] * shahrzad (~sherry@mike-alien.esc.auckland.ac.nz) Quit (Remote host closed the connection)
[4:27] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[4:28] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[4:33] * shang (~ShangWu@112.96.168.34) has joined #ceph
[4:35] * shang (~ShangWu@112.96.168.34) Quit ()
[4:36] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:43] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[4:48] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[4:51] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit (Quit: mtanski)
[5:01] * zack_dolby (~textual@ai126213138200.5.tss.access-internet.ne.jp) has joined #ceph
[5:03] * Vacum (~vovo@88.130.203.180) has joined #ceph
[5:10] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:10] * Vacum_ (~vovo@88.130.206.247) Quit (Ping timeout: 480 seconds)
[5:24] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) Quit (Read error: Connection reset by peer)
[5:27] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[5:28] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[5:34] * zack_dolby (~textual@ai126213138200.5.tss.access-internet.ne.jp) Quit (Ping timeout: 480 seconds)
[5:38] * zack_dolby (~textual@ai126213138200.5.tss.access-internet.ne.jp) has joined #ceph
[5:49] * hasues (~hazuez@112.sub-174-237-7.myvzw.com) has joined #ceph
[5:51] * Pedras1 (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:55] * wrale (~wrale@cpe-107-9-20-3.woh.res.rr.com) Quit (Quit: Leaving...)
[5:57] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[6:04] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[6:09] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:14] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[6:27] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[6:27] * zack_dolby (~textual@ai126213138200.5.tss.access-internet.ne.jp) Quit (Ping timeout: 480 seconds)
[6:32] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[6:35] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:39] * zack_dolby (~textual@pw126205088201.3.panda-world.ne.jp) has joined #ceph
[6:51] * hasues (~hazuez@112.sub-174-237-7.myvzw.com) Quit (Quit: Leaving.)
[6:53] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[6:55] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit ()
[6:59] * zack_dol_ (~textual@ai126213138200.5.tss.access-internet.ne.jp) has joined #ceph
[7:03] * zack_dolby (~textual@pw126205088201.3.panda-world.ne.jp) Quit (Read error: Connection reset by peer)
[7:07] * zack_dol_ (~textual@ai126213138200.5.tss.access-internet.ne.jp) Quit (Ping timeout: 480 seconds)
[7:07] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[7:10] * zack_dolby (~textual@pw126255077086.9.panda-world.ne.jp) has joined #ceph
[7:14] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[7:14] * zack_dolby (~textual@pw126255077086.9.panda-world.ne.jp) Quit (Read error: Connection reset by peer)
[7:16] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[7:24] * zack_dolby (~textual@ai126212132000.5.tik.access-internet.ne.jp) has joined #ceph
[7:26] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[7:29] * Cube (~Cube@66-87-67-32.pools.spcsdns.net) Quit (Quit: Leaving.)
[7:33] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) has joined #ceph
[7:33] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[7:37] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[7:37] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[7:45] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:07] * jtaguinerd (~Adium@112.205.12.151) Quit (Quit: Leaving.)
[8:07] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[8:19] <aarontc> hmm I have an interesting problem... if I lose a single OSD on a single host, with two pools (min_size 2 and 3), I have a bunch of pgs go 'incomplete'
[8:20] <aarontc> that doesn't seem quite right
[8:21] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:28] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:49] * analbeard (~shw@141.0.32.124) has joined #ceph
[8:59] * tim|mint (~tim@5419C03B.cm-5-2d.dynamic.ziggo.nl) has joined #ceph
[9:00] * garphy`aw is now known as garphy
[9:09] * andreask (~andreask@213.150.31.3) has joined #ceph
[9:09] * ChanServ sets mode +v andreask
[9:13] <tim|mint> hi, what problems can one expect when running ceph with btrfs on a very recent kernel (3.13)? anyone doing it already?
[9:13] <tim|mint> I'm mainly wondering if the current 'btrfs is experimental' state is perfectionism from the creators or actually causes issues :)
[9:14] <Gugge-47527> as far as i know there are still performance problems with btrfs
[9:15] <Gugge-47527> after some time, the performance drops
[9:16] <tim|mint> hm ok
[9:19] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Read error: Connection reset by peer)
[9:26] * andreask (~andreask@213.150.31.3) Quit (Ping timeout: 480 seconds)
[9:33] <tim|mint> is running the osd journal on ssd worth it? also, in a large machine, would you create a raid1 with ssds and have multiple journals on there?
[9:41] * thomnico (~thomnico@2a01:e35:8b41:120:51bd:4913:9399:150b) has joined #ceph
[9:43] * zack_dolby (~textual@ai126212132000.5.tik.access-internet.ne.jp) Quit (Ping timeout: 480 seconds)
[9:44] * sleinen (~Adium@2001:620:0:26:590:e67a:c472:c9d8) has joined #ceph
[9:45] * jtaguinerd (~Adium@121.54.44.153) has joined #ceph
[9:48] * cfreak201 (~cfreak200@p4FF3FC84.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[10:06] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[10:10] * oro (~oro@2001:620:20:16:c901:fc01:9cf9:26e) has joined #ceph
[10:11] * skeenan (~Adium@8.21.68.242) has joined #ceph
[10:23] * Lea (~LeaChim@host86-159-235-225.range86-159.btcentralplus.com) has joined #ceph
[10:28] * allsystemsarego (~allsystem@5-12-37-194.residential.rdsnet.ro) has joined #ceph
[10:37] * isodude (~isodude@kungsbacka.oderland.com) Quit (Remote host closed the connection)
[10:39] * isodude (~isodude@kungsbacka.oderland.com) has joined #ceph
[10:47] * longnv (~longnv@123.30.135.76) has joined #ceph
[10:50] <jerker> tim|mint: I have only tried running with the journal on SSD. But one has to be aware that if those mirrored SSDs go down, the whole node goes down as well. I'm searching for small, cheap 4 SATA + 1 SSD boxes but I have not really found any I like. :/
[10:50] <longnv> Hi
[10:50] <tim|mint> jerker: that's why i was thinking about raid1 ssds
[10:51] <tim|mint> at least one can fail without breaking all the osds on the machine
[10:52] <jerker> tim|mint: yes. And that is why I am thinking of smaller nodes :) Also, with smaller nodes (2-4 drives) gigabit or bonded gigabit is ok, with large nodes 10G is needed to be able to saturate network. And in our setup they are quite expensive.
[10:52] <tim|mint> and I'm thinking about pretty big boxes... like 8 slots... 2 for ssd and 6 for storage
[10:53] <tim|mint> smaller boxes tend to use more energy for the same functionality and i'd like to keep energy use lowish ;-)
[10:53] <longnv> My Apache responds with HTTP error 500 at about 300 requests/second
[10:54] <jerker> tim|mint: It depends. I would like the HP Microserver Gen8 but rackmount them :( I wish some larger vendor could sell that stuff.
[10:54] <tim|mint> and we're indeed thinking about a 10gb dedicated cluster network (only bonded gigabit on public network, though)
[10:54] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[10:54] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[10:59] <jerker> I am looking at the 12*3.5"+2*2.5" 1U nodes from Supermicro. They sort of require 10G which in our setup makes them quite expensive.
[10:59] <jerker> They are small compared to the 36 (or 72) disk 4U boxes. :)
[11:03] * The_Bishop (~bishop@2001:470:50b6:0:14a8:d133:21b2:88cd) Quit (Remote host closed the connection)
[11:04] * thb (~me@port-7798.pppoe.wtnet.de) has joined #ceph
[11:04] * thb is now known as Guest4432
[11:05] * Guest4432 is now known as thb
[11:05] * jtang1 (~jtang@outbound.ladiesgaelic.ie) has joined #ceph
[11:13] <tim|mint> hm that's interesting as well
[11:14] * sleinen (~Adium@2001:620:0:26:590:e67a:c472:c9d8) Quit (Ping timeout: 480 seconds)
[11:15] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[11:17] * sleinen (~Adium@2001:620:0:46:21db:dec7:512f:7347) has joined #ceph
[11:18] * zack_dolby (~textual@ai126194007171.1.tss.access-internet.ne.jp) has joined #ceph
[11:24] * jtang1 (~jtang@outbound.ladiesgaelic.ie) Quit (Quit: Leaving.)
[11:24] <classicsnail> I've got a bunch of the 36 and 72 disk boxes
[11:25] <classicsnail> if I were to do it all over again, I'd go the 12-16 disk boxes with 10G
[11:26] <classicsnail> the Intel UIO 10G interfaces from Supermicro are so dirt cheap it's almost embarrassing
[11:27] <tim|mint> it's generally not the nics that are expensive, but the switches :)
[11:27] <classicsnail> 10G SFP+ chassis aren't that expensive
[11:27] <classicsnail> and twinax and flexoptics SFP+ modules are cheap enough
[11:27] <classicsnail> the problem with the 36 and 72 disk chassis is that 72 disks is actually a lot of io capability
[11:28] <singler> classicsnail: why do you prefer 12-16 disk boxes vs 36? I am planning to start using 36 disk boxes
[11:28] <classicsnail> and I'm in the process of sorting out connectivity at 2 x 40 G for the chassis
[11:28] <classicsnail> when a chassis comes up, and the rebuilds and rebalancing occurs, it'll flat line the 10G links trivially
[11:28] <classicsnail> it's too "Dense", for want of a better description
[11:29] <classicsnail> the 72 disk chassis have 3TB disks in them, which makes for a 2.7 day full transfer
[11:29] <classicsnail> to fill the 216TB
[11:30] <classicsnail> I have multiple 100GigE between various DCs, but it's worthless to me as 100G server adapters are incredibly expensive, and because I have fewer chassis than I should, my rebuild speed is really limited at that 10G
[11:30] <classicsnail> not multiples of
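The 2.7 day figure is consistent with a single saturated 10G link; roughly, assuming about 0.9 GB/s of effective throughput out of the 1.25 GB/s wire speed:
    72 disks x 3 TB = 216 TB
    216e12 B / 0.9e9 B/s = 240,000 s = ~2.8 days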
[11:30] <Serbitar> i guess you just hand ceph the individual disks?
[11:31] <classicsnail> yes, I have 72 osds in each chassis
[11:31] <classicsnail> plus the two system disks
[11:32] <classicsnail> at the time, the 72 disk chassis were a good solution for how I intended to use them
[11:32] <classicsnail> how I'm using them has changed, and really does reveal the capacity problem
[11:32] <classicsnail> I'm organising dark fibre between two DCs atm so I can
[11:33] <classicsnail> mux 40G or multiple 10s in a LAG between sides, multiple 10s in each chassis
[11:33] <classicsnail> and of course, losing a chassis has a lot more impact on the total osd count than a smaller chassis
[11:35] * BillK (~BillK-OFT@124-148-91-138.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[11:35] <tim|mint> interesting... so using smaller machines is actually preferred... too bad it wreaks havoc on the power bill :S
[11:36] <tim|mint> classicsnail: do you work with ssds as well?
[11:36] * fatih (~fatih@78.186.36.182) has joined #ceph
[11:37] * BillK (~BillK-OFT@106-69-183-243.dyn.iinet.net.au) has joined #ceph
[11:38] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[11:40] <classicsnail> I don't use ssd journals, no
[11:40] <classicsnail> 72 x 300 MB/s into ~ 720GB of journals, is a lot of journal ;)
[11:40] * zack_dolby (~textual@ai126194007171.1.tss.access-internet.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:41] <tim|mint> indeed :)
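For anyone following along, an SSD journal is set per OSD in ceph.conf; a minimal sketch with a hypothetical partition path:
    [osd.0]
        osd journal = /dev/sdb1    # hypothetical SSD partition dedicated to this OSD's journal
        osd journal size = 10240   # MB; rule of thumb: 2 * expected throughput * filestore max sync interval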
[11:43] * zack_dolby (~textual@ai126194007171.1.tss.access-internet.ne.jp) has joined #ceph
[11:43] <classicsnail> I've seen some roadmaps lately from vendors who are aware of the power bill issue with multiple chassis
[11:44] <classicsnail> I'm seriously rethinking some of my ongoing designs
[11:44] * longnv (~longnv@123.30.135.76) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[12:00] <jtaguinerd> hi guys, quick question: i am trying to mount a format 2 rbd from one of my machines, but I am getting rbd: add failed: No such device or address
[12:01] <jtaguinerd> what could be the problem? I am pretty sure I have the right ceph.conf on my machine as I can do ceph -s and it gives me a result
[12:05] <jtaguinerd> Is it because my kernel, which is 3.8, does not support mapping of rbd format 2 yet?
[12:07] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:08] <singler> http://ceph.com/docs/master/man/8/rbd/ format 2 - Use the second rbd format, which is supported by librbd (but not the kernel rbd module) at this time
[12:08] <singler> I guess you cannot mount it
[12:09] <jtaguinerd> i see thanks singler
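The limitation jtaguinerd hit is easy to reproduce; pool/image names here are hypothetical, and the flag spelling varies by release (--format vs --image-format):
    rbd create mypool/myimage --size 1024 --format 2   # usable via librbd (qemu etc.)
    rbd map mypool/myimage                             # fails on a 3.8 kernel:
                                                       # rbd: add failed: No such device or address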
[12:11] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[12:11] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Ping timeout: 480 seconds)
[12:12] * jtang1 (~jtang@outbound.ladiesgaelic.ie) has joined #ceph
[12:13] * jtang1 (~jtang@outbound.ladiesgaelic.ie) Quit ()
[12:14] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[12:14] * leseb (~leseb@185.21.172.77) has joined #ceph
[12:15] * jtang1 (~jtang@outbound.ladiesgaelic.ie) has joined #ceph
[12:17] * jtang1 (~jtang@outbound.ladiesgaelic.ie) Quit ()
[12:23] * jtang1 (~jtang@outbound.ladiesgaelic.ie) has joined #ceph
[12:25] * jtang1 (~jtang@outbound.ladiesgaelic.ie) Quit ()
[12:33] * fdmanana (~fdmanana@bl5-172-157.dsl.telepac.pt) has joined #ceph
[12:38] * i_m (~ivan.miro@gbibp9ph1--blueice4n2.emea.ibm.com) has joined #ceph
[12:39] * i_m (~ivan.miro@gbibp9ph1--blueice4n2.emea.ibm.com) Quit ()
[12:40] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) has joined #ceph
[12:41] * jtang1 (~jtang@outbound.ladiesgaelic.ie) has joined #ceph
[12:43] * jtang1 (~jtang@outbound.ladiesgaelic.ie) Quit ()
[12:46] * fireD (~fireD@93-139-169-67.adsl.net.t-com.hr) has joined #ceph
[12:47] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[12:47] <jerker> classicsnail: How much extra power does an idle chassis use? Power from CPU and drives is more or less the same, I would have guessed, using 3 boxes with 1 CPU with 4 cores each at 3.0 GHz (12 drives) compared to 1 box with 2 CPUs with 6 cores at 3.0 GHz (36 drives), if one assumes 1 GHz of CPU per disk/OSD.
[12:50] <classicsnail> my 72 disks chassis using wd re enterprise drives, two E5-2670s, ~ 850 watts idle
[12:50] <classicsnail> set everything going, it'll spike to 1.2 - 1.3kw
[12:51] * jtaguinerd1 (~Adium@121.54.32.136) has joined #ceph
[12:51] <classicsnail> each drive is 4.5 watts while operating
[12:51] <classicsnail> not sure where all that extra power comes from to be honest, 72 x 4.5 != 850 w
[12:52] <classicsnail> may have to go remeasure that now I think about it, as I'm taking that from the bmc, and I suspect it's actually wrong
[12:52] * jtang1 (~jtang@outbound.ladiesgaelic.ie) has joined #ceph
[12:52] * jtaguinerd (~Adium@121.54.44.153) Quit (Read error: Connection reset by peer)
[12:54] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[12:55] * BillK (~BillK-OFT@106-69-183-243.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[12:56] * oro (~oro@2001:620:20:16:c901:fc01:9cf9:26e) Quit (Ping timeout: 480 seconds)
[12:57] * jtang1 (~jtang@outbound.ladiesgaelic.ie) Quit ()
[13:04] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[13:05] * oro (~oro@2001:620:20:222:c0c7:3647:c369:d16b) has joined #ceph
[13:05] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[13:08] * jtang1 (~jtang@outbound.ladiesgaelic.ie) has joined #ceph
[13:08] * jtang1 (~jtang@outbound.ladiesgaelic.ie) Quit ()
[13:16] <glambert> wrencsok, did you find a solution for your problem with stats coming from the radosgw?
[13:26] * syed_ (~chatzilla@125.63.98.129) has joined #ceph
[13:30] * ksingh (~Adium@2001:708:10:10:c896:e2fb:2a8f:f5bb) has joined #ceph
[13:30] <ksingh> [root@bmi-pocfe1 ceph]# ceph-deploy mds destroy storage0105-ib
[13:30] <ksingh> [ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy mds destroy storage0105-ib
[13:30] <ksingh> [ceph_deploy.mds][ERROR ] subcommand destroy not implemented
[13:30] <ksingh> [root@bmi-pocfe1 ceph]#
[13:31] <ksingh> has anyone seen this?
[13:33] <ksingh> can we remove an MDS ???
[13:40] <syed_> ksingh: ceph-deploy mds destroy {host-name}[:{daemon-name}]
[13:42] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[13:42] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[13:42] <ksingh> syed_ : what is the daemon-name
[13:42] <ksingh> here
[13:45] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) has joined #ceph
[13:46] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[13:46] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[13:46] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[13:47] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[13:47] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit ()
[13:47] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[13:48] <syed_> well actually, daemon name is optional but i am not sure if the destroy subcommand is available for mds
[13:48] <ksingh> yeah, looks like it's not yet implemented
[13:48] <ksingh> it's not even documented
[13:48] <syed_> http://eu.ceph.com/docs/v0.61/rados/deployment/ceph-deploy-mds/
[13:49] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[13:50] <ksingh> oops, i was looking at another link http://ceph.com/docs/master/rados/deployment/ceph-deploy-mds/
[13:50] <ksingh> … it says coming soon
[13:50] <syed_> Yes, i saw that.
[13:50] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[13:53] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) Quit (Read error: Operation timed out)
[13:53] <syed_> ksingh: nice blog, btw
[13:53] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[13:54] <ksingh> which one ?
[13:54] <Svedrin> what are pools used for? when would I want to create new pools?
[13:56] <isodude> Regarding, http://tracker.ceph.com/issues/2305 , any updates on this?
[13:56] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[13:57] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[13:57] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[13:59] <syed_> Svedrin: Pool contain objects, read here for more http://www.sebastien-han.fr/blog/2012/10/15/ceph-data-placement/
[14:01] <Svedrin> syed_, but when do I need a new one? like, for tiering? or for separating data from multiple users?
[14:05] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[14:05] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) has joined #ceph
[14:09] <syed_> Svedrin: separating data from multiple users
[14:09] * Svedrin (svedrin@ketos.funzt-halt.net) has left #ceph
[14:09] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[14:10] <Svedrin> syed_, ok, so how do I tell mount.ceph or ceph-fuse which pool to use?
[14:10] <Svedrin> do they even need to know?
[14:11] * cfreak200 (~cfreak200@p4FF3F177.dip0.t-ipconnect.de) has joined #ceph
[14:13] * powhihs (~hjg@0001c8bd.user.oftc.net) has joined #ceph
[14:13] <powhihs> hi
[14:16] * ksingh (~Adium@2001:708:10:10:c896:e2fb:2a8f:f5bb) has left #ceph
[14:18] <powhihs> i got a ceph cluster with no ssd's, and journaling on OSD's
[14:18] <syed_> Svedrin: if you want to mount a particular pool with cephfs , you can do something like mount -t ceph 1.2.3.4:/ /srv/mds/pools/<your pool>
[14:18] <powhihs> if i were to upgrade the hardware, in a way of adding SSD's
[14:18] <powhihs> which way should i go first
[14:19] <Svedrin> syed_, but wouldn't that mount the entire thing? doesn't the pool show up in the "1.2.3.4:/" part somewhere?
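Svedrin's instinct is right: the mount path does not select a pool. Directing a directory's files to another pool is done with file layouts instead; a sketch with a hypothetical pool "mypool" (on some versions the xattr wants the pool id rather than its name):
    ceph osd pool create mypool 128
    ceph mds add_data_pool mypool       # register the pool with the filesystem
    setfattr -n ceph.dir.layout.pool -v mypool /mnt/cephfs/mydir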
[14:19] <powhihs> creating OSD's with journaling on SSD, or implementing cache pool with SSD's
[14:21] <powhihs> I understand that it is best to have both OSD journaling on SSD and a cache pool, but if i cannot afford both, which one should i choose?
[14:21] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) has joined #ceph
[14:24] * zack_dolby (~textual@ai126194007171.1.tss.access-internet.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[14:25] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[14:27] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[14:30] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[14:31] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[14:31] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) has joined #ceph
[14:32] * syed_ (~chatzilla@125.63.98.129) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 27.0.1/20140212131424])
[14:36] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) has joined #ceph
[14:44] * valeech (~valeech@ip72-205-7-86.dc.dc.cox.net) has joined #ceph
[14:46] * markbby (~Adium@168.94.245.1) has joined #ceph
[14:49] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[14:53] * tryggvil (~tryggvil@178.19.53.254) Quit ()
[14:53] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[14:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[14:56] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[14:57] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) Quit (Quit: quit)
[14:57] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) has joined #ceph
[15:00] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:02] * brambles (lechuck@s0.barwen.ch) has joined #ceph
[15:03] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) has joined #ceph
[15:04] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) has left #ceph
[15:07] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:07] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[15:07] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) has joined #ceph
[15:08] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[15:11] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[15:13] * diegows_ (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[15:23] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[15:23] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:26] * valeech (~valeech@ip72-205-7-86.dc.dc.cox.net) Quit (Quit: valeech)
[15:28] * theactualwarrenusui (~Warren@2607:f298:a:607:4c3f:82e:add5:b567) Quit (Read error: Connection reset by peer)
[15:29] * theactualwarrenusui (~Warren@2607:f298:a:607:4c3f:82e:add5:b567) has joined #ceph
[15:31] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:32] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[15:36] * dvanders (~dvanders@pb-d-128-141-133-146.cern.ch) has joined #ceph
[15:37] * timidshark (~timidshar@70-88-62-73-fl.naples.hfc.comcastbusiness.net) has joined #ceph
[15:38] * dvanders (~dvanders@pb-d-128-141-133-146.cern.ch) Quit ()
[15:44] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) has joined #ceph
[15:44] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) Quit ()
[15:46] * valeech (~valeech@ip72-205-7-86.dc.dc.cox.net) has joined #ceph
[15:55] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[15:58] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[16:00] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[16:01] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) has joined #ceph
[16:02] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[16:02] * timidshark (~timidshar@70-88-62-73-fl.naples.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[16:05] * Fruit (wsl@2001:981:a867:2:216:3eff:fe10:122b) has joined #ceph
[16:06] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[16:06] <Fruit> crushmap question: how do I tell ceph to never store two copies in one bucket, even if it's the only bucket available?
[16:07] * valeech (~valeech@ip72-205-7-86.dc.dc.cox.net) Quit (Quit: valeech)
[16:10] * valeech (~valeech@ip72-205-7-86.dc.dc.cox.net) has joined #ceph
[16:15] <nrs_> abandon all hope Fruit: they don't answer any questions in this channel
[16:16] <Fruit> so it appears :)
[16:18] * bitblt (~don@128-107-239-233.cisco.com) has joined #ceph
[16:20] <fghaas> Fruit: "bucket" is a generic term in crush that can apply to devices, hosts, racks or any other hierarchical level
[16:21] <Fruit> yes
[16:21] <fghaas> so the standard crushmap rule that makes sure no two replicas can go on the same host is an example of that.
[16:21] <fghaas> feel free to take a look at that and extrapolate from there
[16:23] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Ping timeout: 480 seconds)
[16:28] * danieagle (~Daniel@179.176.52.184.dynamic.adsl.gvt.net.br) has joined #ceph
[16:30] * JeffK (~Jeff@38.99.52.10) has joined #ceph
[16:31] <Fruit> fghaas: well that's what I'm using, but if I mark all of the OSDs on the other host as "out" I see ceph copying everything to the remaining host
[16:31] <Fruit> http://sprunge.us/DbQD is the rule I use, for the record
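For comparison, the stock rule fghaas mentions looks like the following; with "chooseleaf ... type host", CRUSH fails to find a second location rather than doubling up on one host, leaving the PGs degraded instead:
    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }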
[16:31] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[16:33] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) has joined #ceph
[16:34] * oro (~oro@2001:620:20:222:c0c7:3647:c369:d16b) Quit (Quit: Leaving)
[16:39] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[16:42] * sprachgenerator (~sprachgen@130.202.135.211) has joined #ceph
[16:42] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) Quit (Quit: Leaving.)
[16:45] * joef (~Adium@2620:79:0:131:6976:4a68:298:44d) Quit (Remote host closed the connection)
[16:47] * powhihs (~hjg@0001c8bd.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:47] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[16:54] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[16:54] * thanhtran (~thanhtran@113.173.250.160) has joined #ceph
[16:54] * thanhtran (~thanhtran@113.173.250.160) Quit ()
[16:55] * thanhtran (~thanhtran@113.173.250.160) has joined #ceph
[16:57] * thanhtran_ (~thanhtran@113.173.250.160) has joined #ceph
[16:57] * thanhtran_ (~thanhtran@113.173.250.160) Quit ()
[17:00] * joao|lap (~JL@a95-92-33-54.cpe.netcabo.pt) has joined #ceph
[17:00] * ChanServ sets mode +o joao|lap
[17:00] * wrale (~wrale@wrk-28-217.cs.wright.edu) has joined #ceph
[17:08] * thomnico (~thomnico@2a01:e35:8b41:120:51bd:4913:9399:150b) Quit (Quit: Ex-Chat)
[17:12] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[17:13] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[17:13] * joshd1 (~joshd@2602:306:c5db:310:c13:db17:6eb:f258) has joined #ceph
[17:26] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) Quit (Quit: Leaving.)
[17:27] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) has joined #ceph
[17:29] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[17:30] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[17:32] * leseb (~leseb@185.21.172.77) Quit (Killed (NickServ (Too many failed password attempts.)))
[17:32] * leseb (~leseb@185.21.172.77) has joined #ceph
[17:33] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) has joined #ceph
[17:34] * jtaguinerd1 (~Adium@121.54.32.136) Quit (Quit: Leaving.)
[17:35] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:36] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[17:36] * thomnico (~thomnico@2a01:e35:8b41:120:9417:9daa:2502:4e1e) has joined #ceph
[17:44] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:47] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:48] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[17:51] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[17:53] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[17:54] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) has joined #ceph
[17:54] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[17:56] * JoeGruher (~JoeGruher@134.134.139.76) has joined #ceph
[17:56] * rmoe (~quassel@12.164.168.117) has joined #ceph
[17:57] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[18:02] * angdraug (~angdraug@12.164.168.117) Quit (Ping timeout: 480 seconds)
[18:04] * b0e (~aledermue@juniper1.netways.de) Quit (Remote host closed the connection)
[18:06] * joshd1 (~joshd@2602:306:c5db:310:c13:db17:6eb:f258) Quit (Quit: Leaving.)
[18:08] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[18:14] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:16] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[18:18] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[18:19] <mjevans> http://ceph.com/docs/master/install/manual-deployment/ : in the section on manually adding OSDs it talks about /var/lib/ceph/bootstrap-osd/{cluster}.keyring; however, there is no provided/linked documentation on how this keyring should be created.
[18:19] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[18:20] * joshd1 (~joshd@2602:306:c5db:310:f0c5:fc16:f8fe:a6bf) has joined #ceph
[18:24] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[18:28] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:30] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[18:31] <skeenan> mjevans: ceph auth get-or-create-key client.bootstrap-osd mon 'allow profile bootstrap-osd'
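Written out against the path the manual-deployment doc expects, skeenan's command becomes something like this (default cluster name "ceph" assumed):
    mkdir -p /var/lib/ceph/bootstrap-osd
    ceph auth get-or-create client.bootstrap-osd mon 'allow profile bootstrap-osd' \
        -o /var/lib/ceph/bootstrap-osd/ceph.keyring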
[18:31] * analbeard (~shw@141.0.32.124) Quit (Quit: Leaving.)
[18:31] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[18:35] * thanhtran (~thanhtran@113.173.250.160) Quit (Read error: Connection reset by peer)
[18:38] * thanhtran (~thanhtran@113.172.159.19) has joined #ceph
[18:38] * bboris (~boris@router14.mail.bg) has joined #ceph
[18:39] <bboris> hi
[18:39] <bboris> anyone online?
[18:39] * JoeGruher (~JoeGruher@134.134.139.76) Quit (Remote host closed the connection)
[18:40] * sleinen (~Adium@2001:620:0:46:21db:dec7:512f:7347) Quit (Quit: Leaving.)
[18:40] * Pedras (~Adium@216.207.42.132) has joined #ceph
[18:41] * analbeard (~shw@host31-53-108-38.range31-53.btcentralplus.com) has joined #ceph
[18:43] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) has joined #ceph
[18:43] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[18:44] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) Quit (Remote host closed the connection)
[18:44] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) has joined #ceph
[18:44] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) Quit ()
[18:50] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:51] * analbeard (~shw@host31-53-108-38.range31-53.btcentralplus.com) has left #ceph
[18:51] * garphy is now known as garphy`aw
[18:51] * theactualwarrenusui (~Warren@2607:f298:a:607:4c3f:82e:add5:b567) Quit (Read error: Connection reset by peer)
[18:51] * theactualwarrenusui (~Warren@2607:f298:a:607:4c3f:82e:add5:b567) has joined #ceph
[18:54] * Cube (~Cube@66-87-67-14.pools.spcsdns.net) has joined #ceph
[18:55] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) has joined #ceph
[18:57] * sroy (~sroy@207.96.182.162) has joined #ceph
[18:57] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[19:00] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[19:00] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[19:01] * Haksoldier (~islamatta@88.234.59.132) has joined #ceph
[19:01] <Haksoldier> EUZUBILLAHIMINEŞŞEYTANIRRACIM BISMILLAHIRRAHMANIRRAHIM
[19:01] <Haksoldier> ALLAHU EKBERRRRR! LA İLAHE İLLALLAH MUHAMMEDEN RESULULLAH!
[19:01] <Haksoldier> I did the obligatory prayers five times a day to the nation. And I promised myself that, who (beside me) taking care not to make the five daily prayers comes ahead of time, I'll put it to heaven. Who says prayer does not show attention to me I do not have a word for it.! Prophet Muhammad (s.a.v.)
[19:01] <Haksoldier> hell if you did until the needle tip could not remove your head from prostration Prophet Muhammad pbuh
[19:01] * Haksoldier (~islamatta@88.234.59.132) Quit (Remote host closed the connection)
[19:06] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:06] * thomnico (~thomnico@2a01:e35:8b41:120:9417:9daa:2502:4e1e) Quit (Quit: Ex-Chat)
[19:08] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) Quit (Ping timeout: 480 seconds)
[19:14] * l3iggs (~oftc-webi@c-24-130-225-173.hsd1.ca.comcast.net) has joined #ceph
[19:17] * thanhtran (~thanhtran@113.172.159.19) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[19:18] * leochill (~leochill@nyc-333.nycbit.com) Quit (Remote host closed the connection)
[19:19] * joao|lap (~JL@a95-92-33-54.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[19:21] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[19:22] * leochill (~leochill@nyc-333.nycbit.com) has joined #ceph
[19:25] * joao|lap (~JL@a95-92-33-54.cpe.netcabo.pt) has joined #ceph
[19:25] * ChanServ sets mode +o joao|lap
[19:26] <l3iggs> can anyone point me to a place I can read about recovering from a ceph-mds hardware failure?
[19:28] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Read error: Connection reset by peer)
[19:32] * mozg (~andrei@213.205.227.22) has joined #ceph
[19:32] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Leaving...)
[19:36] <iggy> anybody know offhand if openstack using ceph uses the qemu built-in rbd driver or the kernel rbd driver?
[19:38] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[19:40] <Fruit> iggy: qemu builtin
[19:40] <iggy> thanks
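For reference, "qemu built-in" means the disk is handed to qemu as an rbd: URL and served by librbd in-process, with no /dev/rbd device on the host; a sketch with hypothetical pool/image names:
    qemu-system-x86_64 -m 1024 \
        -drive format=raw,file=rbd:mypool/myimage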
[19:42] * bboris (~boris@router14.mail.bg) Quit (Quit: leaving)
[19:44] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[19:48] * xarses (~andreww@12.164.168.117) Quit (Quit: Leaving)
[19:49] * mozg (~andrei@213.205.227.22) Quit (Ping timeout: 480 seconds)
[19:54] * danieagle (~Daniel@179.176.52.184.dynamic.adsl.gvt.net.br) Quit (Quit: Muito Obrigado por Tudo! :-))
[19:55] * l3iggs (~oftc-webi@c-24-130-225-173.hsd1.ca.comcast.net) Quit (Quit: Page closed)
[20:00] * ircolle (~Adium@2601:1:8380:2d9:c98b:bf28:250f:9aeb) has joined #ceph
[20:03] <loicd> houkouonchi-work: sorry to bother you again. gitbuilder-ceph-tarball-precise-amd64-basic claims to have sse4.1 instructions (/proc/cpuinfo & cpuid) but ubuntu@gitbuilder-ceph-tarball-precise-amd64-basic:~/loic/ceph/src$ gdb ./unittest_erasure_code_jerasure will "illegal instruction" on vcvtsi2sdl (as shown with layout asm in gdb). Does it ring a bell ? If not I'll keep looking, no worries ;-)
[20:04] <loicd> houkouonchi-work: I'm trying to figure out why http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-tarball-precise-amd64-basic/log.cgi?log=8701ccafdc17773fecab2070ad411b682c119b43 fails
[20:04] <loicd> by getting more information (hence the manual compilation)
[20:05] <houkouonchi-work> hmm maybe its something to do with the fact its an lxc container
[20:05] <loicd> ah !
[20:05] <loicd> that would save my day
[20:05] <loicd> these CPU features drive me nuts :-)
[20:06] <loicd> houkouonchi-work: I'll investigate in this direction, thanks for the hint
[20:06] <loicd> bbl
[20:06] <houkouonchi-work> ok i will also look into it a bit on my side as well
[20:06] <houkouonchi-work> since lxc is kind of chrooted i would have expected it to contain all the CPU capabilities of the host machine
[20:07] <houkouonchi-work> loicd: is it working on any of the other gitbuilders? A lot of the debian/ubuntu ones are lxc but all the rpm ones should be vm's
[20:08] <houkouonchi-work> you could also try it manually on the host machine as well (it might need some packages installed in order to compile) but that would at least rule out lxc
[20:08] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[20:10] <houkouonchi-work> loicd: that guest's host machine is vercoi02
[20:10] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[20:11] <mjevans> Is there an easy way of determining which osds a pg is evaluating and if they're selected within a crush map?
[20:15] * bitblt (~don@128-107-239-233.cisco.com) Quit (Quit: Leaving)
[20:16] <mjevans> houkouonchi-work: LXC is the linux equivalent of BSD jails. The capabilities should exactly match the host kernel; however, the inner container probably doesn't have access to real block devices/etc. If you're working with raw storage, that should probably occur outside of your guest scope (in this context think of the guests more like you would an OpenVZ system)
[20:20] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) has joined #ceph
[20:27] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[20:27] * `jpg (~josephgla@ppp121-44-146-74.lns20.syd7.internode.on.net) Quit (Ping timeout: 480 seconds)
[20:32] * JoeGruher (~JoeGruher@134.134.137.75) has joined #ceph
[20:35] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[20:37] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) has joined #ceph
[20:41] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:41] * xarses (~andreww@12.164.168.117) has joined #ceph
[20:46] * thomnico (~thomnico@2a01:e35:8b41:120:9417:9daa:2502:4e1e) has joined #ceph
[20:46] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[20:49] * joao|lap (~JL@a95-92-33-54.cpe.netcabo.pt) Quit (Remote host closed the connection)
[20:57] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) has joined #ceph
[21:04] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:17] * thomnico (~thomnico@2a01:e35:8b41:120:9417:9daa:2502:4e1e) Quit (Quit: Ex-Chat)
[21:21] * BillK (~BillK-OFT@106-69-72-154.dyn.iinet.net.au) has joined #ceph
[21:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:29] * garphy`aw is now known as garphy
[21:30] * valeech (~valeech@ip72-205-7-86.dc.dc.cox.net) Quit (Quit: valeech)
[21:31] * wrale (~wrale@wrk-28-217.cs.wright.edu) Quit (Quit: Leaving)
[21:40] * mozg (~andrei@host86-184-125-218.range86-184.btcentralplus.com) has joined #ceph
[22:02] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[22:03] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:04] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) has joined #ceph
[22:09] <skeenan> hey, I've got a pg marked as incomplete, possibly from dumping a couple of osd's yesterday
[22:10] <skeenan> assuming i lost some data, how would i go about telling ceph that data is really gone? or is there some other way to get it out of the incomplete state
[22:10] <skeenan> osdmap e808: 12 osds: 12 up, 12 in
[22:10] <skeenan> pgmap v208482: 972 pgs: 971 active+clean, 1 incomplete; 109 GB data, 326 GB used, 2084 GB / 2539 GB avail; 5074B/s wr, 1op/s
[22:12] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[22:12] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[22:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[22:16] <lurbs> skeenan: I believe you want: http://ceph.com/docs/master/rados/operations/placement-groups/#revert-lost
[22:17] <skeenan> i tried that...
[22:17] <lurbs> Not sure then, sorry. :-/
[22:18] <skeenan> np, thanks. it says pg has no unfound objects
[22:18] <skeenan> which is baffling considering it thinks the pg is incomplete. my understanding of the incomplete status must be lacking :D
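The usual next step for a PG stuck like this is to query it; a sketch with a hypothetical pg id 2.5:
    ceph health detail                     # names the incomplete PG and its acting set
    ceph pg 2.5 query                      # shows peering state and what it is blocked on
    ceph pg 2.5 mark_unfound_lost revert   # only applies to "unfound" objects, hence the message above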
[22:20] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[22:20] * ChanServ sets mode +v andreask
[22:22] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:23] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) has joined #ceph
[22:25] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:27] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[22:28] * sroy (~sroy@207.96.182.162) Quit (Quit: Quitte)
[22:28] <loicd> houkouonchi-work: the problem had nothing to do with lxc after all. It was compiled with AVX instructions ( http://en.wikipedia.org/wiki/Advanced_Vector_Extensions ) but vercoi02 has no AVX features. Nothing complicated, just my mistake. Sorry for the noise.
[22:29] <houkouonchi-work> ah yeah i think AVX is sandybridge right?
[22:29] <houkouonchi-work> which are the newest dual-CPU Xeons you can buy ATM I believe
[22:29] <houkouonchi-work> doing a 13.3 trillion digit pi calculation on my home machine which is sandybridge using a pi program compiled with AVX =)
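The feature mismatch loicd describes is quick to check for on any build host:
    grep -q avx /proc/cpuinfo && echo "AVX present" || echo "no AVX"
    # code compiled with -mavx dies with "illegal instruction" where this reports no AVX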
[22:30] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[22:34] * skeenan (~Adium@8.21.68.242) Quit (Quit: Leaving.)
[22:34] * allsystemsarego (~allsystem@50c25c2.test.dnsbl.oftc.net) Quit (Quit: Leaving)
[22:35] * skeenan (~Adium@8.21.68.242) has joined #ceph
[22:36] <JoeGruher> with openstack and ceph are there any clever ways to do non-live migration that doesn't involve a shared filesystem? or do you just have to set up a share, perhaps backed by an RBD?
[22:37] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[22:37] <skeenan> why not just use rbd-backed instances?
[22:37] <Fruit> JoeGruher: I've been thinking of changing the host in the openstack mysql database and doing something like nova reset --hard
[22:38] <Fruit> not very pretty though.
[22:39] * fdmanana (~fdmanana@bl5-172-157.dsl.telepac.pt) Quit (Quit: Leaving)
[22:39] <Fruit> actually by default it tries to copy the directory by ssh I think?
[22:39] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[22:39] <mjevans> skeenan: have you examined the binary crush map with the bad mapping list mode?
[22:39] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[22:40] <skeenan> mjevans: nope… relatively new to this.
[22:40] <mjevans> skeenan: I had an issue where my instructions weren't quite interpreted as I expected, so I'd get insufficient numbers of copies. Thus I had to perform some slight modifications.
[22:41] * garphy is now known as garphy`aw
[22:41] <JoeGruher> fruit: yeah i agree just doing the live migration with rbd-backed is easier, however we have a project where we want to test non-live migrations
[22:42] <skeenan> mjevans: how would i do that?
[22:42] <Fruit> JoeGruher: the default scp behavior is not an option? it only copies a tiny instance directory
[22:42] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[22:44] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:48] * sprachgenerator (~sprachgen@130.202.135.211) Quit (Quit: sprachgenerator)
[22:48] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) has joined #ceph
[22:49] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[22:50] <skeenan> mjevans: thanks. no bad mappings in here
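The bad-mappings check referred to above is presumably crushtool's test mode; a sketch assuming rule 0 and 3 replicas:
    ceph osd getcrushmap -o crush.bin
    crushtool -i crush.bin --test --show-bad-mappings --rule 0 --num-rep 3
    # prints every input that maps to fewer than 3 OSDs; no output means no bad mappings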
[22:51] * zerick (~eocrospom@190.114.248.76) has joined #ceph
[22:52] <mjevans> skeenan: then you'll want to investigate the other troubleshooting pages
[22:56] <skeenan> thx
[22:57] <ponyofdeath> hi, how can i do rbd delete -p pool imagename/name
[22:57] <ponyofdeath> it has an / in the name
[22:58] <bens> quote and escape 'imagename\/name'
[22:58] * zerick (~eocrospom@190.114.248.76) Quit (Read error: Operation timed out)
[22:58] <bens> I am guessing.
[22:59] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Leaving...)
[23:00] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[23:00] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:00] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) has joined #ceph
[23:01] <ponyofdeath> bens: tried
[23:01] <ponyofdeath> rbd: error opening pool libvirt\: (2) No such file or directory
[23:02] * BManojlovic (~steki@212.200.65.135) has joined #ceph
[23:04] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[23:05] <joshd1> ponyofdeath: rbd rm pool/imagename works for me, even if imagename contains /
[23:06] <ponyofdeath> joshd1: cool that worked
[23:06] <ponyofdeath> vladi@prod-ent-ceph03:~$ sudo rbd rm 'libvirt/libvirt/9b49ff0e-9491-4397-b193-f1f1709aa342_v2'
[23:07] <bens> what didn't work?
[23:07] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[23:08] * Svedrin (svedrin@ketos.funzt-halt.net) Quit (Quit: Starved on the internet)
[23:11] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[23:13] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[23:20] * mozg (~andrei@host86-184-125-218.range86-184.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[23:21] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[23:24] * mtanski (~mtanski@69.193.178.202) Quit (Ping timeout: 480 seconds)
[23:28] * dmsimard (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[23:28] * nrs_ (~nrs@ool-435376d0.dyn.optonline.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:28] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[23:30] * TMM (~hp@c97185.upc-c.chello.nl) has joined #ceph
[23:30] * mozg (~andrei@82.150.98.65) has joined #ceph
[23:32] * wschulze (~wschulze@p54BEDDB2.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[23:37] * garphy`aw is now known as garphy
[23:37] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:40] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:41] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[23:41] * garphy is now known as garphy`aw
[23:57] * BManojlovic (~steki@212.200.65.135) Quit (Ping timeout: 480 seconds)
[23:59] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[23:59] * JoeGruher (~JoeGruher@134.134.137.75) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.