#ceph IRC Log


IRC Log for 2014-03-07

Timestamps are in GMT/BST.

[0:00] * sputnik13 (~sputnik13@client64-80.sdsc.edu) Quit ()
[0:01] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[0:03] * yuriw1 is now known as yuriw
[0:06] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[0:07] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[0:11] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[0:14] * fdmanana (~fdmanana@bl9-171-73.dsl.telepac.pt) Quit (Quit: Leaving)
[0:17] * al (quassel@niel.cx) Quit (Ping timeout: 480 seconds)
[0:18] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[0:29] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[0:39] * dmsimard (~Adium@ap03.wireless.co.mtl.iweb.com) Quit (Quit: Leaving.)
[0:39] * garphy`aw is now known as garphy
[0:44] * garphy is now known as garphy`aw
[0:49] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[0:53] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:55] * laviandra (~lavi@eth-seco21th2-46-193-64-32.wb.wifirst.net) Quit (Ping timeout: 480 seconds)
[1:04] * mtanski (~mtanski@69.193.178.202) Quit (Ping timeout: 480 seconds)
[1:04] <taras> geraintjones: why was that ironic?
[1:05] <geraintjones> its the openstack bare metal project
[1:05] <geraintjones> https://github.com/openstack/ironic
[1:05] <taras> yeah the not quite baked one
[1:06] <taras> apparently nova baremetal is the driver to use until tripleo ships
[1:06] <taras> so one can use ironic
[1:08] <taras> geraintjones: so koding.com is the workload for your openstack?
[1:08] <geraintjones> yea
[1:08] <taras> cool
[1:09] <taras> that's a really neat setup for a neat problem
[1:09] <geraintjones> thanks
[1:10] <taras> geraintjones: do you just roll a normal openstack setup?
[1:10] <taras> or some variant of it
[1:10] <geraintjones> yeah we just use the ubuntu cloud repos
[1:10] <geraintjones> nothing special
[1:10] <taras> geraintjones: and you dont hate your life?
[1:11] <geraintjones> nope not at all
[1:11] <taras> i mean as a newbie in this
[1:11] <taras> openstack looks like hell
[1:13] <taras> glad to see a usecase like this outside of their marketing
[1:15] <geraintjones> yeah its "interesting"
[1:16] <geraintjones> but really once you get your head around the way it does things its actually just a simple API on KVM etc
[1:16] <taras> yeah api control is a big thing for me
[1:16] <taras> we move so fast with apis
[1:16] <geraintjones> about the steepest learning curve is the networking stuff
[1:16] <taras> and so slow with real hw
[1:16] <geraintjones> if you can avoid using OpenVSwitch i recommend it
[1:17] <taras> yeah we noticed that
[1:20] <taras> or most fun part
[1:20] <taras> is going to be figuring out how to bare metal macs :(
[1:20] <taras> atleast the ceph part of this whole thing looks reasonable
[1:21] * scuttlemonkey (~scuttlemo@99-6-62-94.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[1:21] * ChanServ sets mode +o scuttlemonkey
[1:26] <taras> geraintjones: what's the benefit of softlayer over rackspace for you?
[1:26] <geraintjones> the fact they could stand up a 10gbit peering session really quickly :)
[1:30] * The_Bishop (~bishop@2001:470:50b6:0:d837:b2d2:bc1d:8329) Quit (Ping timeout: 480 seconds)
[1:30] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[1:30] <taras> geraintjones: i just did the math
[1:30] <taras> i think you have an identical workload to what we run in aws
[1:30] <geraintjones> :)
[1:31] <taras> that's roughly where i want to go
[1:31] <taras> since atm we run this in the most expensive way possible
[1:31] <taras> geraintjones: do you have a lot in object storage?
[1:32] <geraintjones> nah we use it for rbd
[1:32] <geraintjones> which of course means lots of objects :)
[1:32] <geraintjones> but we don't use them directly
[1:32] <taras> yup
[1:32] <taras> ok
[1:32] <taras> i'd like to use instance store for my compute nodes
[1:33] <taras> and object storage for shared needs
[1:33] <geraintjones> yeah we use instance store for the VM disks - the images are in ceph and they get used as COW
[1:34] <mo-> wait, what was that about avoiding openvswitch? that stuff is currently being betatested for proxmox but I somehow never saw the appeal to it, its just a fancy wrapper for iptables, isnt it?
[1:34] <mo-> (and brctl)
[1:34] <geraintjones> our compute nodes are all SSD, so the preference is instance store - but for bulk storage (backups etc) we use RBD
[1:34] <geraintjones> but all of our LXC VMs are RBD backed
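The copy-on-write flow geraintjones describes maps onto RBD layering: snapshot a golden image, protect the snapshot, then clone it per VM so each child stores only divergent data. A minimal sketch; the "images" and "vms" pools and the base image name are hypothetical:

    rbd snap create images/ubuntu-12.04@golden        # freeze the base image
    rbd snap protect images/ubuntu-12.04@golden       # clones require a protected snapshot
    rbd clone images/ubuntu-12.04@golden vms/vm-0001  # COW child of the snapshot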
[1:35] <taras> geraintjones: so those are 12core xeons?
[1:35] <taras> per box
[1:35] <geraintjones> no OVS is an implementation of OpenFlow - it implements bridging, but it also does OpenFlow
[1:36] <geraintjones> its more like VMware's Distributed vSwitch
[1:36] * reed (~reed@50-0-92-79.dsl.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[1:36] <mo-> well bridging is what brctl is for. flow control being rate limiting? thatd be iptables
[1:36] <geraintjones> nah we have 12 * dual hex core and 10 * dual octacore
[1:37] <geraintjones> nah flow control being - how to get packet from VM A to VM B
[1:37] <taras> geraintjones: and i assume you keep ceph and compute as separate nodes?
[1:37] <geraintjones> We do - but I don't think its really necessary.
[1:37] * The_Bishop (~bishop@2001:470:50b6:0:d837:b2d2:bc1d:8329) has joined #ceph
[1:37] <taras> metacloud suggests mixing storage and compute nodes
[1:37] <taras> that seems a bit sketchy
[1:38] <geraintjones> I know some guys who put a small (200gb) OSD in each compute node
[1:38] <geraintjones> and do their ceph that way
[1:38] <geraintjones> we have dedicated storage boxes tho
[1:38] <mo-> well 1 OSD on a machine shouldnt be too taxing. so why not make use of the available hard drive slots in those compute nodes
[1:38] <geraintjones> 4 of them - currently 60 3TB disks.
[1:39] <geraintjones> all with their journals on fusion io's
[1:39] <geraintjones> mo- is right, we just couldn't get the volume of storage we needed out of 22 boxes doing it that way
[1:40] <geraintjones> plus they are all 2.5 disks - which really limits you
[1:40] <taras> makes sense
[1:40] <taras> i heard more smaller disks is better
[1:40] <mo-> well nobody said that these would be ALL the OSDs you have, you can complement this with some actual storage boxes
[1:42] <taras> have you guys done crazy stuff like spin up ceph remotely?
[1:42] <taras> eg on softlayer stuff?
[1:42] <mo-> err I dont follow, sorry
[1:43] <mo-> ah thats some cloud thing, then no
[1:44] <geraintjones> we did indeed overflow some OSDs to SL
[1:44] <geraintjones> i wouldn't wanna do it on a high latency link
[1:44] <geraintjones> but ~3ms was fine
[1:45] <taras> geraintjones: so this was on bare metal?
[1:45] <taras> or vms
[1:46] <geraintjones> we always do the storage on bare
[1:46] <geraintjones> we did it on hs1.8xl's in AWS about a year ago
[1:46] <geraintjones> and it worked okay
[1:46] <geraintjones> only issue with AWS is if you reboot - you lose your storage
[1:47] <geraintjones> which on those instances equates to 48TB :)
[1:47] <geraintjones> and yes - more smaller is better than less bigger
[1:48] <geraintjones> you have a lot less to re-replicate in a failure
[1:48] <geraintjones> but our product is mostly free - so we lean towards more for less :)
[1:51] * JC (~JC@2607:f298:a:607:6419:a166:ae0e:5c2b) has joined #ceph
[1:52] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:55] <taras> geraintjones: thanks for the info
[1:55] * KevinPerks (~Adium@74.122.167.244) Quit (Quit: Leaving.)
[1:55] <taras> geraintjones: since our workloads are kind of similar, mind if i bug you again in a few weeks?
[1:56] * KevinPerks (~Adium@74.122.167.244) has joined #ceph
[1:56] * KevinPerks (~Adium@74.122.167.244) Quit ()
[1:56] * dmsimard (~Adium@69.165.206.93) has joined #ceph
[1:57] * zere (~matt@asklater.com) has left #ceph
[1:58] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[2:01] * scuttlemonkey (~scuttlemo@99-6-62-94.lightspeed.livnmi.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[2:01] * dmsimard (~Adium@69.165.206.93) Quit ()
[2:02] <geraintjones> go for it
[2:04] * nwat (~textual@eduroam-252-224.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:04] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[2:05] * JC (~JC@2607:f298:a:607:6419:a166:ae0e:5c2b) Quit (Quit: Leaving.)
[2:07] * joef (~Adium@2620:79:0:131:c8b0:3332:538e:4f79) Quit (Quit: Leaving.)
[2:09] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[2:10] * dmsimard1 (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[2:10] * dmsimard1 (~Adium@69-165-206-93.cable.teksavvy.com) Quit ()
[2:10] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit (Read error: Connection reset by peer)
[2:12] * JC (~JC@2607:f298:a:607:ed25:4e93:8d93:cd2d) has joined #ceph
[2:13] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[2:13] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit ()
[2:14] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[2:15] * ircolle (~Adium@2601:1:8380:2d9:1893:b260:8be1:49e8) Quit (Quit: Leaving.)
[2:17] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:18] * sroy (~sroy@96.127.230.203) has joined #ceph
[2:28] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:30] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:34] * KevinPerks (~Adium@ip-64-134-190-6.public.wayport.net) has joined #ceph
[2:34] * ivotron (~ivotron@2601:9:2700:178:c1bd:6540:b7b7:ba35) has joined #ceph
[2:35] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:35] * zerick (~eocrospom@190.187.21.53) Quit (Quit: Saliendo)
[2:37] * ivotron_ (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) has joined #ceph
[2:38] * vata (~vata@2607:fad8:4:6:33:43e5:3624:6097) Quit (Quit: Leaving.)
[2:39] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) has joined #ceph
[2:39] * ChanServ sets mode +o scuttlemonkey
[2:40] * JC (~JC@2607:f298:a:607:ed25:4e93:8d93:cd2d) Quit (Quit: Leaving.)
[2:42] * ivotron (~ivotron@2601:9:2700:178:c1bd:6540:b7b7:ba35) Quit (Ping timeout: 480 seconds)
[2:42] * zapotah_ (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) has joined #ceph
[2:44] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) Quit (Ping timeout: 480 seconds)
[2:50] * sroy (~sroy@96.127.230.203) Quit (Quit: Quitte)
[2:51] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:55] * ivotron_ (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[2:56] * KevinPerks (~Adium@ip-64-134-190-6.public.wayport.net) Quit (Quit: Leaving.)
[2:56] * ivotron (~ivotron@2601:9:2700:178:30a5:d556:bec:39ea) has joined #ceph
[2:57] * JC (~JC@38.122.20.226) has joined #ceph
[3:04] * glzhao (~glzhao@220.181.11.232) has joined #ceph
[3:09] * erkules_ (~erkules@port-92-193-29-150.dynamic.qsc.de) has joined #ceph
[3:13] * JC (~JC@38.122.20.226) Quit (Quit: Leaving.)
[3:13] * fabiocba (~fabiocbal@187.114.205.253) has joined #ceph
[3:16] * laviandra (~lavi@eth-seco21th2-46-193-64-32.wb.wifirst.net) has joined #ceph
[3:16] * erkules (~erkules@port-92-193-86-125.dynamic.qsc.de) Quit (Ping timeout: 480 seconds)
[3:17] * haomaiwa_ (~haomaiwan@49.4.189.43) Quit (Remote host closed the connection)
[3:17] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[3:17] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:20] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) Quit (Quit: ZNC - http://znc.in)
[3:20] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) has joined #ceph
[3:22] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) Quit ()
[3:23] * carter (~carter@2600:3c03::f03c:91ff:fe6e:6c01) has joined #ceph
[3:28] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[3:28] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[3:38] * yguang11_ (~yguang11@2406:2000:ef96:e:4591:3292:6a1f:f5a2) has joined #ceph
[3:38] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[3:56] * kaizh (~kaizh@128-107-239-233.cisco.com) Quit (Remote host closed the connection)
[3:57] * sarob (~sarob@2001:4998:effd:600:9100:4a31:919c:6bc7) has joined #ceph
[3:58] * markbby (~Adium@168.94.245.3) has joined #ceph
[4:05] * Boltsky (~textual@office.deviantart.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[4:13] * ivotron (~ivotron@2601:9:2700:178:30a5:d556:bec:39ea) Quit (Remote host closed the connection)
[4:14] * ivotron (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) has joined #ceph
[4:20] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[4:21] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (Read error: Operation timed out)
[4:30] * fabiocba (~fabiocbal@187.114.205.253) Quit (Read error: Operation timed out)
[4:31] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[4:35] * JC (~JC@rrcs-67-53-127-42.west.biz.rr.com) has joined #ceph
[4:44] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[4:45] * ivotron (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[5:15] * haomaiwa_ (~haomaiwan@117.79.232.197) has joined #ceph
[5:19] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:21] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[5:21] * JC1 (~JC@rrcs-67-53-127-42.west.biz.rr.com) has joined #ceph
[5:22] * nhm (~nhm@65-128-159-155.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[5:23] * Vacum_ (~vovo@88.130.220.51) has joined #ceph
[5:27] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[5:28] * JC (~JC@rrcs-67-53-127-42.west.biz.rr.com) Quit (Ping timeout: 480 seconds)
[5:29] * Vacum (~vovo@88.130.211.172) Quit (Ping timeout: 480 seconds)
[5:34] * Cube (~Cube@66-87-64-62.pools.spcsdns.net) Quit (Quit: Leaving.)
[5:37] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:37] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[5:44] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[5:47] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[5:53] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:10] * utkarshsins (~utkarshsi@115.252.166.179) has joined #ceph
[6:14] * sarob_ (~sarob@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[6:20] * sarob (~sarob@2001:4998:effd:600:9100:4a31:919c:6bc7) Quit (Ping timeout: 480 seconds)
[6:22] * sarob_ (~sarob@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[6:27] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[6:35] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:36] * JC1 (~JC@rrcs-67-53-127-42.west.biz.rr.com) Quit (Quit: Leaving.)
[6:37] * hasues (~hazuez@146.sub-174-237-36.myvzw.com) has joined #ceph
[6:38] * tiger (~textual@58.213.102.114) has joined #ceph
[6:46] * ivotron (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) has joined #ceph
[6:53] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:54] * kaizh (~kaizh@128-107-239-233.cisco.com) has joined #ceph
[6:54] * ivotron (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:56] * kaizh_ (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[6:56] * kaizh (~kaizh@128-107-239-233.cisco.com) Quit (Read error: Connection reset by peer)
[6:56] * tiger (~textual@58.213.102.114) Quit (Quit: Textual IRC Client: www.textualapp.com)
[6:57] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[7:01] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[7:06] * plantain (~plantain@106.187.96.118) Quit (Remote host closed the connection)
[7:07] * gaveen (~gaveen@220.247.234.28) has joined #ceph
[7:11] * hasues (~hazuez@146.sub-174-237-36.myvzw.com) Quit (Quit: Leaving.)
[7:11] * utkarshsins (~utkarshsi@115.252.166.179) Quit (Quit: leaving)
[7:12] * kaizh_ (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[7:12] * plantain (~plantain@106.187.96.118) has joined #ceph
[7:16] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[7:20] * hasues (~hazuez@146.sub-174-237-36.myvzw.com) has joined #ceph
[7:28] * shang (~ShangWu@42-64-142-195.dynamic-ip.hinet.net) has joined #ceph
[7:43] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[7:43] * kaizh (~kaizh@128-107-239-233.cisco.com) has joined #ceph
[7:50] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Ping timeout: 480 seconds)
[7:51] * shang (~ShangWu@42-64-142-195.dynamic-ip.hinet.net) Quit (Ping timeout: 480 seconds)
[7:58] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[8:04] * mkoderer (uid11949@id-11949.ealing.irccloud.com) has joined #ceph
[8:06] * kaizh_ (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[8:06] * kaizh (~kaizh@128-107-239-233.cisco.com) Quit (Read error: Connection reset by peer)
[8:12] * gdavis33 (~gdavis@38.122.12.254) has joined #ceph
[8:15] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:16] * hasues (~hazuez@146.sub-174-237-36.myvzw.com) Quit (Quit: Leaving.)
[8:22] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[8:22] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[8:23] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:27] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[8:29] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[8:32] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[8:32] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[8:34] * kaizh_ (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[8:34] * kaizh (~kaizh@128-107-239-233.cisco.com) has joined #ceph
[8:36] * kaizh (~kaizh@128-107-239-233.cisco.com) Quit (Remote host closed the connection)
[8:46] * haomaiwa_ (~haomaiwan@117.79.232.197) Quit (Ping timeout: 480 seconds)
[8:48] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:48] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[8:56] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:59] * sleinen (~Adium@2001:620:0:26:f1ad:716a:15a4:ee97) has joined #ceph
[9:05] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:15] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: Some folks are wise, and some otherwise.)
[9:18] * hjjg (~hg@p3EE32208.dip0.t-ipconnect.de) has joined #ceph
[9:18] * garphy`aw is now known as garphy
[9:21] * rendar (~s@host98-179-dynamic.23-79-r.retail.telecomitalia.it) has joined #ceph
[9:35] * Jezz (~Jezz@103.251.108.4) has joined #ceph
[9:37] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[9:38] * Jezz (~Jezz@103.251.108.4) Quit ()
[9:40] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[9:42] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[9:42] * ChanServ sets mode +v andreask
[9:42] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Read error: No route to host)
[9:43] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:44] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[9:47] * ajazdzewski (~quassel@lpz-66.sprd.net) has joined #ceph
[9:49] * sarob (~sarob@2601:9:7080:13a:ddbb:894:4bfe:9f4e) has joined #ceph
[9:49] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[9:50] * dvanders (~dvanders@dvanders-air.cern.ch) Quit (Ping timeout: 480 seconds)
[9:54] * yguang11_ (~yguang11@2406:2000:ef96:e:4591:3292:6a1f:f5a2) Quit (Remote host closed the connection)
[9:57] * sarob (~sarob@2601:9:7080:13a:ddbb:894:4bfe:9f4e) Quit (Ping timeout: 480 seconds)
[9:58] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[9:59] * gaveen (~gaveen@220.247.234.28) Quit (Read error: Connection reset by peer)
[9:59] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:00] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[10:00] * gaveen (~gaveen@220.247.234.28) has joined #ceph
[10:00] * haomaiwang (~haomaiwan@49.4.189.43) has joined #ceph
[10:07] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Quit: Leaving)
[10:08] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[10:15] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) has joined #ceph
[10:16] <fghaas> @leseb, if you're around, I'd appreciate if we could have a quick word about https://github.com/enovance/puppet-ceph/pull/45
[10:16] <cephalobot> fghaas: Error: "leseb," is not a valid command.
[10:19] <fghaas> leseb: ^^
[10:20] * laviandra (~lavi@eth-seco21th2-46-193-64-32.wb.wifirst.net) Quit (Remote host closed the connection)
[10:21] * ismell (~ismell@host-64-17-89-79.beyondbb.com) Quit (Ping timeout: 480 seconds)
[10:28] * haomaiwang (~haomaiwan@49.4.189.43) Quit (Remote host closed the connection)
[10:29] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[10:29] * al (quassel@niel.cx) has joined #ceph
[10:36] * The_Bishop (~bishop@2001:470:50b6:0:d837:b2d2:bc1d:8329) Quit (Ping timeout: 480 seconds)
[10:40] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:41] * The_Bishop (~bishop@2001:470:50b6:0:d837:b2d2:bc1d:8329) has joined #ceph
[10:42] * al (quassel@niel.cx) Quit (Ping timeout: 480 seconds)
[10:45] * al (quassel@niel.cx) has joined #ceph
[10:48] * ivotron (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) has joined #ceph
[10:51] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[10:52] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[10:54] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[10:54] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[10:55] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[10:55] * ChanServ sets mode +v andreask
[10:56] * ivotron (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[10:58] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[10:59] * ivotron_ (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) has joined #ceph
[11:00] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:07] * lofejndif (~lsqavnbok@bolobolo2.torservers.net) has joined #ceph
[11:20] * dvanders (~dvanders@dvanders-air.cern.ch) has joined #ceph
[11:27] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[11:28] * thb (~me@2a02:2028:1da:5e0:6267:20ff:fec9:4e40) has joined #ceph
[11:51] * allsystemsarego (~allsystem@188.26.167.156) has joined #ceph
[11:53] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:01] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:03] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[12:06] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:09] * gaveen (~gaveen@220.247.234.28) Quit (Remote host closed the connection)
[12:13] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[12:13] * lofejndif (~lsqavnbok@9YYAAJMSH.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[12:15] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:24] * Cube (~Cube@12.248.40.138) has joined #ceph
[12:25] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[12:28] * glzhao (~glzhao@220.181.11.232) Quit (Quit: leaving)
[12:35] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:51] <jerker> The instructions here refer to a noarch repo that does not exist: http://ceph.com/docs/master/install/install-vm-cloud/
[12:54] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[12:56] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[13:00] * zapotah_ is now known as zapotah
[13:02] * t0rn (~ssullivan@c-24-11-198-35.hsd1.mi.comcast.net) has joined #ceph
[13:02] * fdmanana (~fdmanana@bl9-171-73.dsl.telepac.pt) has joined #ceph
[13:03] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) has joined #ceph
[13:04] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:11] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[13:11] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Ping timeout: 480 seconds)
[13:11] * andreask (~andreask@zid-vpnn020.uibk.ac.at) has joined #ceph
[13:11] * ChanServ sets mode +v andreask
[13:18] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) Quit (Quit: Leaving.)
[13:19] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[13:36] * yanzheng (~zhyan@134.134.139.74) has joined #ceph
[13:38] * toabctl (~toabctl@toabctl.de) Quit (Quit: WeeChat 0.3.7)
[13:39] * toabctl (~toabctl@toabctl.de) has joined #ceph
[13:41] * tserong (~tserong@203-57-208-132.dyn.iinet.net.au) Quit (Quit: Leaving)
[13:53] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[13:55] * markbby (~Adium@168.94.245.2) has joined #ceph
[13:57] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[13:57] * sarob (~sarob@2601:9:7080:13a:4d7d:6d33:537d:b5c3) has joined #ceph
[13:59] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit ()
[14:05] * sarob (~sarob@2601:9:7080:13a:4d7d:6d33:537d:b5c3) Quit (Ping timeout: 484 seconds)
[14:07] * sarob (~sarob@2601:9:7080:13a:559f:2544:fc8d:d38f) has joined #ceph
[14:08] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[14:15] * sarob (~sarob@2601:9:7080:13a:559f:2544:fc8d:d38f) Quit (Ping timeout: 480 seconds)
[14:15] * andreask (~andreask@zid-vpnn020.uibk.ac.at) Quit (Read error: Connection reset by peer)
[14:25] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[14:33] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) has joined #ceph
[14:37] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) Quit ()
[14:40] * yanzheng (~zhyan@134.134.139.74) Quit (Remote host closed the connection)
[14:46] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:52] * hjjg_ (~hg@p3EE32CA8.dip0.t-ipconnect.de) has joined #ceph
[14:54] * hjjg (~hg@p3EE32208.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[14:57] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[15:05] * fabiocba (~fabiocbal@187.114.197.220) has joined #ceph
[15:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:08] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) has joined #ceph
[15:10] * sroy (~sroy@207.96.182.162) has joined #ceph
[15:15] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[15:21] * fabiocba_ (~fabiocbal@187.114.200.44) has joined #ceph
[15:21] * fabiocba is now known as Guest2514
[15:21] * fabiocba_ is now known as fabiocba
[15:25] * Guest2514 (~fabiocbal@187.114.197.220) Quit (Read error: Connection reset by peer)
[15:37] * BillK (~BillK-OFT@124-148-67-206.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:38] <jerker> Regarding the discussion on the mailing list "constraining crush placement possibilities"...: Use erasure coding and avoid the problem. Fail any three nodes
[15:40] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[15:46] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[15:46] <infernix> by chance I ran into NVDIMM support on supermicro boards
[15:46] <infernix> has anyone entertained the idea of using those for journals?
[15:46] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[15:49] * ircolle (~Adium@2601:1:8380:2d9:3492:c803:7280:4cbf) has joined #ceph
[15:53] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:54] * t0rn (~ssullivan@c-24-11-198-35.hsd1.mi.comcast.net) has left #ceph
[15:57] * ircolle (~Adium@2601:1:8380:2d9:3492:c803:7280:4cbf) Quit (Quit: Leaving.)
[15:58] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[15:59] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) has joined #ceph
[16:01] * ismell (~ismell@host-64-17-89-79.beyondbb.com) has joined #ceph
[16:02] * mattt_ (~textual@92.52.76.140) has joined #ceph
[16:02] * markbby (~Adium@168.94.245.1) has joined #ceph
[16:03] * ircolle (~Adium@2601:1:8380:2d9:783e:3285:b35a:faeb) has joined #ceph
[16:03] * imriz (~imriz@109.65.126.90) has joined #ceph
[16:05] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[16:05] * mattt_ is now known as mattt
[16:08] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[16:10] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[16:16] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:17] * tsnider (~oftc-webi@216.240.30.25) has joined #ceph
[16:18] * tiger (~textual@60.55.10.131) has joined #ceph
[16:18] * tiger (~textual@60.55.10.131) Quit ()
[16:19] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[16:24] * sleinen1 (~Adium@2001:620:0:26:382c:6333:b2c8:24c6) has joined #ceph
[16:28] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[16:29] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[16:29] * ChanServ sets mode +v andreask
[16:31] * sleinen (~Adium@2001:620:0:26:f1ad:716a:15a4:ee97) Quit (Ping timeout: 480 seconds)
[16:41] * tdb (~tdb@willow.kent.ac.uk) has joined #ceph
[16:41] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:44] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) has joined #ceph
[16:55] * pvsa (~pvsa@89.204.137.40) has joined #ceph
[16:56] * bitblt (~don@128-107-239-234.cisco.com) has joined #ceph
[16:56] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Quit: Leaving.)
[16:59] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[17:04] * pvsa (~pvsa@89.204.137.40) Quit (Remote host closed the connection)
[17:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[17:06] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[17:06] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:16] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[17:16] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) has joined #ceph
[17:19] <bitblt> i have some volume-backed instances (openstack) that I am trying to live migrate. nova seems to have some check wherein it wants shared storage to do this. i tried using virsh but it wants to copy disk and log files to the target hypervisor too. do i really need shared storage to solve this problem?
[17:26] * dvanders (~dvanders@dvanders-air.cern.ch) Quit (Ping timeout: 480 seconds)
[17:29] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:30] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[17:33] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:40] * hjjg_ (~hg@p3EE32CA8.dip0.t-ipconnect.de) Quit (Read error: Operation timed out)
[17:42] * imriz (~imriz@109.65.126.90) Quit (Ping timeout: 480 seconds)
[17:43] <fghaas> leseb, here now?
[17:43] <fghaas> just in case you have time to talk about puppet-ceph
[17:44] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[17:47] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:50] * kaizh (~kaizh@c-50-131-203-4.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[17:50] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[17:50] * orion195 (~oftc-webi@213.244.168.133) Quit (Quit: Page closed)
[17:52] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:56] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:56] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[17:56] * joef (~Adium@2620:79:0:131:d90:13fe:5c19:de85) has joined #ceph
[17:59] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[18:02] <leseb> fghaas: I don't have much time now, but I asked someone to review your PR :)
[18:04] <fghaas> this is not about the PR, actually, I have a few more general questions
[18:04] <fghaas> should I just file issues?
[18:04] * sleinen1 (~Adium@2001:620:0:26:382c:6333:b2c8:24c6) Quit (Quit: Leaving.)
[18:04] * sleinen (~Adium@130.59.94.55) has joined #ceph
[18:05] <fghaas> (such as: why is this still using ceph-authtool when ceph -o file auth foo will do, why does it rely on sysv init rather than upstart, why doesn't this use ceph-deploy)
[18:05] <fghaas> but nevermind, I'll come back to that
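fghaas's aside above: when the mons are reachable, a key can be generated and written to a keyring file in one step with the ceph CLI, with no ceph-authtool round-trip. A minimal sketch; the client.glance user and its caps are hypothetical:

    # create (or fetch) the key and write the keyring in one step
    ceph auth get-or-create client.glance mon 'allow r' osd 'allow rwx pool=images' \
        -o /etc/ceph/ceph.client.glance.keyring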
[18:08] * b0e (~aledermue@juniper1.netways.de) Quit (Remote host closed the connection)
[18:09] <wrale> in a typical ceph-for-openstack-and-hadoop deployment of ceph, are IOPS beneficial in the backing storage of the MDS(s) or the MON(s)? If I have to pick one to go on an SSD, which should it be?
[18:10] <wrale> s/beneficial/more beneficial/g
[18:10] <kraken> wrale meant to say: in a typical ceph-for-openstack-and-hadoop deployment of ceph, are IOPS more beneficial/g in the backing storage of the MDS(s) or the MON(s)? If I have to pick one to go on an SSD, which should it be?
[18:10] <wrale> global fail :)
[18:10] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:11] <wrale> My guess is that the MON sees more random RW than the MDS, but I'm not sure.
[18:11] <leseb> fghaas: ceph-authtool doesn't require the mons to be up. It relies on sysv init because we started on debian but yes we could expand this to ubuntu as well by using a service module from puppet. it doesn't use ceph-deploy because ceph-deploy because this means: huge refactor moreover I won't use ceph-deploy.
[18:12] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:12] * sleinen (~Adium@130.59.94.55) Quit (Ping timeout: 480 seconds)
[18:13] <leseb> fghaas: sorry for the last sentence: " it doesn't use ceph-deploy because it means: huge refactor moreover I won't use ceph-deploy"
[18:14] <leseb> fghaas: for me ceph-deploy is more like an example that should be re-used by the config management systems. It shows the way to do things properly
[18:14] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:15] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Bye!)
[18:15] <leseb> fghaas: if I had to refactor something on puppet, I'll use ceph-disk instead
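ceph-disk, which leseb names as the refactoring target, drives the prepare/activate flow that the module otherwise scripts by hand. A minimal sketch, assuming a hypothetical /dev/sdb data disk:

    ceph-disk prepare --cluster ceph /dev/sdb   # partition the disk, create the FS, tag it for ceph
    ceph-disk activate /dev/sdb1                # mount the data partition and start the OSD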
[18:15] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[18:16] * markbby (~Adium@168.94.245.3) has joined #ceph
[18:17] * JC (~JC@38.122.20.226) has joined #ceph
[18:17] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[18:21] <wrale> Found this quote: "FYI, the MDS does not use any local storage -- it puts everything on the OSDs." http://en.it-usenet.org/thread/11905/8030/ .. i guess the MDS doesn't need SSD, then.
[18:22] * markbby (~Adium@168.94.245.4) has joined #ceph
[18:22] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:23] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:23] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[18:24] * fabiocba (~fabiocbal@187.114.200.44) Quit (Ping timeout: 480 seconds)
[18:27] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 20.0.1/20130409194949])
[18:28] <fghaas> wrale: no it doesn't, but lots of RAM does help
[18:29] * doubleg (~doubleg@69.167.130.11) Quit (Remote host closed the connection)
[18:29] <wrale> fghaas: sounds good.. these servers have lots of ram... I read on the ceph site that multiple mds servers are not yet supported.. do you know if this is still the case?
[18:30] * doubleg (~doubleg@69.167.130.11) has joined #ceph
[18:30] <wrale> i'd like to run six
[18:30] <fghaas> leseb: on at least one occasion it calls "ceph-authtool <blah> $(ceph auth get-or-create <blah)", so I'm not seeing the point of "doesn't require the mons to be up"
[18:30] <wrale> (or five, if quorum is a concern)
[18:30] <fghaas> MDSs don't contribute to quorum
[18:31] <tsnider> I have a feeling that having journals on SSDs only benefits writes and won't help read performance - is that generally true?
[18:31] <fghaas> wrale: you can run several, but (afaik) multiple active MDSs still isn't recommended, so this would be an active/backup config
[18:31] <wrale> fghaas: excellent answer.. thanks
[18:32] <wrale> Do MONs significantly benefit from being backed by SSDs?
[18:34] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[18:35] * garphy is now known as garphy`aw
[18:35] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) has joined #ceph
[18:38] <leseb> fghaas: I believe this is a leftover, ceph -n mon. auth get-or-create will work as well
[18:38] <leseb> fghaas: oh! https://github.com/enovance/puppet-ceph/issues/21
[18:38] <leseb> fghaas: proposed 8 months ago, but never got into the module...
[18:39] <fghaas> yeah that's true for a number of issues/PRs sitting in that repo.
[18:39] <fghaas> also, the keyring path is hardcoded *and* outdated
[18:40] <fghaas> a pull req enabling a decent choice for journals has also been pending for months, with zero feedback
[18:48] <leseb> fghaas: yes we should leave ceph-create-keys running for that
[18:50] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[18:51] <joao> wrale, for really large clusters, we've seen them behaving better; but I don't think we have hard data to back that up
[18:52] <wrale> joao: Thanks. My cluster will have 132 OSDs (66 physical nodes, 2 HDDs per) (also, each will have one SSD for OS + journals)
[18:53] <wrale> joao: I suppose that's probably mid-size.. ?
[18:53] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[18:54] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[18:56] * sleinen1 (~Adium@2001:620:0:26:317c:496f:437f:291c) has joined #ceph
[18:59] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[18:59] * sleinen2 (~Adium@2001:620:1000:3:300d:5d71:4146:f96b) has joined #ceph
[19:00] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Read error: Connection reset by peer)
[19:00] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:02] <joao> wrale, don't have the numbers to say how that matches against what's out there, but I would think you'll get away just fine using HDDs :)
[19:04] * ajazdzewski (~quassel@lpz-66.sprd.net) Quit (Remote host closed the connection)
[19:04] <loicd> wusui: can you steal the ceph hoody from the guy behind you (discreetly) and ship it to france ? (dachary 47 rue de metz, 31000 toulouse) :-)
[19:06] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:07] * sleinen1 (~Adium@2001:620:0:26:317c:496f:437f:291c) Quit (Ping timeout: 480 seconds)
[19:07] * sleinen2 (~Adium@2001:620:1000:3:300d:5d71:4146:f96b) Quit (Ping timeout: 480 seconds)
[19:08] * nwat (~textual@eduroam-237-196.ucsc.edu) has joined #ceph
[19:10] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[19:11] <wrale> joao: cool. thanks again
[19:14] <winston-d> hi, I would like to know what kind of parameter should I put into xml when my Ceph cluster name is *NOT* 'ceph' but something else.
[19:15] <winston-d> <source protocol ='rbd' name='rbd/rbd-0001' /> things like this doesn't work, I assume librbd will look for ceph.conf by default.
[19:16] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[19:16] * JC1 (~JC@2607:f298:a:607:837:cab1:23ff:392c) has joined #ceph
[19:17] * JC1 (~JC@2607:f298:a:607:837:cab1:23ff:392c) Quit ()
[19:17] * diegows (~diegows@190.190.5.238) has joined #ceph
[19:21] * mattt (~textual@92.52.76.140) Quit (Read error: Connection reset by peer)
[19:22] * JC (~JC@38.122.20.226) Quit (Ping timeout: 480 seconds)
[19:23] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[19:23] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit ()
[19:24] <joshd> winston-d: as long as your version of libvirt still accepts it, you can add arbitrary ceph options in the name attribute like name='pool/image:cluster=foo'
[19:25] <joshd> winston-d: in far future versions of libvirt we'll need to patch qemu to use their newer block device arg parsing so we can use libvirt's existing qemu arg passthrough
[19:28] <fghaas> joshd: wido is working on a libvirt patch to pass options to network drives
[19:28] <fghaas> comes in handy not just for rbd, but also for nfs pools (think NFS mount options)
[19:29] <joshd> fghaas: that'd be great, though I'm skeptical about the libvirt devs accepting any more kinds of arbitrary options passthrough
[19:30] <fghaas> he was suggesting adding <option name="foo" value="bar"> to the network drive element
[19:31] <fghaas> what's the currently supported way? I thought there was none
[19:32] <joshd> currently you can add key-value pairs to the image name separated by colons simply because that's what the command line looks like for qemu
[19:32] <joshd> later versions of libvirt disable that though
[19:32] <fghaas> my suggestion actually would have been to say to hell with it, have qemu-rbd and libvirt just initialize their rados connection with /etc/ceph/ceph.conf, but wido disliked that idea
[19:33] <joshd> well that's what qemu does, but sometimes you want per-volume settings
[19:33] * c74d (~c74d@2002:4404:712c:0:60e9:dd8f:9eee:74ec) Quit (Remote host closed the connection)
[19:33] * c74d (~c74d@2002:4404:712c:0:c97c:bd2c:2646:a817) has joined #ceph
[19:33] <bitblt> has anyone gotten ceph true live migration working with nova?
[19:33] <joshd> and libvirt devs would rather get rid of that entirely, since they don't want any dependence on files outside of libvirt's knowledge
[19:34] <fghaas> bitblt: are you booting from volume, or are you booting off of ephemeral disks with one attached cinder volume?
[19:35] * danieagle (~Daniel@186.214.58.22) has joined #ceph
[19:35] <bitblt> booting off volume
[19:35] <bitblt> i got a raw image, put it into glance, created a cinder volume from it, booted a nova instance from the cinder volume
[19:35] * JoeGruher (~JoeGruher@jfdmzpr01-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[19:35] <fghaas> and you are able to manually migrate with virsh migrate --live?
[19:36] <bitblt> no, it complains about permissions issues trying to copy disk config and logs to the target hypervisor's /var/lib/nova/instances
[19:36] <bitblt> using qemu+tcp, no auth
[19:36] <fghaas> using --tunnelled?
[19:37] <bitblt> i haven't tried that, ill give it a go now
[19:37] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[19:40] <fghaas> might also try --p2p
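For reference, the flags fghaas suggests combine into a single virsh invocation; a sketch, with a hypothetical domain name and a target hypervisor reachable over qemu+tcp:

    virsh migrate --live --p2p --tunnelled instance-0000000e qemu+tcp://target-hv/system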
[19:40] <winston-d> joshd: thx, tried that. Still got error like this: error : qemuMonitorTextAddDrive:2828 : operation failed: open disk image file failed
[19:41] <winston-d> joshd: btw, i'm using Ubuntu12.04LTS
[19:41] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Quit: Ex-Chat)
[19:41] <winston-d> not sure what went wrong this time.
[19:42] <joshd> winston-d: does it work if you specify conf=/path/to/file directly?
[19:44] * nhm (~nhm@65-128-159-155.mpls.qwest.net) has joined #ceph
[19:44] * ChanServ sets mode +o nhm
[19:44] <winston-d> joshd: that works
[19:46] <winston-d> joshd: thx!
[19:46] <joshd> winston-d: you're welcome!
[19:47] <joshd> for future reference: setting cluster there doesn't work because qemu reads the conf file before setting other options, so that you can override anything in the conf file
[19:47] <fghaas> joshd, is there going to be any future work on the glance rbd backend post-icehouse? or should people just switch to using cinder as their glance backend?
[19:47] <joshd> cluster should be treated specially like id though
[19:48] <joshd> fghaas: the cinder glance backend wouldn't support cloning in its current state and no one's using it afaik
[19:49] <fghaas> are there blueprints/plans for getting that cloning support into cinder, somehow?
[19:49] <fghaas> well, into glance working with cinder, you know what I mean :)
[19:49] <joshd> I'm not sure anyone's really working on the cinder glance backend
[19:50] <fghaas> ok...
[19:51] <fghaas> so are we still going to have to expose the glance backend URL to make cloning work, for the foreseeable future?
[19:51] <joshd> yes
[19:52] <joshd> that'd be true with the cinder backend as well most likely
[19:52] <winston-d> joshd: i agree, cluster is an essential parameter, should be treated differently.
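Putting joshd's answers together, the disk definition winston-d ends up with looks roughly like the sketch below; the pool, image, and conf path are hypothetical, and further colon-separated key=value options ride along in the name attribute the same way:

    <disk type='network' device='disk'>
      <source protocol='rbd' name='rbd/rbd-0001:conf=/etc/ceph/foo.conf'/>
      <target dev='vda' bus='virtio'/>
    </disk>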
[19:54] <joshd> fghaas: do you have issues with running an internal-only glance-api if you want to hide that detail?
[19:55] <fghaas> well it does seem suboptimal if there is a single service for which you can't expose the API
[19:56] <joshd> fghaas: you can expose it through another glance-api that doesn't show any image locations
[19:56] * JC (~JC@38.122.20.226) has joined #ceph
[19:58] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[20:01] <fghaas> oh, so two separate glance-api endpoints
[20:01] <fghaas> yeah, I guess that's an option
[20:06] <winston-d> fghaas: actually, HP has been doing similar for Nova (two endpoints, one for other service like Cinder only)
[20:07] <fghaas> winston-d: are you with HPCS?
[20:09] * haomaiwa_ (~haomaiwan@49.4.189.43) has joined #ceph
[20:10] <bitblt> fghaas, getting close I think. getting virDomainMigrateToURI3 errors, to which google does not explain
[20:10] * trevorgfrancis (~trevorgfr@user-0ccshl1.cable.mindspring.com) has joined #ceph
[20:11] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[20:11] <trevorgfrancis> Im a bit confused on deploying ceph. From what I understand you can zap a disk and that basically makes it entirely available to Ceph..ie I don't have to format it and mount it...is that right?
[20:12] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[20:16] * fghaas (~florian@91-119-140-244.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[20:22] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[20:24] * kaizh (~kaizh@128-107-239-235.cisco.com) has joined #ceph
[20:24] * JC (~JC@38.122.20.226) Quit (Quit: Leaving.)
[20:25] * sarob (~sarob@2001:4998:effd:600:25b0:aabd:b008:2a08) has joined #ceph
[20:29] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[20:30] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[20:32] * JC (~JC@2607:f298:a:607:1240:f3ff:fe9f:2dc8) has joined #ceph
[20:32] * wrencsok (~wrencsok@174.79.34.244) Quit (Quit: Leaving.)
[20:32] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[20:34] * JC (~JC@2607:f298:a:607:1240:f3ff:fe9f:2dc8) Quit ()
[20:34] * JC (~JC@2607:f298:a:607:f07e:d7b7:a20a:8bde) has joined #ceph
[20:35] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[20:37] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[20:39] * Pedras (~Adium@216.207.42.132) has joined #ceph
[20:41] * xarses_ (~andreww@12.164.168.117) Quit (Quit: Leaving)
[20:41] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[20:44] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[20:45] * JC (~JC@2607:f298:a:607:f07e:d7b7:a20a:8bde) Quit (Quit: Leaving.)
[20:50] * zidarsk8 (~zidar@89-212-16-128.static.t-2.net) has joined #ceph
[20:50] * zidarsk8 (~zidar@89-212-16-128.static.t-2.net) has left #ceph
[20:53] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[20:54] * markbby (~Adium@168.94.245.2) has joined #ceph
[20:55] * danieagle (~Daniel@186.214.58.22) Quit (Quit: Muito Obrigado por Tudo! :-))
[20:55] * JC (~JC@38.122.20.226) has joined #ceph
[20:57] <kitz> trevorgfrancis: zapping a disk kills the partition tables of the disk. When you then run ceph-deploy osd prepare on the drive it will allocate the whole drive for whatever it needs.
[20:57] <kitz> I usually zap my disks as a matter of course during the setup just to be thorough.
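The zap-then-prepare flow kitz describes, expressed as ceph-deploy invocations; the hostname and device are hypothetical:

    ceph-deploy disk zap node1:sdb      # wipe the partition table
    ceph-deploy osd prepare node1:sdb   # partition and format the whole drive for an OSD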
[20:58] * kaizh (~kaizh@128-107-239-235.cisco.com) Quit (Remote host closed the connection)
[21:08] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[21:16] * fabiocba (fabiocbalb@187.114.200.44) has joined #ceph
[21:18] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Leaving...)
[21:20] * JC (~JC@38.122.20.226) Quit (Quit: Leaving.)
[21:21] * mkoderer (uid11949@id-11949.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[21:23] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[21:25] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[21:26] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[21:32] * JC (~JC@38.122.20.226) has joined #ceph
[21:50] * sarob (~sarob@2001:4998:effd:600:25b0:aabd:b008:2a08) Quit (Remote host closed the connection)
[21:51] * sarob (~sarob@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:53] * dtalton2 (~don@ip24-255-48-170.tc.ph.cox.net) has joined #ceph
[21:55] * markbby (~Adium@168.94.245.4) has joined #ceph
[21:56] * imriz (~imriz@213.57.90.98) has joined #ceph
[21:56] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[21:57] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[21:59] * sarob (~sarob@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[22:00] * markbby (~Adium@168.94.245.4) has joined #ceph
[22:01] * bitblt (~don@128-107-239-234.cisco.com) Quit (Ping timeout: 480 seconds)
[22:02] * JC (~JC@38.122.20.226) Quit (Quit: Leaving.)
[22:02] * xarses_ is now known as xarses
[22:04] * sarob (~sarob@2001:4998:effd:600:79fd:1a98:81dd:dbe8) has joined #ceph
[22:06] * sroy (~sroy@207.96.182.162) Quit (Quit: Quitte)
[22:06] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[22:13] * JC (~JC@38.122.20.226) has joined #ceph
[22:13] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[22:19] * zidarsk8 (~zidar@2a01:260:4039:1:ea11:32ff:fe9a:870) has joined #ceph
[22:19] * zidarsk8 (~zidar@2a01:260:4039:1:ea11:32ff:fe9a:870) has left #ceph
[22:20] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[22:20] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[22:21] * al (quassel@niel.cx) Quit (Read error: Connection reset by peer)
[22:22] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Read error: No route to host)
[22:23] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[22:24] * ivotron_ (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[22:25] * ivotron (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) has joined #ceph
[22:25] * allsystemsarego (~allsystem@188.26.167.156) Quit (Quit: Leaving)
[22:27] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[22:34] * WarrenUsui (~Warren@2607:f298:a:607:c5a4:40bb:68ee:c717) Quit (Read error: Connection reset by peer)
[22:34] * warrenSusui (~Warren@2607:f298:a:607:c5a4:40bb:68ee:c717) Quit (Read error: Connection reset by peer)
[22:34] * WarrenUsui (~Warren@2607:f298:a:607:c5a4:40bb:68ee:c717) has joined #ceph
[22:34] * warrenSusui (~Warren@2607:f298:a:607:c5a4:40bb:68ee:c717) has joined #ceph
[22:39] * ivotron_ (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) has joined #ceph
[22:39] * ivotron (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[22:39] * joef (~Adium@2620:79:0:131:d90:13fe:5c19:de85) Quit (Quit: Leaving.)
[22:39] * monod (~monod@host74-245-dynamic.10-79-r.retail.telecomitalia.it) has joined #ceph
[22:54] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[22:58] * ivotron_ (~ivotron@c-50-150-124-250.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[22:59] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[23:00] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[23:01] * joef (~Adium@2620:79:0:131:f5cb:7a5e:4ba6:1a6e) has joined #ceph
[23:01] * sarob (~sarob@2001:4998:effd:600:79fd:1a98:81dd:dbe8) Quit (Remote host closed the connection)
[23:01] * sarob (~sarob@2001:4998:effd:600:79fd:1a98:81dd:dbe8) has joined #ceph
[23:04] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[23:07] * monod (~monod@host74-245-dynamic.10-79-r.retail.telecomitalia.it) Quit (Quit: Quit)
[23:08] * JC1 (~JC@2607:f298:a:607:bc14:6162:a891:841d) has joined #ceph
[23:09] * kaizh (~kaizh@128-107-239-234.cisco.com) has joined #ceph
[23:10] * sarob (~sarob@2001:4998:effd:600:79fd:1a98:81dd:dbe8) Quit (Ping timeout: 480 seconds)
[23:10] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[23:11] * JC2 (~JC@38.122.20.226) has joined #ceph
[23:12] * JC (~JC@38.122.20.226) Quit (Ping timeout: 480 seconds)
[23:12] * al (d@niel.cx) has joined #ceph
[23:13] * nwat (~textual@eduroam-237-196.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:13] * nwat (~textual@eduroam-237-196.ucsc.edu) has joined #ceph
[23:15] * JC (~JC@2607:f298:a:607:44df:b68c:c255:e4d2) has joined #ceph
[23:15] * nwat (~textual@eduroam-237-196.ucsc.edu) Quit ()
[23:16] * JC1 (~JC@2607:f298:a:607:bc14:6162:a891:841d) Quit (Ping timeout: 480 seconds)
[23:19] * JC2 (~JC@38.122.20.226) Quit (Ping timeout: 480 seconds)
[23:19] * sarob (~sarob@2001:4998:effd:600:bdfa:f380:22ca:1de9) has joined #ceph
[23:29] <darkfader> did anyone here make some disk io latency flamegraphs?
[23:29] <darkfader> i would love to but i feel just too stupid to understand how it would be done
[23:34] * rendar (~s@host98-179-dynamic.23-79-r.retail.telecomitalia.it) Quit ()
[23:35] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[23:39] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (Quit: neurodrone)
[23:41] * linuxkidd (~linuxkidd@rtp-isp-nat1.cisco.com) Quit (Quit: Leaving)
[23:43] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[23:45] * sarob (~sarob@2001:4998:effd:600:bdfa:f380:22ca:1de9) Quit (Remote host closed the connection)
[23:45] * sarob (~sarob@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:50] * sarob_ (~sarob@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:50] * sarob_ (~sarob@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[23:50] * sarob_ (~sarob@2001:4998:effd:600:c8c4:5762:e54a:709b) has joined #ceph
[23:53] * JC1 (~JC@2607:f298:a:607:8d9f:2c2:110f:e13e) has joined #ceph
[23:53] * sarob (~sarob@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[23:58] * sarob_ (~sarob@2001:4998:effd:600:c8c4:5762:e54a:709b) Quit (Ping timeout: 480 seconds)
[23:58] * trevorgfrancis (~trevorgfr@user-0ccshl1.cable.mindspring.com) Quit (Quit: trevorgfrancis)
[23:59] * JC (~JC@2607:f298:a:607:44df:b68c:c255:e4d2) Quit (Ping timeout: 480 seconds)
[23:59] * tsnider (~oftc-webi@216.240.30.25) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.