#ceph IRC Log

IRC Log for 2015-06-18

Timestamps are in GMT/BST.

[0:00] * moore_ (~moore@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[0:00] * moore (~moore@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Read error: Connection reset by peer)
[0:03] * Concubidated (~Adium@129.192.176.66) Quit (Quit: Leaving.)
[0:20] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) has joined #ceph
[0:21] <ska> Do I need ceph-deploy to manage ceph?
[0:21] <gleam> no
[0:21] <gleam> it can make things easier
[0:21] <doppelgrau> ska: no, e.g. I use ansible
[0:21] <ska> I'm using debian8 and I don't see it.. But this version of debian has 0.80.7-2 version.
[0:21] <ska> doppelgrau: me too..
[0:21] <ska> doppelgrau: can you share any of your plays?
[0:22] <evilrob00> I'm deploying it with ceph-deploy right now, but the long term plan is to use saltstack
[0:22] * xarses_ (~xarses@166.175.58.7) Quit (Ping timeout: 480 seconds)
[0:23] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[0:26] <doppelgrau> ska: they are based on https://github.com/ceph/ceph-ansible
[0:26] <ska> I have 3 nodes that I'll deploy symmetrically (as possible) each node will have one XFS partition and one CephFs partition.
[0:27] <doppelgrau> ska: but I have changed the OSD setup a bit (but not published it yet)
[0:27] <ska> Do I need to inform ceph of masters for Mon and MDS?
[0:27] <ska> Or will it decide via quorum?
[0:27] <doppelgrau> ska: ceph-masters?
[0:27] <ska> I suppose only one Active Mon and MDS.
[0:28] <doppelgrau> ska: they decide it automagically :)
[0:28] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Quit: Leaving.)
[0:28] <doppelgrau> ska: just put them in the config
[0:28] <ska> Cool. So I simply need to make all my configs exactly the same?
[0:28] <ska> Each one must know of the other.
[0:29] <doppelgrau> ska: same config is the easiest setup IMHO
[0:29] <ska> Awesome..
[0:29] * Concubidated (~Adium@199.119.131.10) has joined #ceph
[0:30] <doppelgrau> ska: but only mon and mds (perhaps rgw, but I'm not using that) must be specified, the osds find a monitor according to the config-file and the mons keep the current crush-tree
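
    A minimal sketch of the shared ceph.conf doppelgrau describes, assuming three nodes; the
    hostnames, addresses and fsid below are placeholders, and only the mon (and mds) entries
    need listing, since the OSDs locate a monitor from this file:

        [global]
        fsid = <cluster uuid>
        mon initial members = node1, node2, node3
        mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3

        [mds.node1]
        host = node1
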
[0:31] * nhm (~nhm@184-97-242-33.mpls.qwest.net) has joined #ceph
[0:31] * ChanServ sets mode +o nhm
[0:37] * johanni (~johanni@173.226.103.101) has joined #ceph
[0:39] * lcurtis (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[0:39] <johanni> Hey, I am trying to run the ceph community cookbook while setting a preset osd keyring value. In /var/lib/ceph/bootstrap-osd/<cluster_name>.keyring I set the value for the osd, however this gets overwritten during the ceph-disk prepare step. Any idea why?
[0:40] <ichavero> hello, how can i change the key for the admin user?
[0:42] <johanni> Ichavero: So you don't want the client admin key randomly generated as it usually is?
[0:43] <ichavero> johanni: yeah but it seems that i misconfigured the admin user so the key that i have is not valid to login
[0:44] <ichavero> johanni: i get this error while trying to run any command: http://fpaste.org/233341/81046143/
[0:44] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Quit: Leaving)
[0:45] * xarses_ (~xarses@12.10.113.130) has joined #ceph
[0:45] <ichavero> i think it's because i purged everything and started the cluster from scratch
[0:47] * jschmid (~jxs@ip9234f579.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[0:47] * haomaiwang (~haomaiwan@183.206.168.253) has joined #ceph
[0:51] * haomaiwa_ (~haomaiwan@183.206.168.253) Quit (Ping timeout: 480 seconds)
[0:54] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) Quit (Quit: It's just that easy)
[0:56] * SaintAardvark (~user@saintaardvark-1-pt.tunnel.tserv14.sea1.ipv6.he.net) has joined #ceph
[0:56] * oro (~oro@91.146.191.230) has joined #ceph
[0:57] <johanni> If the cluster has truly been started from scratch, then the admin key should be different. I don't think it's failing because you don't have the old admin key. Probably at this point I would detach any volumes, remove all files, and reboot the machine before running ceph-deploy again
[0:59] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[0:59] <ska> doppelgrau: in a OpenStack or large installation, would the RGW usually be present?
[1:00] <doppelgrau> ska: I think for openstack RGW can be utilized for some openstack-services, other large installations, depends…
[1:00] <ska> IS RGW essential for ceph?
[1:01] <lurbs> No, it's just if you want to provide a Swift and/or S3 compatible layer for object storage.
[1:01] <doppelgrau> ska: no, ceph itself only need the monitors and OSDs for a working (distributed and fault tolerant) objectstore
[1:02] <doppelgrau> ska: rbd, cephfs/mds and rgw are just 'possible' clients
[1:02] * ircolle (~Adium@2601:285:201:2bf9:e965:3daa:abd9:324c) Quit (Quit: Leaving.)
[1:03] <ska> doppelgrau: thanks...
[1:04] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[1:04] * oro (~oro@91.146.191.230) Quit (Ping timeout: 480 seconds)
[1:04] <ichavero> johanni: the only key that i can see in /etc/ceph/ceph.client.admin.keyring is the old key, i think i overwrote the file by mistake when i was redeploying
[1:04] <SaintAardvark> hi all -- i'm setting up Ceph and I'm running into some confusion with the "public network"/"cluster network" settings
[1:05] <SaintAardvark> I've got separate interfaces for the public/cluster networks, and the "public network"/"cluster network" settings are in /etc/ceph/ceph.conf
[1:05] <SaintAardvark> but looking at the output of "netstat", I seem to see a lot of connections on the *public* network between OSDs
[1:06] <SaintAardvark> I'm wondering how I can verify that, and if it's true how I can fix it
[1:06] <johanni> ichavero: maybe it didn't create a new key since it saw the old one? Not sure
[1:06] * dopesong_ (~dopesong@78-56-228-178.static.zebra.lt) Quit (Remote host closed the connection)
[1:07] <lurbs> SaintAardvark: That's normal. You'll see OSD <-> OSD connections on both networks.
[1:07] <ichavero> johanni: that was my first thought but since the old key does not work i think it did but i did something stupid and i can't find it. that's why i want to change the key
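
    A hedged sketch of one way out of a stale admin keyring, assuming you can still authenticate
    with a monitor's own key (paths follow the default layout; adjust cluster and host names):

        # run on a monitor host: authenticate as mon. and re-export the current
        # admin key over the stale local file
        ceph -n mon. -k /var/lib/ceph/mon/ceph-$(hostname -s)/keyring \
            auth get client.admin -o /etc/ceph/ceph.client.admin.keyring
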
[1:10] <SaintAardvark> lurbs: aha, thank you. Just to clarify: even though there are OSD<->OSD conns on the public network, all the object replication/recovery traffic (which is what I'm really worried about) will be strictly on the cluster network? (ref: http://ceph.com/docs/master/rados/configuration/network-config-ref/)
[1:11] <lurbs> Yep. It's reasonably easy to test that, if your cluster is quiet and you run a 'rados bench' or something.
[1:12] <SaintAardvark> lurbs: did not know about 'rados bench' -- thanks for that, and for the clarification!
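
    For reference, a sketch of the relevant ceph.conf settings and one way to check where the
    replication traffic actually flows; the subnets, pool name and interface are placeholders:

        [global]
        public network  = 192.168.1.0/24
        cluster network = 192.168.2.0/24

        # on an otherwise quiet cluster, generate write load and watch the
        # cluster-network interface for the replication traffic
        rados bench -p testpool 30 write --no-cleanup
        iftop -i eth1
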
[1:28] * fsimonce (~simon@host253-71-dynamic.3-87-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:29] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[1:38] * yguang11 (~yguang11@2001:4998:effd:600:5103:fe60:609:5578) Quit (Remote host closed the connection)
[1:41] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:41] * t4nk443 (~oftc-webi@64-7-156-32.border8-dynamic.dsl.sentex.ca) has joined #ceph
[1:43] * t4nk443 (~oftc-webi@64-7-156-32.border8-dynamic.dsl.sentex.ca) Quit (Remote host closed the connection)
[1:45] * jskinner (~jskinner@173-28-1-197.client.mchsi.com) has joined #ceph
[1:47] * wer_ (~wer@2600:1003:b849:eebe:49d0:5fd6:1a69:c315) Quit (Ping timeout: 480 seconds)
[1:47] * wushudoin_ (~wushudoin@38.140.108.2) Quit (Quit: Leaving)
[1:47] * moore_ (~moore@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Remote host closed the connection)
[1:48] * LeaChim (~LeaChim@host86-132-233-125.range86-132.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:53] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[1:54] * rlrevell (~leer@184.52.129.221) has joined #ceph
[1:54] * oms101 (~oms101@p20030057EA084600C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:55] * yguang11_ (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:56] * rlrevell (~leer@184.52.129.221) has left #ceph
[2:03] * oms101 (~oms101@p20030057EA079000C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:04] * haomaiwa_ (~haomaiwan@183.206.168.253) has joined #ceph
[2:04] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[2:07] * as0bu (~as0bu@c-98-230-203-84.hsd1.nm.comcast.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[2:07] * haomaiwang (~haomaiwan@183.206.168.253) Quit (Ping timeout: 480 seconds)
[2:08] * Debesis_ (~0x@5.254.46.84.mobile.mezon.lt) Quit (Read error: Connection reset by peer)
[2:08] * Debesis_ (~0x@5.254.46.84.mobile.mezon.lt) has joined #ceph
[2:19] * johanni (~johanni@173.226.103.101) Quit ()
[2:26] * earthrocker (~zz@135.26.57.201) has joined #ceph
[2:28] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[2:29] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:29] * haomaiwa_ (~haomaiwan@183.206.168.253) Quit (Remote host closed the connection)
[2:30] * earthrocker (~zz@135.26.57.201) Quit (Quit: come chat with us @ http://robothive.irc.so | robothive.irc.so 6667 #robothive)
[2:34] * midnightrunner (~midnightr@216.113.160.71) Quit (Ping timeout: 480 seconds)
[2:36] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:37] * mattronix_ (~quassel@mail.mattronix.nl) has joined #ceph
[2:38] * mattronix (~quassel@mail.mattronix.nl) Quit (Ping timeout: 480 seconds)
[2:43] * yanzheng (~zhyan@182.139.21.245) has joined #ceph
[2:51] * ira (~ira@208.217.184.210) Quit (Ping timeout: 480 seconds)
[2:56] * yguang11_ (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[2:56] * puffy (~puffy@216.207.42.144) Quit (Quit: Leaving.)
[2:58] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:01] * Debesis_ (~0x@5.254.46.84.mobile.mezon.lt) Quit (Ping timeout: 480 seconds)
[3:04] * haomaiwang (~haomaiwan@218.94.96.134) has joined #ceph
[3:11] * zacbri (~zacbri@glo44-5-88-164-16-77.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[3:14] * zacbri (~zacbri@glo44-5-88-164-16-77.fbx.proxad.net) has joined #ceph
[3:15] * Alssi_ (~Alssi@114.111.60.56) Quit (Remote host closed the connection)
[3:17] * Mika_c (~Mk@122.146.93.152) has joined #ceph
[3:18] * haomaiwang (~haomaiwan@218.94.96.134) Quit (Ping timeout: 480 seconds)
[3:22] * jskinner (~jskinner@173-28-1-197.client.mchsi.com) Quit (Quit: Leaving...)
[3:22] * dyasny (~dyasny@198.251.58.23) Quit (Ping timeout: 480 seconds)
[3:25] * cloud_vision (~cloud_vis@bzq-79-180-29-82.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[3:27] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[3:32] * georgem (~Adium@23-91-150-96.cpe.pppoe.ca) has joined #ceph
[3:34] * jclm (~jclm@ip-64-134-187-212.public.wayport.net) Quit (Ping timeout: 480 seconds)
[3:34] * kefu (~kefu@114.92.125.213) has joined #ceph
[3:41] <jidar> client io 5583 kB/s rd, 509 MB/s wr, 397 op/s
[3:41] <jidar> yay!
[3:42] <jidar> thanks for all the help from you guys, and dealing with the weirdness of the crushmap on rhel
[3:44] * wschulze1 (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:49] * georgem (~Adium@23-91-150-96.cpe.pppoe.ca) Quit (Quit: Leaving.)
[3:49] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[3:54] * zhaochao (~zhaochao@111.161.77.241) has joined #ceph
[4:05] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) has joined #ceph
[4:11] * KevinPerks (~Adium@2606:a000:80ad:1300:316f:d3e:da99:54d1) Quit (Quit: Leaving.)
[4:14] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[4:15] * kefu (~kefu@114.92.125.213) has joined #ceph
[4:17] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[4:17] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[4:18] * haomaiwang (~haomaiwan@218.94.96.134) has joined #ceph
[4:20] * zaitcev (~zaitcev@2001:558:6001:10:61d7:f51f:def8:4b0f) Quit (Quit: Bye)
[4:28] * bobrik___________ (~bobrik@83.243.64.45) Quit (Ping timeout: 480 seconds)
[4:28] * bobrik___________ (~bobrik@83.243.64.45) has joined #ceph
[4:29] * bkopilov (~bkopilov@bzq-79-183-58-206.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[4:32] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[4:35] * vbellur (~vijay@107-1-123-195-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[4:42] * cfreak200 (andi@p4FF3E199.dip0.t-ipconnect.de) has joined #ceph
[4:51] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[5:08] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[5:15] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[5:19] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[5:20] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[5:21] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[5:24] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:32] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[5:35] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[5:37] * squ (~Thunderbi@46.109.36.167) has joined #ceph
[5:39] * MACscr1 (~Adium@2601:247:4102:c3ac:a023:ab4e:88c5:7062) Quit (Quit: Leaving.)
[5:45] * Vacuum_ (~Vacuum@88.130.210.84) has joined #ceph
[5:45] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[5:47] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[5:49] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[5:51] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[5:52] * Vacuum__ (~Vacuum@i59F796E7.versanet.de) Quit (Ping timeout: 480 seconds)
[5:59] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[6:01] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[6:11] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[6:13] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[6:29] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[6:29] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[6:37] * b0e (~aledermue@p5083D1EF.dip0.t-ipconnect.de) has joined #ceph
[6:40] * sjm (~sjm@49.32.0.193) has joined #ceph
[6:41] * linjan (~linjan@213.8.240.146) has joined #ceph
[6:51] * b0e (~aledermue@p5083D1EF.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:58] * biowhb (~wanghongb@58.222.226.130) has joined #ceph
[6:59] * as0bu (~as0bu@c-98-230-203-84.hsd1.nm.comcast.net) has joined #ceph
[6:59] * biowhb (~wanghongb@58.222.226.130) has left #ceph
[7:01] * MACscr (~Adium@2601:247:4102:c3ac:2db2:df88:e19a:558f) has joined #ceph
[7:01] * lucas1 (~Thunderbi@218.76.52.64) Quit (Remote host closed the connection)
[7:02] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[7:05] * sjm (~sjm@49.32.0.193) Quit (Quit: Leaving.)
[7:05] * sjm (~sjm@49.32.0.193) has joined #ceph
[7:06] * cooldharma06 (~chatzilla@14.139.180.40) has joined #ceph
[7:06] * yguang11 (~yguang11@12.31.82.125) Quit (Remote host closed the connection)
[7:07] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[7:09] * linjan (~linjan@213.8.240.146) has joined #ceph
[7:14] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[7:15] * yguang11 (~yguang11@12.31.82.125) Quit (Ping timeout: 480 seconds)
[7:17] * as0bu (~as0bu@c-98-230-203-84.hsd1.nm.comcast.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:17] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:20] * oblu (~o@62.109.134.112) has joined #ceph
[7:23] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[7:25] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[7:25] * b0e (~aledermue@p5083D1EF.dip0.t-ipconnect.de) has joined #ceph
[7:26] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[7:29] * kefu (~kefu@114.92.125.213) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:29] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[7:31] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:32] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[7:33] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[7:34] * dgurtner (~dgurtner@178.197.233.213) has joined #ceph
[7:35] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[7:36] * b0e (~aledermue@p5083D1EF.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[7:38] * cooldharma06 (~chatzilla@14.139.180.40) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 21.0/20130515140136])
[7:39] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:41] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[7:43] * trociny (~mgolub@93.183.239.2) Quit (Read error: No route to host)
[7:52] * shohn (~shohn@dslb-178-002-076-138.178.002.pools.vodafone-ip.de) has joined #ceph
[7:54] * trociny (~mgolub@93.183.239.2) has joined #ceph
[7:55] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) Quit (Read error: No route to host)
[7:55] * ConSi (consi@jest.pro) has joined #ceph
[7:55] <ConSi> Hi!
[7:55] * kefu (~kefu@114.92.125.213) has joined #ceph
[7:55] <ConSi> I've only 3 nodes, and I want to do it completely fail-safe
[7:55] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[7:56] <ConSi> Is it good to place a mon on all 3 nodes
[7:56] <ConSi> and also osd's, and mount ceph as a local mount point on every node?
[7:56] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) has joined #ceph
[7:57] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[7:57] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) Quit (Read error: Connection reset by peer)
[7:58] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) has joined #ceph
[7:58] * kefu (~kefu@114.92.125.213) has joined #ceph
[7:59] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:00] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:03] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[8:05] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit ()
[8:05] <redf_> ConSi ;p
[8:06] <ConSi> redf_: hi [;
[8:06] <redf_> at least 1 mon (spof) or uneven number of mons
[8:06] <ConSi> redf_: Yes, for sure, but I want to place mountpoint of ceph, osds and mon on all nodes
[8:07] <redf_> proxmox ?
[8:07] <ConSi> redf_: nope
[8:07] <ConSi> redf_: application now uses nfs as shared storage provided by netapp
[8:07] <ConSi> but in new datacenter we are trying to not spend money on netapp :>
[8:08] <redf_> :)
[8:08] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) Quit (Ping timeout: 480 seconds)
[8:08] <redf_> there is some warning about using osd and mounting it on same node
[8:09] <redf_> not sure about the reason for it
[8:09] <ConSi> Yes I know, thus I'm asking why
[8:09] <redf_> if you mean that
[8:09] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) has joined #ceph
[8:10] <redf_> well, i believe cephfs isnt really production ready and for rbd you could use, in the worst case, a small vm
[8:10] <ConSi> and I'm really confused with decision which filesystem I should use
[8:10] <ConSi> because I have equal disks, I can use also glusterfs
[8:11] <ConSi> Because it does not need metadata server
[8:11] <ConSi> But well, disperse module is not really mature yet :/
[8:12] <redf_> so, you want to put an nfs share on cephfs, did i get it right?
[8:12] <ConSi> Yes
[8:12] <ConSi> Let me describe that
[8:12] <ConSi> 3 nodes with 800GB of ssd space, 10G lan
[8:13] <redf_> there was some nfs sw with ceph support, but myself never played with it
[8:13] <ConSi> every node has lxc containers with application
[8:13] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[8:13] <ConSi> that uses shared storage to exchange data within components
[8:13] * kypto (~oftc-webi@idp01webcache6-z.apj.hpecore.net) has joined #ceph
[8:13] <kypto> i have used ceph-deploy to create a 4 node cluster for RBD which is working fine. Now can i use the same cluster to create object storage?
[8:13] <ConSi> for now is a /d1/ mountpoint provided by nfs
[8:14] <kypto> if i have 2 TB OSD disks can i split 1 TB for rbd and the rest for the object store
[8:14] <ConSi> And plan is to put /d1/ and /var/lib/lxc on shared storage
[8:14] * kefu (~kefu@114.92.125.213) has joined #ceph
[8:14] <ConSi> provided by ceph or gluster I'm just not sure now
[8:14] * nsoffer (~nsoffer@bzq-84-111-112-230.cablep.bezeqint.net) has joined #ceph
[8:15] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[8:15] <redf_> http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
[8:15] <redf_> https://github.com/nfs-ganesha/nfs-ganesha/wiki
[8:15] * overclk (~overclk@121.244.87.117) has joined #ceph
[8:15] <ConSi> redf_: but If filesystem will be ok, I don't want to mount it by nfs anymore
[8:15] <redf_> maybe those 2 links will provide some insight into ceph/nfs stuff
[8:15] <ConSi> just mount -t cephfs /d1/
[8:16] <ConSi> so I can forget forever about nfs ;)
[8:16] <redf_> some ppl are reporting issues with cephfs, some dont
[8:16] <redf_> afaik it depends on use scenario
[8:16] <redf_> anyway, it just means... imo, it isnt production ready
[8:17] <redf_> look at those 2 links
[8:17] <ConSi> redf_: well, some big players are using ceph already, ask wujek ;]
[8:18] <redf_> ceph isnt cephs only
[8:18] <redf_> using rbd mysefl
[8:18] <ConSi> I'm just curious before I deploy this on production
[8:18] * trociny (~mgolub@93.183.239.2) Quit (Read error: Connection reset by peer)
[8:18] <ConSi> redf_: do You use proxmox with rbd ?
[8:18] <redf_> y
[8:19] <ConSi> any failover tests, broken osds or sth ?
[8:19] * dgurtner (~dgurtner@178.197.233.213) Quit (Read error: Connection reset by peer)
[8:19] <redf_> some simple ceph tests with dead osd
[8:20] <redf_> didnt test anything more
[8:20] <ConSi> redf_: It's a new proxmox with zfs?
[8:20] <ConSi> redf_: what provides osd for ceph
[8:20] <redf_> as long as data is there i dont really care about ha, i mean it is the same shi... thing as with xen and iscsi
[8:20] <ConSi> raw disks without raid or zvol
[8:21] <redf_> raw disk
[8:21] <redf_> 4 osd + 1 journal ssd
[8:21] <redf_> per node
[8:22] <redf_> yes, running ceph on hypervisor
[8:22] <ConSi> so when it comes to proxmox I must add separate disks in raid for hypervisor
[8:23] <ConSi> shit :/
[8:23] * Sysadmin88 (~IceChat77@2.124.164.69) Quit (Quit: When the chips are down, well, the buffalo is empty)
[8:24] <redf_> you can keep the raid controller but put the hdds into passthru mode or raid0 or whatever it is called
[8:24] <ConSi> redf_: Yes I know, but nodes are 1RU servers with 8 SSDs
[8:24] <ConSi> 400GB each
[8:24] <ConSi> and, well, where to put proxmox to not waste space
[8:25] * dgurtner (~dgurtner@178.197.233.213) has joined #ceph
[8:25] <redf_> you can use some other boxes to create ceph cluster and then move osds
[8:26] <ConSi> redf_: Its not that simple
[8:26] <ConSi> redf_: I've only 3 nodes on that colocation
[8:26] <redf_> vm? crazy but doable ;)
[8:27] <ConSi> Well, I'm not that crazy ;D
[8:27] <ConSi> rather I would use lvm lv to osd
[8:27] <ConSi> on raid5 created from all ssds
[8:27] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[8:27] * dgurtner (~dgurtner@178.197.233.213) Quit (Read error: Connection reset by peer)
[8:27] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Read error: Connection reset by peer)
[8:28] <ConSi> I'm thinking about replacing 6 of ssds to normal 2TB hard drives
[8:28] <ConSi> put 2 ssds in raid1
[8:28] <ConSi> install proxmox on them, create LV for journal
[8:28] <ConSi> and put osds on normal hdds
[8:29] <redf_> it doesnt really make much sense to create any kind of raid for ceph
[8:29] <redf_> there was some blog
[8:29] <ConSi> redf_: but raid is for proxmox
[8:29] <ConSi> not for ceph
[8:29] <redf_> about raid1 for journal ssd
[8:29] <redf_> ah ok
[8:29] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[8:29] <ConSi> and because I will have plenty of free space on proxmox VG
[8:30] <ConSi> thus I'll do a lv for journal from it
[8:31] <redf_> got it, just check before if ceph speaks lvm well, there could be some issues with the deploy tools
[8:33] * yguang11 (~yguang11@12.31.82.125) Quit (Remote host closed the connection)
[8:33] <ConSi> redf_: what about rebooting all nodes
[8:34] <ConSi> redf_: what will happen if one of the nodes boots slightly faster than the others
[8:34] <redf_> ad1 google for ceph noout
[8:34] <redf_> ad2 same as 1
[8:35] <redf_> :)
[8:35] <redf_> http://www.sebastien-han.fr/blog/2012/08/17/ceph-storage-node-maintenance/
[8:35] <redf_> read this guy blog
[8:35] <redf_> some really nice stuff there
[8:37] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[8:38] <ConSi> quick tests show that I can't use an lv
[8:38] <ConSi> :/
[8:38] <redf_> but if you need to reboot the whole px cluster... myself i would go with shutting down/pausing the vms first, so no changes to rbds are made, set noout and just reboot
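
    A sketch of the noout dance referred to above, using the standard ceph CLI:

        ceph osd set noout      # stop slow-booting OSDs from being marked out and triggering rebalancing
        # ... reboot the nodes ...
        ceph osd unset noout    # restore normal behaviour once all OSDs are back up and in
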
[8:39] * kefu (~kefu@114.92.125.213) has joined #ceph
[8:39] <redf_> there is some manual way to prepare osd
[8:39] <redf_> didnt try it
[8:39] <ConSi> manual tasks on proxmox
[8:39] <ConSi> are like walking on thin ice You know :>
[8:39] <redf_> no no
[8:39] <redf_> i didnt use proxmox stuff to deploy ceph
[8:40] <redf_> i went with ceph wiki and then copied config to proxmox
[8:40] <ConSi> So what about proxmox upgrades
[8:41] <ConSi> From my experience anything that's not done the one and only preferred way
[8:41] <redf_> nothing?
[8:41] <ConSi> Will give you serious problems after an upgrade
[8:42] <redf_> kvm is reading ceph.conf and connects to ceph cluster
[8:42] <redf_> proxmox has almost nothing to do with ceph
[8:42] <redf_> proxmox only tells kvm which rbd image to use
[8:43] <redf_> it is kinda hard to fuck this up ;)
[8:43] <ConSi> Well, You don't use all of the ceph monitoring things in proxmox gui :>?
[8:44] <redf_> scsi0: vm-rbd:vm-104-disk-1,cache=writeback,discard=on,size=10G
[8:44] <redf_> this stuff needs only ceph.conf
[8:44] <ConSi> redf_: ok, cool, 4 osds 1 journal ssd per node
[8:44] <ConSi> how many nodes do You have?
[8:45] <redf_> at the moment 2 only, still moving away from xenserver
[8:45] <redf_> target 4
[8:45] <ConSi> redf_: 10G connected, or 1G ?
[8:45] <redf_> 2x 1
[8:45] <ConSi> redf_: what about performance on VM ?
[8:45] <ConSi> can You run hdparm -Tt for me ?:)
[8:46] <redf_> /dev/sda:
[8:46] <redf_> Timing cached reads: 13616 MB in 2.00 seconds = 6820.50 MB/sec
[8:46] <redf_> Timing buffered disk reads: 240 MB in 3.01 seconds = 79.74 MB/sec
[8:46] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[8:47] <redf_> got sata disks only
[8:47] <redf_> so almost native perf
[8:48] <ConSi> And it's 1G lan, sounds nice
[8:48] <redf_> well with 2 nodes there isnt much happening on lan
[8:49] <redf_> much ram avail for read caching
[8:50] <redf_> ok
[8:50] <redf_> g2g, im late
[8:50] <redf_> again ;)
[8:50] <ConSi> redf_: well, the complete application consists of 8 types of jboss instances, coldfusion, oracle and some other shitty weird stuff
[8:50] <ConSi> I must research a lot
[8:50] <ConSi> to not put myself into some shit.
[8:52] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[8:52] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[8:55] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[8:56] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:57] <ConSi> Ahhh, now I see :DD
[8:57] <ConSi> everything is possible, I didn't realize that ceph doesn't really use raw device
[8:59] * dgurtner (~dgurtner@178.197.231.51) has joined #ceph
[9:00] * Concubidated (~Adium@199.119.131.10) Quit (Quit: Leaving.)
[9:04] * trociny (~mgolub@93.183.239.2) has joined #ceph
[9:05] * kefu_ (~kefu@114.92.125.213) has joined #ceph
[9:05] * kefu (~kefu@114.92.125.213) Quit (Read error: Connection reset by peer)
[9:05] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[9:06] <Be-El> hi
[9:07] * cloud_vision (~cloud_vis@bzq-79-180-29-82.red.bezeqint.net) has joined #ceph
[9:07] * kefu_ (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[9:09] * kefu (~kefu@114.92.125.213) has joined #ceph
[9:09] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[9:10] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:13] * magicboiz (~magicboiz@2a01:7d00:501:c000::1a0b) Quit (Quit: Leaving)
[9:16] * sleinen (~Adium@2001:8a8:3800:2::37) has joined #ceph
[9:16] * sleinen (~Adium@2001:8a8:3800:2::37) Quit ()
[9:19] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:27] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:30] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[9:32] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[9:33] * nsoffer (~nsoffer@bzq-84-111-112-230.cablep.bezeqint.net) Quit (Ping timeout: 480 seconds)
[9:33] <SamYaple> hello Be-El
[9:34] * jordanP (~jordan@213.215.2.194) has joined #ceph
[9:34] * fdmanana__ (~fdmanana@bl13-144-168.dsl.telepac.pt) has joined #ceph
[9:41] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[9:42] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:44] * dis (~dis@109.110.66.238) Quit (Ping timeout: 480 seconds)
[9:46] * analbeard (~shw@support.memset.com) has joined #ceph
[9:47] * kefu_ (~kefu@114.92.125.213) has joined #ceph
[9:51] * hlkv6-59569 (~hlk@pumba.kramse.dk) has joined #ceph
[9:51] <hlkv6-59569> hello all
[9:51] <hlkv6-59569> I am trying to get a ceph up and running, using the https://github.com/ceph/ceph-ansible - but have problems with the ansible groups
[9:52] <hlkv6-59569> the example uses some Vagrant feature ansible.groups - but I have trouble translating this into our real ansible environment (non-vagrant, but VMware)
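
    A hedged sketch of a plain (non-Vagrant) inventory for ceph-ansible, assuming the group
    names the upstream playbooks expected at the time (mons/osds/mdss); hostnames are placeholders:

        [mons]
        ceph-mon1
        ceph-mon2
        ceph-mon3

        [osds]
        ceph-osd1
        ceph-osd2
        ceph-osd3

        [mdss]
        ceph-mds1
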
[9:54] * kefu (~kefu@114.92.125.213) Quit (Ping timeout: 480 seconds)
[10:00] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) has joined #ceph
[10:00] * sleinen (~Adium@2001:8a8:3800:2::37) has joined #ceph
[10:01] * Flynn (~stefan@ip-81-30-69-189.fiber.nl) Quit (Quit: Flynn)
[10:02] * sleinen (~Adium@2001:8a8:3800:2::37) Quit ()
[10:02] * redf_ (~red@chello084112110034.11.11.vie.surfer.at) Quit (Read error: Connection reset by peer)
[10:02] * red (~red@chello084112110034.11.11.vie.surfer.at) has joined #ceph
[10:07] * dis (~dis@109.110.66.238) has joined #ceph
[10:09] * kefu_ (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[10:10] * kefu (~kefu@114.92.125.213) has joined #ceph
[10:11] * nsoffer (~nsoffer@bzq-79-182-131-63.red.bezeqint.net) has joined #ceph
[10:13] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[10:14] * kefu (~kefu@114.92.125.213) has joined #ceph
[10:20] * sjm (~sjm@49.32.0.193) Quit (Ping timeout: 480 seconds)
[10:21] * sjm (~sjm@49.32.0.193) has joined #ceph
[10:22] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[10:25] * fsimonce (~simon@host253-71-dynamic.3-87-r.retail.telecomitalia.it) has joined #ceph
[10:28] * flisky (~Thunderbi@106.39.60.34) Quit (Quit: flisky)
[10:31] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) Quit (Quit: http://ifup.org)
[10:34] * philips (~philips@ec2-54-196-103-51.compute-1.amazonaws.com) has joined #ceph
[10:36] * zack_dolby (~textual@98.170.130.210.bn.2iij.net) has joined #ceph
[10:37] * Inflatablewoman (~Inflatabl@host-93-104-248-34.customer.m-online.net) has joined #ceph
[10:39] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[10:40] * kefu (~kefu@114.92.125.213) has joined #ceph
[10:43] * dgurtner (~dgurtner@178.197.231.51) Quit (Ping timeout: 480 seconds)
[10:45] <Inflatablewoman> Hi, can someone tell me what size journal disk I would need when I have 12x6000GB OSDs. I have pencilled in 480GB, is that sufficient?
[10:45] <Inflatablewoman> it's an SSD drive
[10:45] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:45] <jeroenvh> that does depend on your usage patterns
[10:46] <Inflatablewoman> principal use is behind s3 object storage
[10:46] * jluis (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[10:46] * ChanServ sets mode +o jluis
[10:47] <jeroenvh> yeah but 6 TB disks
[10:47] <jeroenvh> what kind of 6 TB disks
[10:47] * fmardini (~fmardini@213.61.152.126) has joined #ceph
[10:47] <jeroenvh> and what sync interval are you using
[10:47] <Inflatablewoman> default?
[10:47] <Inflatablewoman> sorry, ceph newb here
[10:48] <Inflatablewoman> 6TB disks will be regular spinning disk, without RAID
[10:50] <Inflatablewoman> the documentation doesn't specify how big a partition is needed of the SSD
[10:50] <Inflatablewoman> http://ceph.com/docs/master/start/hardware-recommendations/
[10:50] <Inflatablewoman> for the journal
[10:50] <jeroenvh> The journal size should be at least twice the product of the expected drive speed multiplied by filestore max sync interval.
[10:50] <jeroenvh> so
[10:51] <Inflatablewoman> What is a typical max sync value?
[10:51] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:51] <jeroenvh> default is 5 secs I think
[10:51] <Inflatablewoman> great, thanks.
[10:52] <jeroenvh> so if your disks can write 120 MBps (sequential), you should do that times 2 (240 MB) times 5 (1200 MB)
[10:52] <jeroenvh> times 12 disks = around 14 GB
[10:52] * joao (~joao@249.38.136.95.rev.vodafone.pt) Quit (Ping timeout: 480 seconds)
[10:52] <jeroenvh> so 480 GB is overkill :P
[10:52] <Inflatablewoman> nice
[10:52] <Inflatablewoman> not my money ;)
[10:52] <Inflatablewoman> :D
[10:53] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[10:53] <Inflatablewoman> thanks for your help.
[10:53] <Inflatablewoman> I'll go talk to some people about this. Cheers! very helpful, have a nice day!
[10:53] * fmardini (~fmardini@213.61.152.126) Quit (Read error: Connection reset by peer)
[10:54] * nsoffer (~nsoffer@bzq-79-182-131-63.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[10:54] * dgurtner (~dgurtner@178.197.231.51) has joined #ceph
[10:54] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[10:55] <jeroenvh> but better is to use two smaller SSDs instead of one big one
[10:55] <jeroenvh> and split the journals across the SSDs
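
    Working the rule of thumb through with the numbers above: journal size = 2 x expected
    throughput x filestore max sync interval = 2 x 120 MB/s x 5 s = 1200 MB per OSD, i.e.
    roughly 14.4 GB across 12 OSDs. In ceph.conf the value is given in MB:

        [osd]
        osd journal size = 1200    ; per-journal size in MB (2 * 120 MB/s * 5 s)
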
[10:56] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[10:57] * kefu (~kefu@114.92.125.213) has joined #ceph
[10:59] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:00] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[11:00] * sleinen (~Adium@2001:8a8:3800:2::37) has joined #ceph
[11:01] * sleinen1 (~Adium@vpn-ho-b-132.switch.ch) has joined #ceph
[11:02] * nsoffer (~nsoffer@bzq-109-66-48-78.red.bezeqint.net) has joined #ceph
[11:09] * sleinen (~Adium@2001:8a8:3800:2::37) Quit (Ping timeout: 480 seconds)
[11:09] * Flynn (~stefan@93.191.0.237) has joined #ceph
[11:10] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[11:10] * sleinen1 (~Adium@vpn-ho-b-132.switch.ch) Quit (Ping timeout: 480 seconds)
[11:14] * nsoffer (~nsoffer@bzq-109-66-48-78.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[11:17] * kypto (~oftc-webi@idp01webcache6-z.apj.hpecore.net) Quit (Quit: Page closed)
[11:18] * shylesh__ (~shylesh@121.244.87.118) has joined #ceph
[11:20] * Debesis_ (~0x@5.254.46.84.mobile.mezon.lt) has joined #ceph
[11:21] * vbellur (~vijay@107-1-123-195-ip-static.hfc.comcastbusiness.net) has joined #ceph
[11:25] * zack_dolby (~textual@98.170.130.210.bn.2iij.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:29] * haomaiwang (~haomaiwan@218.94.96.134) Quit (Remote host closed the connection)
[11:30] * sleinen (~Adium@2001:8a8:3800:2::37) has joined #ceph
[11:31] * sleinen1 (~Adium@2001:620:0:82::10e) has joined #ceph
[11:33] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[11:38] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[11:38] * entropicD (~d@46-126-18-124.dynamic.hispeed.ch) has joined #ceph
[11:38] * sleinen (~Adium@2001:8a8:3800:2::37) Quit (Ping timeout: 480 seconds)
[11:40] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) Quit (Quit: Leaving)
[11:42] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[11:43] <entropicD> hi, i have calamari trying to connect to 172.16.79.128 on port 7002, and have noticed it is hard-coded in cthulu/manager/notifier.py... is this normal?
[11:44] * Inflatablewoman (~Inflatabl@host-93-104-248-34.customer.m-online.net) Quit (Ping timeout: 480 seconds)
[11:47] * Debesis__ (~0x@5.254.46.84.mobile.mezon.lt) has joined #ceph
[11:47] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit (Ping timeout: 480 seconds)
[11:50] * Debesis__ is now known as Debesis
[11:54] * Debesis_ (~0x@5.254.46.84.mobile.mezon.lt) Quit (Ping timeout: 480 seconds)
[11:54] * sleinen (~Adium@2001:8a8:3800:2::37) has joined #ceph
[11:55] <entropicD> ok, i found this: http://tracker.ceph.com/issues/10280
[11:55] * Mika_c (~Mk@122.146.93.152) Quit (Quit: Konversation terminated!)
[11:55] * sleinen2 (~Adium@vpn-ho-b-130.switch.ch) has joined #ceph
[11:57] * Debesis (~0x@5.254.46.84.mobile.mezon.lt) Quit (Quit: Leaving)
[11:57] * Debesis (~0x@5.254.46.84.mobile.mezon.lt) has joined #ceph
[11:57] * Flynn (~stefan@93.191.0.237) Quit (Quit: Flynn)
[11:58] * sleinen1 (~Adium@2001:620:0:82::10e) Quit (Ping timeout: 480 seconds)
[11:59] * sleinen1 (~Adium@vpn-ho-b-131.switch.ch) has joined #ceph
[12:00] * sjm (~sjm@49.32.0.193) Quit (Ping timeout: 480 seconds)
[12:03] * Lattyware (~bildramer@tor-exit.squirrel.theremailer.net) has joined #ceph
[12:03] <tuxcrafter> http://paste.debian.net/237548/
[12:03] * sjm (~sjm@49.32.0.193) has joined #ceph
[12:03] * sleinen (~Adium@2001:8a8:3800:2::37) Quit (Ping timeout: 480 seconds)
[12:03] <tuxcrafter> i got ^ 1 pg that is inconsistent
[12:03] <tuxcrafter> how do i figure out what rbd pool and what volume it is used for?
[12:04] <tuxcrafter> and is there a way to see what osd this pg is using
[12:04] <tuxcrafter> so i can maybe tell it from which osd to repair from
[12:05] <tuxcrafter> i can only see that it seems to use osd 1 and 5
[12:05] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[12:05] <tuxcrafter> i know i want to repair it from osd 1
[12:05] <tuxcrafter> as osd 5 had a bad sector
[12:06] * dgurtner (~dgurtner@178.197.231.51) Quit (Read error: Connection reset by peer)
[12:07] * sleinen2 (~Adium@vpn-ho-b-130.switch.ch) Quit (Ping timeout: 480 seconds)
[12:15] * sleinen (~Adium@vpn-ho-d-131.switch.ch) has joined #ceph
[12:19] <entropicD> tux, have you checked the output of "ceph pg dump" ?
[12:19] <entropicD> it reports the osd
[12:20] <entropicD> to know which pool, it depends on the rules in your crushmap
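
    For reference, a sketch of the usual sequence for an inconsistent pg (the pg id 2.1f is a
    placeholder; note that on these releases repair copies from the pg's primary OSD, so check
    which OSD is primary first):

        ceph health detail           # lists the inconsistent pg, e.g. "pg 2.1f is active+clean+inconsistent"
        ceph pg dump | grep ^2.1f    # the up/acting columns show the OSDs holding it (here 1 and 5)
        ceph pg 2.1f query           # the first OSD in "acting" is the primary
        ceph pg repair 2.1f          # re-replicates the objects from the primary's copy
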
[12:21] * sleinen (~Adium@vpn-ho-d-131.switch.ch) Quit (Read error: Connection reset by peer)
[12:23] * sleinen1 (~Adium@vpn-ho-b-131.switch.ch) Quit (Ping timeout: 480 seconds)
[12:30] * haomaiwang (~haomaiwan@183.206.160.253) has joined #ceph
[12:30] * dgurtner (~dgurtner@178.197.231.51) has joined #ceph
[12:30] * kefu (~kefu@114.92.125.213) has joined #ceph
[12:32] * Lattyware (~bildramer@9S0AABAB2.tor-irc.dnsbl.oftc.net) Quit ()
[12:37] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:37] * dis (~dis@109.110.66.238) Quit (Ping timeout: 480 seconds)
[12:52] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[12:53] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[12:54] * kefu (~kefu@107.191.52.248) has joined #ceph
[13:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[13:00] * kanagaraj (~kanagaraj@ip-64-134-64-51.public.wayport.net) has joined #ceph
[13:01] * dis (~dis@109.110.66.238) has joined #ceph
[13:08] * sleinen (~Adium@195.226.4.121) has joined #ceph
[13:09] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:11] * kefu (~kefu@107.191.52.248) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:11] * sleinen1 (~Adium@2001:620:0:82::105) has joined #ceph
[13:16] * zhaochao (~zhaochao@111.161.77.241) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.0.1/20150526223604])
[13:16] * sleinen (~Adium@195.226.4.121) Quit (Ping timeout: 480 seconds)
[13:16] * The1w (~jens@node3.survey-it.dk) has joined #ceph
[13:19] * cooldharma06 (~chatzilla@14.139.180.40) has joined #ceph
[13:22] * cok (~chk@2a02:2350:18:1010:b446:29b0:c47e:fdb6) has joined #ceph
[13:23] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[13:25] <tuxcrafter> https://wiki.ceph.com/Guides/How_To/Benchmark_Ceph_Cluster_Performance
[13:25] <tuxcrafter> im looking at that page and did a lot of tests
[13:25] <tuxcrafter> mostly on the local disks and local ssds
[13:25] <tuxcrafter> before they become an osd
[13:25] <tuxcrafter> but how do i benchmark my osd (disk+ssd-journal) combination
[13:25] <tuxcrafter> i am planning to return the ssd i bought
[13:26] <boolman> i got 405 pg's stuck in stale. How do I fix it?
[13:26] <tuxcrafter> as it is performing at 0.8MB/s with dsync
[13:26] <tuxcrafter> (200 iops)
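
    A few standard ways to benchmark the disk+journal combination once the OSDs exist; the osd
    id, pool name and device path are placeholders, and the dd test is destructive on its target:

        ceph tell osd.0 bench                     # writes ~1 GB through osd.0's journal and filestore
        rados bench -p testpool 30 write -t 16    # cluster-level write benchmark, 16 concurrent ops
        dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync   # raw dsync test of the journal SSD
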
[13:26] * sleinen1 (~Adium@2001:620:0:82::105) Quit (Ping timeout: 480 seconds)
[13:35] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[13:38] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[13:40] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[13:41] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[13:42] * sleinen (~Adium@2001:8a8:3800:2::37) has joined #ceph
[13:43] * sleinen1 (~Adium@vpn-ho-b-132.switch.ch) has joined #ceph
[13:45] * dgurtner (~dgurtner@178.197.231.51) Quit (Ping timeout: 480 seconds)
[13:46] * sleinen2 (~Adium@vpn-ho-d-131.switch.ch) has joined #ceph
[13:47] * sjm (~sjm@49.32.0.193) Quit (Ping timeout: 480 seconds)
[13:47] * sjm (~sjm@49.32.0.193) has joined #ceph
[13:49] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[13:49] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) has joined #ceph
[13:49] * trociny (~mgolub@93.183.239.2) Quit (Ping timeout: 480 seconds)
[13:50] * sleinen (~Adium@2001:8a8:3800:2::37) Quit (Ping timeout: 480 seconds)
[13:51] * kefu (~kefu@114.92.125.213) has joined #ceph
[13:51] * sleinen2 (~Adium@vpn-ho-d-131.switch.ch) Quit (Read error: Connection reset by peer)
[13:52] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[13:52] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) has joined #ceph
[13:53] * Concubidated (~Adium@199.119.131.10) has joined #ceph
[13:53] * dgurtner (~dgurtner@178.197.231.51) has joined #ceph
[13:54] * Concubidated (~Adium@199.119.131.10) Quit ()
[13:54] * sleinen1 (~Adium@vpn-ho-b-132.switch.ch) Quit (Ping timeout: 480 seconds)
[13:55] * i_m1 (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) has joined #ceph
[13:55] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) Quit ()
[13:58] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) has joined #ceph
[14:00] * vbellur (~vijay@107-1-123-195-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[14:05] * i_m1 (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) Quit (Read error: Connection reset by peer)
[14:07] * trociny (~mgolub@93.183.239.2) has joined #ceph
[14:07] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[14:07] * lucas1 (~Thunderbi@218.76.52.64) Quit ()
[14:11] * KevinPerks (~Adium@2606:a000:80ad:1300:f9b4:4367:16bf:fbaf) has joined #ceph
[14:12] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:14] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[14:14] * kanagaraj (~kanagaraj@ip-64-134-64-51.public.wayport.net) Quit (Quit: Leaving)
[14:18] * jordan_ (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[14:19] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[14:20] * kefu (~kefu@114.92.125.213) has joined #ceph
[14:20] * Knuckx (~Spessu@tor-exit-node.7by7.de) has joined #ceph
[14:22] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[14:23] * i_m (~ivan.miro@mail.iicmos.ru) has joined #ceph
[14:23] <anorak> Hi All. I am setting up another test cluster consisting of 7 osds on a single storage node. I have made some changes to the crush map and have divided those osds into 3 or more buckets. After editing and uploading the crush map... all looks good. However, if I restart the storage node... it defaults back to its original crush map. I assume I have to make the crush map persistent somehow. Any ideas?
[14:24] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[14:25] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[14:25] * jordanP (~jordan@213.215.2.194) Quit (Ping timeout: 480 seconds)
[14:26] * dgurtner (~dgurtner@178.197.231.51) Quit (Ping timeout: 480 seconds)
[14:27] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:29] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:30] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[14:31] * jordan_ (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[14:31] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[14:33] <anorak> ok never mind. Found the answer :)
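
    For reference, and as an assumption about what anorak found: the usual cause is OSDs
    re-registering themselves at their default CRUSH location on startup, which can be
    disabled in ceph.conf:

        [osd]
        osd crush update on start = false
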
[14:34] * kefu (~kefu@114.92.125.213) has joined #ceph
[14:35] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[14:36] * marrusl (~mark@nat-pool-rdu-u.redhat.com) has joined #ceph
[14:37] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[14:38] * kefu (~kefu@114.92.125.213) has joined #ceph
[14:39] * kefu (~kefu@114.92.125.213) Quit ()
[14:40] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[14:42] * kefu (~kefu@114.92.125.213) has joined #ceph
[14:43] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[14:44] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[14:46] * kefu (~kefu@114.92.125.213) has joined #ceph
[14:46] <boolman> cd /opt/
[14:46] <boolman> ls
[14:46] <boolman> ops
[14:47] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[14:47] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[14:48] * squ (~Thunderbi@46.109.36.167) Quit (Quit: squ)
[14:49] * kefu (~kefu@li413-226.members.linode.com) has joined #ceph
[14:49] * dgurtner (~dgurtner@178.197.231.51) has joined #ceph
[14:50] * Knuckx (~Spessu@8Q4AABNR4.tor-irc.dnsbl.oftc.net) Quit ()
[14:50] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:55] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[14:56] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:56] * nils__ (~nils@doomstreet.collins.kg) has joined #ceph
[14:57] * mhack (~mhack@nat-pool-bos-u.redhat.com) has joined #ceph
[15:00] * SaintAardvark (~user@saintaardvark-1-pt.tunnel.tserv14.sea1.ipv6.he.net) has left #ceph
[15:00] * dopesong_ (~dopesong@lb1.mailer.data.lt) has joined #ceph
[15:00] * Concubidated (~Adium@129.192.176.66) has joined #ceph
[15:01] * nils_ (~nils@doomstreet.collins.kg) Quit (Ping timeout: 480 seconds)
[15:02] * loicd1 (~loic@193.54.227.109) has joined #ceph
[15:03] * shaunm (~shaunm@74.215.76.114) Quit (Quit: Ex-Chat)
[15:03] * rlrevell (~leer@184.52.129.221) has joined #ceph
[15:03] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[15:05] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[15:05] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[15:06] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[15:06] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:06] * bkopilov (~bkopilov@bzq-79-183-58-206.red.bezeqint.net) has joined #ceph
[15:08] * kefu (~kefu@li413-226.members.linode.com) Quit (Max SendQ exceeded)
[15:09] * kefu (~kefu@li413-226.members.linode.com) has joined #ceph
[15:11] * kanagaraj (~kanagaraj@nat-pool-bos-t.redhat.com) has joined #ceph
[15:13] * capri_on (~capri@212.218.127.222) has joined #ceph
[15:14] * tries_ (~tries__@2a01:2a8:2000:ffff:1260:4bff:fe6f:af91) Quit (Remote host closed the connection)
[15:16] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[15:16] * cdelatte (~cdelatte@2001:1998:860:1001:10bb:f81:31d4:2c4f) has joined #ceph
[15:19] * i_m (~ivan.miro@mail.iicmos.ru) Quit (Quit: Leaving.)
[15:19] * i_m (~ivan.miro@mail.iicmos.ru) has joined #ceph
[15:19] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[15:20] * Hau_MI is now known as HauM1
[15:21] * xarses_ (~xarses@12.10.113.130) Quit (Ping timeout: 480 seconds)
[15:21] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[15:22] * kefu (~kefu@li413-226.members.linode.com) Quit (Max SendQ exceeded)
[15:23] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:23] * kefu (~kefu@li413-226.members.linode.com) has joined #ceph
[15:25] * overclk (~overclk@121.244.87.117) has joined #ceph
[15:26] * yanzheng (~zhyan@182.139.21.245) Quit (Quit: This computer has gone to sleep)
[15:27] * kefu (~kefu@li413-226.members.linode.com) Quit ()
[15:27] * i_m (~ivan.miro@mail.iicmos.ru) Quit (Ping timeout: 480 seconds)
[15:28] * sjm (~sjm@49.32.0.193) Quit (Quit: Leaving.)
[15:28] * daviddcc (~dcasier@LCaen-656-1-144-187.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[15:28] * sjm (~sjm@49.32.0.193) has joined #ceph
[15:28] * yanzheng (~zhyan@182.139.21.245) has joined #ceph
[15:30] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[15:30] * tupper (~tcole@173.38.117.78) has joined #ceph
[15:32] * sleinen (~Adium@195.226.4.121) has joined #ceph
[15:33] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:34] * sleinen1 (~Adium@2001:620:0:82::104) has joined #ceph
[15:37] * dis (~dis@109.110.66.238) Quit (Quit: leaving)
[15:37] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) has joined #ceph
[15:40] * ksperis (~ksperis@46.218.42.103) Quit (Ping timeout: 480 seconds)
[15:40] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[15:40] * sleinen (~Adium@195.226.4.121) Quit (Ping timeout: 480 seconds)
[15:41] * sep (~sep@2a04:2740:1:0:52e5:49ff:feeb:32) Quit (Remote host closed the connection)
[15:41] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) has joined #ceph
[15:41] * daviddcc (~dcasier@LAubervilliers-656-1-16-164.w217-128.abo.wanadoo.fr) has joined #ceph
[15:42] * cok (~chk@2a02:2350:18:1010:b446:29b0:c47e:fdb6) Quit (Quit: Leaving.)
[15:43] <tuxcrafter> so just bought three other ssds for testing
[15:43] * dyasny (~dyasny@198.251.58.23) has joined #ceph
[15:44] <tuxcrafter> im just not sure how to test if the trim support is stable
[15:44] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[15:44] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[15:47] * sep (~sep@2a04:2740:1:0:52e5:49ff:feeb:32) has joined #ceph
[15:48] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[15:48] * sleinen (~Adium@195.226.4.121) has joined #ceph
[15:50] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[15:50] * The1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:54] * sleinen2 (~Adium@vpn-ho-b-132.switch.ch) has joined #ceph
[15:54] * entropicD (~d@46-126-18-124.dynamic.hispeed.ch) Quit (Quit: Ex-Chat)
[15:55] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[15:55] * sleinen1 (~Adium@2001:620:0:82::104) Quit (Ping timeout: 480 seconds)
[15:55] * overclk (~overclk@121.244.87.117) has joined #ceph
[15:56] * sleinen1 (~Adium@2001:620:0:82::104) has joined #ceph
[15:56] * sleinen (~Adium@195.226.4.121) Quit (Ping timeout: 480 seconds)
[15:57] * sjm (~sjm@49.32.0.193) Quit (Quit: Leaving.)
[16:00] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[16:01] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[16:02] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[16:03] * sleinen2 (~Adium@vpn-ho-b-132.switch.ch) Quit (Ping timeout: 480 seconds)
[16:03] * QuantumBeep (~bret@tor-exit.squirrel.theremailer.net) has joined #ceph
[16:03] * thomnico (~thomnico@145.Red-88-3-208.staticIP.rima-tde.net) has joined #ceph
[16:04] * xarses (~xarses@166.175.56.146) has joined #ceph
[16:05] * xarses (~xarses@166.175.56.146) Quit (Read error: Connection reset by peer)
[16:06] * xarses (~xarses@166.175.56.146) has joined #ceph
[16:07] * xarses (~xarses@166.175.56.146) Quit (Remote host closed the connection)
[16:07] * xarses (~xarses@166.175.56.146) has joined #ceph
[16:09] * sleinen1 (~Adium@2001:620:0:82::104) Quit (Ping timeout: 480 seconds)
[16:10] <ska> Can I use both an XFS and CephFs share on same OSD server?
[16:11] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:15] * xarses_ (~xarses@166.175.56.146) has joined #ceph
[16:15] * xarses (~xarses@166.175.56.146) Quit (Read error: Connection reset by peer)
[16:15] * smerz (~ircircirc@37.74.194.90) has joined #ceph
[16:16] * thomnico (~thomnico@145.Red-88-3-208.staticIP.rima-tde.net) Quit (Quit: Ex-Chat)
[16:20] <smerz> during recovery, the recovery i/o displayed in ceph status is not the recovery bandwidth that goes over the network right?
[16:20] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:21] * sjm (~sjm@49.32.0.149) has joined #ceph
[16:23] * sjm (~sjm@49.32.0.149) Quit ()
[16:24] * sjm (~sjm@49.32.0.149) has joined #ceph
[16:24] <ska> Does cephfs live on top of XFS?
[16:25] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[16:25] <ska> Is cephfs created on top of XFS partitions?
[16:25] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:26] <smerz> cephfs is created on top of OSDs. the OSDs in turn have a local filesystem where they store their objects, typically XFS
[16:27] * kefu (~kefu@114.92.125.213) has joined #ceph
[16:27] * thomnico (~thomnico@145.Red-88-3-208.staticIP.rima-tde.net) has joined #ceph
[16:32] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[16:33] * QuantumBeep (~bret@7R2AABS7E.tor-irc.dnsbl.oftc.net) Quit ()
[16:34] <ron-slc> smerz: my observation is the recovery bandwidth is not the network bandwidth, but the measure of a single data unit which has been recovered: 1MB of data is 3MB on backing storage with size=3
[16:34] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[16:34] <smerz> ron-slc, thanks!
[16:34] <ron-slc> That is an eyeball measure, by NO means scientific...
[16:35] <smerz> gotcha ;-)
[16:35] * vata (~vata@207.96.182.162) has joined #ceph
[16:35] * xarses_ (~xarses@166.175.56.146) Quit (Ping timeout: 480 seconds)
[16:41] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[16:42] * beardo (~beardo__@beardo.cc.lehigh.edu) Quit (Remote host closed the connection)
[16:47] * linjan (~linjan@80.179.241.26) has joined #ceph
[16:49] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[16:50] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[16:50] * arbrandes (~arbrandes@177.68.89.195) has joined #ceph
[16:53] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit (Ping timeout: 480 seconds)
[16:54] * xarses (~xarses@172.56.12.251) has joined #ceph
[16:56] * beardo (~beardo__@beardo.cc.lehigh.edu) has joined #ceph
[16:57] <bjornar> I am having some challenges with creating a crush rule for splitting data to multiple datacenters when size is larger than number of datacenters... I would like to then place multiple copies on a single center.. any advice?
[16:59] * DV__ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[17:02] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:03] * thomnico (~thomnico@145.Red-88-3-208.staticIP.rima-tde.net) Quit (Quit: Ex-Chat)
[17:03] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:04] * thomnico (~thomnico@145.Red-88-3-208.staticIP.rima-tde.net) has joined #ceph
[17:05] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[17:08] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[17:09] * haomaiwa_ (~haomaiwan@li596-180.members.linode.com) has joined #ceph
[17:09] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[17:10] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[17:10] <gregsfortytwo> depending on your number of data centers and desired number of copies (eg, if you have 2 DCs and want 3 copies) you can use crush choose rules, select a too-large total number of OSDs, and then let it get trimmed down
[17:10] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[17:11] <bjornar> gregsfortytwo, ok, do you have an example on this?
[17:11] <gregsfortytwo> eg a rule along the lines of "take root", "choose 2 datacenter", "chooseleaf 2 osd", "emit" would generate 4 OSDs in your two datacenters
[17:11] <gregsfortytwo> and then if the size were 3 it would trim that back
[17:11] <bjornar> so say I have four copies, and 2 dc's.. I would like to choose dc1, dc2, dc1, dc2 ..
[17:11] <bjornar> ..but of course 4 different hosts
[17:12] <gregsfortytwo> right, so if you want it to be symmetric it's easy
[17:12] <gregsfortytwo> do the same as above but set size to 4 ;)
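[editor's note: a sketch of the rule gregsfortytwo outlines, in decompiled crushmap syntax; the rule name, ruleset number, and root name are placeholders. It selects 2 datacenters and then 2 hosts (hence 2 OSDs) in each; with pool size 4 you get the symmetric 2+2 layout, with size 3 the fourth selection is simply trimmed off.]
    rule replicated_two_dcs {
        ruleset 1
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
    }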
[17:13] * haomaiwang (~haomaiwan@183.206.160.253) Quit (Ping timeout: 480 seconds)
[17:13] <bjornar> so chooseleaf firstn 2 type datacenter?
[17:14] <bjornar> but what if I want the rule to be more dynamic.. so when I get 4 dc's it will actually spread?
[17:15] <gregsfortytwo> bjornar: no, "chooseleaf firstn 2 type datacenter" would pick two datacenters and take a leaf from each
[17:15] <gregsfortytwo> unfortunately CRUSH doesn't support that kind of dynamism where you want to fill out each bucket as much as possible, and then go back round to them if you can't put all copies separately :(
[17:16] <bjornar> gregsfortytwo, Hrmm.. and no trick around it?
[17:16] <gregsfortytwo> just the one I mentioned where you can oversubscribe
[17:16] <gregsfortytwo> but that only works if you're off-by-one
[17:16] <bjornar> Ah..
[17:16] * as0bu (~as0bu@c-98-230-203-84.hsd1.nm.comcast.net) has joined #ceph
[17:17] <bjornar> That's a shame.. basically means one can't have size larger than the number of dcs for a dc spread
[17:17] <gregsfortytwo> or else you need to configure it more precisely, yeah
[17:18] * thomnico (~thomnico@145.Red-88-3-208.staticIP.rima-tde.net) Quit (Quit: Ex-Chat)
[17:18] <bjornar> I mean.. 3 is fine, and we will soon have 3 dcs ... but when one goes down as well, I would like two copies in one
[17:19] <bjornar> Also wondering how it works when I have two racks called L1 in two different datacenters...
[17:20] <bjornar> will I need to name them dc1-L1 and dc2-L1 ..
[17:21] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[17:21] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[17:22] * kefu (~kefu@114.92.125.213) has joined #ceph
[17:22] * yanzheng (~zhyan@182.139.21.245) Quit (Quit: This computer has gone to sleep)
[17:23] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[17:23] * overclk (~overclk@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:25] * linuxkidd (~linuxkidd@209.163.164.50) has joined #ceph
[17:28] * sjm (~sjm@49.32.0.149) has left #ceph
[17:28] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:30] * yguang11 (~yguang11@2001:4998:effd:600:b09b:896a:1371:2e5a) has joined #ceph
[17:31] * loicd2 (~loic@80.12.63.104) has joined #ceph
[17:32] * kefu (~kefu@114.92.125.213) Quit (Read error: Connection reset by peer)
[17:33] * kefu (~kefu@114.92.125.213) has joined #ceph
[17:34] * overclk (~overclk@121.244.87.124) has joined #ceph
[17:38] * loicd1 (~loic@193.54.227.109) Quit (Ping timeout: 480 seconds)
[17:42] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:44] * scuttlemonkey is now known as scuttle|afk
[17:44] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:45] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[17:47] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:47] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[17:48] * loicd2 (~loic@80.12.63.104) Quit (Ping timeout: 480 seconds)
[17:49] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[17:50] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) has joined #ceph
[17:51] * haomaiwang (~haomaiwan@183.206.168.253) has joined #ceph
[17:51] * moore (~moore@64.202.160.88) has joined #ceph
[17:53] * haomaiwa_ (~haomaiwan@li596-180.members.linode.com) Quit (Ping timeout: 480 seconds)
[17:53] * kefu_ (~kefu@114.92.125.213) has joined #ceph
[17:55] * dis (~dis@109.110.66.238) has joined #ceph
[17:56] * loicd1 (~loic@80.12.63.104) has joined #ceph
[17:57] * loicd1 (~loic@80.12.63.104) Quit ()
[17:58] * kefu (~kefu@114.92.125.213) Quit (Ping timeout: 480 seconds)
[17:59] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[17:59] * ircolle (~Adium@2601:285:201:2bf9:a98a:7772:6e44:f7e4) has joined #ceph
[18:01] * linuxkidd (~linuxkidd@209.163.164.50) Quit (Ping timeout: 480 seconds)
[18:02] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Quit: Konversation terminated!)
[18:03] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[18:04] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) Quit (Quit: I'm going home!)
[18:04] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[18:05] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[18:07] * mhack (~mhack@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:09] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[18:14] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[18:17] * linuxkidd (~linuxkidd@209.163.164.50) has joined #ceph
[18:23] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[18:24] * shang (~ShangWu@42-69-255-45.EMOME-IP.hinet.net) has joined #ceph
[18:24] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) has joined #ceph
[18:24] * shang (~ShangWu@42-69-255-45.EMOME-IP.hinet.net) Quit ()
[18:25] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[18:25] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[18:25] * kefu_ (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[18:26] * kefu (~kefu@114.92.125.213) has joined #ceph
[18:29] * ira (~ira@167.220.23.74) has joined #ceph
[18:30] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:31] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[18:32] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[18:33] * overclk (~overclk@121.244.87.124) Quit (Ping timeout: 480 seconds)
[18:33] * shylesh (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[18:33] * shylesh__ (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[18:34] * gsilvis (~andovan@c-73-159-49-122.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[18:35] * dugravot6 (~dugravot6@2a01:e35:8bbf:4060:71fd:d87:5e8f:e815) has joined #ceph
[18:37] * gsilvis (~andovan@c-73-159-49-122.hsd1.ma.comcast.net) has joined #ceph
[18:37] * dugravot6 (~dugravot6@2a01:e35:8bbf:4060:71fd:d87:5e8f:e815) Quit ()
[18:38] * derjohn_mob (~aj@88.128.80.245) has joined #ceph
[18:42] * scuttle|afk is now known as scuttlemonkey
[18:52] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Ping timeout: 480 seconds)
[18:59] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:59] * irccloud1228 (uid94503@id-94503.ealing.irccloud.com) has joined #ceph
[19:07] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[19:13] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[19:18] <ska> Can I use a dedicated user for ceph cluster communication?
[19:20] * ira (~ira@167.220.23.74) Quit (Ping timeout: 480 seconds)
[19:20] * mgolub (~Mikolaj@91.225.200.223) has joined #ceph
[19:21] <monsted> ska: you can run it as a specific (non-root) user, if that's what you mean
[19:23] <ska> Yes, I have a user "ceph" on each host that can ssh into the others' accounts and has sudo.
[19:24] <ska> Is there a configuration directive that sets this to be used?
[19:24] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[19:24] * derjohn_mob (~aj@88.128.80.245) Quit (Ping timeout: 480 seconds)
[19:26] * mgolub (~Mikolaj@91.225.200.223) Quit (Read error: No route to host)
[19:27] * mgolub (~Mikolaj@91.225.200.223) has joined #ceph
[19:30] * ira (~ira@208.217.184.210) has joined #ceph
[19:31] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[19:32] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[19:32] * kefu (~kefu@114.92.125.213) has joined #ceph
[19:34] * johanni (~johanni@173.226.103.101) has joined #ceph
[19:35] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[19:35] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (Ping timeout: 480 seconds)
[19:41] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[19:41] <monsted> ska: AFAIR, it's just "-u ceph" if you're using ceph-deploy
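[editor's note: a hedged example of what monsted means; ceph-deploy's long form of the option is --username, so a run under ska's dedicated "ceph" account (the hostnames below are placeholders) might look like:]
    ceph-deploy --username ceph new mon1 mon2 mon3
    ceph-deploy --username ceph install mon1 mon2 mon3 osd1 osd2 osd3
    ceph-deploy --username ceph mon create-initial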
[19:42] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[19:42] * bene is now known as bene_in_meeting
[19:42] * trociny (~mgolub@93.183.239.2) Quit (Ping timeout: 480 seconds)
[19:44] * midnightrunner (~midnightr@216.113.160.71) has joined #ceph
[19:47] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[19:47] * dyasny (~dyasny@198.251.58.23) Quit (Ping timeout: 480 seconds)
[19:48] * kefu (~kefu@114.92.125.213) has joined #ceph
[19:50] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[19:51] * kefu (~kefu@114.92.125.213) has joined #ceph
[19:51] * daviddcc (~dcasier@LAubervilliers-656-1-16-164.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[19:52] * johanni (~johanni@173.226.103.101) Quit (Remote host closed the connection)
[19:53] * kefu (~kefu@114.92.125.213) Quit (Max SendQ exceeded)
[19:54] * xarses (~xarses@172.56.12.251) Quit (Remote host closed the connection)
[19:55] * sankarshan (~sankarsha@183.87.39.242) Quit (Read error: Connection reset by peer)
[19:57] * kefu (~kefu@114.92.125.213) has joined #ceph
[19:58] * kefu (~kefu@114.92.125.213) Quit ()
[20:02] * xarses (~xarses@172.56.12.251) has joined #ceph
[20:05] * midnightrunner (~midnightr@216.113.160.71) Quit (Remote host closed the connection)
[20:05] * midnightrunner (~midnightr@216.113.160.71) has joined #ceph
[20:06] <TheSov> today my 6 raspi's are coming in
[20:06] <TheSov> im gonna build a "lab" cluster
[20:07] <monsted> ceph on raspi?
[20:07] <TheSov> well, just the OSDs; I have 3 Supermicro Atom servers to use as monitors
[20:07] * LeaChim (~LeaChim@host86-132-233-125.range86-132.btcentralplus.com) has joined #ceph
[20:08] <TheSov> it would be very nice for a large company of some kind to produce an OSD-type system that you slide a disk into and connect to a 1 gig network
[20:10] <monsted> well, i think seagate is doing that with actual disks
[20:10] * sankarshan (~sankarsha@183.87.39.242) has joined #ceph
[20:10] <TheSov> no thats the kinetic
[20:10] <TheSov> its object based
[20:10] <TheSov> ceph cannot use those
[20:11] <TheSov> and out of all the drives on the market you are going to trust seagate?
[20:11] <monsted> but yeah, a smallish arm device with PoE that you could deploy in some kind of blade chassis?
[20:11] <monsted> oh, no, i'm staying the hell away from seagate :)
[20:11] <TheSov> exactly
[20:12] <TheSov> which is why I am asking why we don't have a single-disk chassis with installable firmware that has ethernet on the back
[20:12] <TheSov> basically thats the role the raspi is filling
[20:12] <monsted> i thought i saw kinetic supporting ceph... or maybe that was swift.
[20:14] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:14] <ircolle> monsted, TheSov - https://wiki.ceph.com/Planning/Blueprints/Giant/osd%3A_create_backend_for_seagate_kinetic
[20:15] <TheSov> ircolle, i would never use seagate
[20:15] <TheSov> hgst has an open version of kinetic
[20:15] <monsted> and they have drives that don't suck :)
[20:16] <TheSov> exactly
[20:16] <ircolle> TheSov - you're entitled to your opinion, I'm just correcting your misstatement of the facts
[20:17] <TheSov> ircolle, what misstatement?
[20:17] <ircolle> TheSov
[20:17] <ircolle> 12:10
[20:17] <ircolle> no thats the kinetic
[20:17] <ircolle> 12:10
[20:17] <ircolle> its object based
[20:17] <ircolle> 12:10
[20:17] <ircolle> ceph cannot use those
[20:17] * rlrevell (~leer@184.52.129.221) has joined #ceph
[20:18] <TheSov> I was correct, it has to have a backend
[20:18] <TheSov> you cannot plug it into ethernet and add a Kinetic as an OSD...
[20:18] * nwf (~nwf@00018577.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:19] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[20:20] <TheSov> that's what monsted and I were talking about, basically a hard disk with an ARM computer on it that connects directly to ceph
[20:20] <TheSov> which is difficult because ceph is updated, so the firmware would need updating
[20:21] * as0bu (~as0bu@c-98-230-203-84.hsd1.nm.comcast.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[20:25] * shohn (~shohn@dslb-178-002-076-138.178.002.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[20:27] * nwf (~nwf@00018577.user.oftc.net) has joined #ceph
[20:31] * b0e (~aledermue@p5083D1EF.dip0.t-ipconnect.de) has joined #ceph
[20:31] * wschulze1 (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[20:35] * b0e (~aledermue@p5083D1EF.dip0.t-ipconnect.de) Quit ()
[20:36] <monsted> TheSov: i'm thinking the device could come in a rack-depth tray that holds like 3-4 drives - however many you can power off a PoE+ port. use a few dhcp flags to specify where it gets deployment info and firmware updates. no need for a node per disk, that just wastes switch ports. ideally, priced so people could run them with single drives if they so choose without it getting silly expensive.
[20:36] <TheSov> you mean like switch rack?
[20:37] <monsted> plain server rack?
[20:37] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:37] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[20:37] <monsted> not sure what you mean :)
[20:38] <TheSov> switch racks are usually 2-post, face-mount devices
[20:38] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[20:38] <TheSov> system racks are 24 inches deep and have 4 posts
[20:39] * dneary (~dneary@70-91-197-134-BusName-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[20:40] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[20:40] <monsted> the rack rail thing could have an option of center mounting so it'd work in a two-post rack
[20:41] <monsted> although that seems like an edge case
[20:41] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[20:41] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) has joined #ceph
[20:44] <monsted> could use the same basic hardware with a shorter case for 1-2 drives and longer case for 3-4 drives
[20:50] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) Quit (Read error: Connection reset by peer)
[20:50] <monsted> 25W (PoE+) is a bit meager for 4 drives, though
[20:51] * nwf (~nwf@00018577.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:53] * dgurtner (~dgurtner@178.197.231.51) Quit (Ping timeout: 480 seconds)
[20:56] <ska> mon addr, should that be set to cluster addr or a public addr, and why?
[20:58] <TheSov> monsted, use low-power drives and it will work
[20:59] <TheSov> public
[20:59] <TheSov> ska, monitors need to be accessible by clients (for auth) and by OSDs
[20:59] <ska> So must be public then.. Ok..
[21:00] <TheSov> yeah the cluster network is osd to osd
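[editor's note: a minimal ceph.conf sketch of the split described above; the subnets and hostname are placeholders. The monitor address sits on the public network so clients and OSDs can reach it, while the cluster network (if set) carries OSD-to-OSD replication and recovery traffic.]
    [global]
    public network = 192.168.1.0/24
    cluster network = 10.0.0.0/24

    [mon.a]
    host = mon1
    mon addr = 192.168.1.11:6789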
[21:00] <TheSov> so yes, today hopefully i will be building my ras pi cluster
[21:01] <TheSov> ceph is available for arm, so adding the repo should just work
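[editor's note: a hedged sketch of "adding the repo" on a Debian-based board, following the upstream instructions of the era; whether ceph.com actually published armhf builds for the hammer release is not confirmed here, so the distribution's own ceph packages may be the fallback.]
    echo deb http://ceph.com/debian-hammer/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install -y ceph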
[21:02] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[21:03] * bene_in_meeting (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[21:03] * arbrandes (~arbrandes@177.68.89.195) Quit (Remote host closed the connection)
[21:03] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) Quit (Quit: Leaving)
[21:04] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit (Ping timeout: 480 seconds)
[21:04] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[21:04] <monsted> hmm, any ideas why "rados put <....>" would just hang and never finish? rados bench reports 0 ops/s too.
[21:09] <monsted> oh, pgs stuck in "creating"
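[editor's note: a sketch of the commands typically used to chase down PGs stuck in creating; not taken from the log.]
    ceph -s                       # overall status, shows how many pgs are stuck creating
    ceph health detail            # lists the stuck pgs and the reason
    ceph pg dump_stuck inactive   # pgs that never went active, which includes creating
    ceph osd tree                 # check the crush hierarchy can actually satisfy the pool's rule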
[21:11] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[21:13] * Concubidated (~Adium@129.192.176.66) Quit (Quit: Leaving.)
[21:13] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[21:17] * `10 (~10@69.169.91.14) Quit (Ping timeout: 480 seconds)
[21:17] * squizzi (~squizzi@nat-pool-rdu-t.redhat.com) has joined #ceph
[21:17] <ska> I'm confused about cephfs and mkcephfs.. Do i mkcephfs on each partition on each OSD?
[21:18] * doppelgrau (~doppelgra@p4FE842CF.dip0.t-ipconnect.de) has joined #ceph
[21:20] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[21:22] * shylesh (~shylesh@123.136.221.56) has joined #ceph
[21:25] * bene_in_meeting (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[21:27] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[21:28] * nsoffer (~nsoffer@bzq-79-180-80-9.red.bezeqint.net) has joined #ceph
[21:29] * Concubidated (~Adium@199.119.131.10) has joined #ceph
[21:30] * Concubidated1 (~Adium@199.119.131.10) has joined #ceph
[21:30] * dopeson__ (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[21:32] * shylesh (~shylesh@123.136.221.56) Quit (Remote host closed the connection)
[21:36] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) has joined #ceph
[21:36] * xarses (~xarses@172.56.12.251) Quit (Remote host closed the connection)
[21:37] * xarses (~xarses@172.56.12.251) has joined #ceph
[21:37] * dopesong_ (~dopesong@lb1.mailer.data.lt) Quit (Ping timeout: 480 seconds)
[21:37] * Concubidated (~Adium@199.119.131.10) Quit (Ping timeout: 480 seconds)
[21:38] * sage (~quassel@2607:f298:6050:709d:4004:c720:f8bb:dac8) Quit (Remote host closed the connection)
[21:38] * dopeson__ (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[21:40] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) has joined #ceph
[21:42] <MACscr> hows the cache pool support these days? any better?
[21:42] * mgolub (~Mikolaj@91.225.200.223) Quit (Quit: away)
[21:44] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit (Quit: Verlassend)
[21:45] * dyasny (~dyasny@173.231.115.59) has joined #ceph
[21:46] * dneary (~dneary@70-91-197-134-BusName-NewEngland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[21:46] * Concubidated1 is now known as Concubidated
[21:46] * dyasny (~dyasny@173.231.115.59) Quit ()
[21:47] * CheKoLyN (~saguilar@bender.parc.xerox.com) has joined #ceph
[21:47] * dyasny (~dyasny@173.231.115.59) has joined #ceph
[21:50] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[21:52] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[21:53] * dopesong (~dopesong@78-56-228-178.static.zebra.lt) has joined #ceph
[21:55] * davidz (~davidz@cpe-23-242-27-128.socal.res.rr.com) Quit (Quit: Leaving.)
[21:59] * delattec (~cdelatte@vlan16nat.mystrotv.com) has joined #ceph
[22:00] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[22:01] * Fapiko (~notarima@tor-exit.squirrel.theremailer.net) has joined #ceph
[22:03] * joshd1 (~jdurgin@66-194-8-225.static.twtelecom.net) has joined #ceph
[22:04] * derjohn_mob (~aj@tmo-110-80.customers.d1-online.com) has joined #ceph
[22:05] * joshd (~jdurgin@206.169.83.146) Quit (Ping timeout: 480 seconds)
[22:06] * cdelatte (~cdelatte@2001:1998:860:1001:10bb:f81:31d4:2c4f) Quit (Ping timeout: 480 seconds)
[22:10] <mongo> The main limiter is the age of the kernels if you use RHEL/CentOS, in my experience.
[22:11] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[22:12] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[22:13] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) Quit (Quit: Leaving)
[22:13] <magicrobotmonkey> is it possible to add a new crush type without de/re compiling the crush map?
[22:14] * nwf (~nwf@00018577.user.oftc.net) has joined #ceph
[22:16] <TheSov> no offense to Red Hat, but their upgrade policy is downright Cro-Magnon
[22:17] <TheSov> I'm not going to reinstall my entire OS for every version upgrade.
[22:17] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[22:17] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[22:22] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[22:25] * delatte (~cdelatte@vlandnat.mystrotv.com) has joined #ceph
[22:26] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:27] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[22:29] * kanagaraj (~kanagaraj@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[22:29] * xarses (~xarses@172.56.12.251) Quit (Remote host closed the connection)
[22:31] * fdmanana__ (~fdmanana@bl13-144-168.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[22:31] * Fapiko (~notarima@5NZAAD1V7.tor-irc.dnsbl.oftc.net) Quit ()
[22:32] * delattec (~cdelatte@vlan16nat.mystrotv.com) Quit (Ping timeout: 480 seconds)
[22:40] * xarses (~xarses@172.56.12.251) has joined #ceph
[22:41] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[22:41] * dopesong_ (~dopesong@lb1.mailer.data.lt) has joined #ceph
[22:44] * dyasny (~dyasny@173.231.115.59) Quit (Ping timeout: 480 seconds)
[22:45] * johanni (~johanni@173.226.103.101) has joined #ceph
[22:47] * dopesong (~dopesong@78-56-228-178.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[22:48] * fdmanana__ (~fdmanana@bl13-144-168.dsl.telepac.pt) has joined #ceph
[22:49] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[22:50] * ngoswami (~ngoswami@1.39.15.230) has joined #ceph
[22:50] <magicrobotmonkey> i just ran `ceph osd crush remove default` and all my mons crashed
[22:51] <TheSov> ...
[22:52] <TheSov> you have a default cluster
[22:52] * dneary (~dneary@70-91-197-131-BusName-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[22:52] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[22:52] * cloud_vision (~cloud_vis@bzq-79-180-29-82.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[22:54] <magicrobotmonkey> i have all my stuff in platter/ssd roots
[22:55] <magicrobotmonkey> anyway, if a command like that is going to cause mons to start segfaulting, maybe it should be caught
[22:55] * xarses (~xarses@172.56.12.251) Quit (Remote host closed the connection)
[22:56] * tw0fish (~tw0fish@UNIX3.ANDREW.CMU.EDU) Quit (Quit: leaving)
[22:57] * tupper (~tcole@173.38.117.78) Quit (Ping timeout: 480 seconds)
[23:00] * linuxkidd (~linuxkidd@209.163.164.50) Quit (Quit: Leaving)
[23:02] <TheSov> this on hammer?
[23:04] * BranchPr1dictor is now known as BranchPredictor
[23:05] * xarses (~xarses@166.175.184.79) has joined #ceph
[23:05] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[23:07] * dneary (~dneary@70-91-197-131-BusName-NewEngland.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[23:08] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[23:13] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[23:17] * marrusl (~mark@nat-pool-rdu-u.redhat.com) Quit (Remote host closed the connection)
[23:17] * `10 (~10@69.169.91.14) has joined #ceph
[23:18] * doppelgrau (~doppelgra@p4FE842CF.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[23:19] * linjan (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[23:20] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:20] * davidzlap (~Adium@206.169.83.146) has joined #ceph
[23:21] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[23:21] * kszarlej (~kszarlej@5.196.174.189) has joined #ceph
[23:22] <kszarlej> hey guys, I want to remove a node from the ceph cluster. All I have to do is lower the replication level
[23:22] <kszarlej> and
[23:22] <kszarlej> delete the osds from crushmap?
[23:26] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[23:28] * joshd1 (~jdurgin@66-194-8-225.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[23:30] * ngoswami (~ngoswami@1.39.15.230) Quit (Ping timeout: 480 seconds)
[23:33] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[23:33] * midnightrunner (~midnightr@216.113.160.71) Quit (Ping timeout: 480 seconds)
[23:36] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[23:37] * ngoswami (~ngoswami@1.39.96.96) has joined #ceph
[23:40] * bene_in_meeting (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[23:52] * cloud_vision (~cloud_vis@bzq-79-180-29-82.red.bezeqint.net) has joined #ceph
[23:53] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[23:58] * fdmanana__ (~fdmanana@bl13-144-168.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[23:59] <ska> how many mds servers can I have on standby?

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.