#ceph IRC Log

Index

IRC Log for 2015-06-14

Timestamps are in GMT/BST.

[0:00] <gleam> as long as you've edited ceph.conf before the journal partition is created, it should use whatever is in the conf file
[0:00] <JohnPreston78> Hey !
[0:00] <JohnPreston78> Hi everyone
[0:00] <gleam> if the partition exists and is smaller than the configured journal size then i think it will error
[0:00] <TheSov> is the ceph.conf synced between ceph mons?
[0:00] <gleam> no, that's up to you
[0:00] <JohnPreston78> Does someone know the "biggest" CEPH cluster deployed in TB / PB ? public numbers ofc
[0:00] <gleam> ceph-deploy will complain if your local (on the deploy node) ceph.conf doesn't match the conf on the remote host though
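The setting gleam is referring to is the journal size in ceph.conf. A minimal sketch, assuming a 10 GB journal (the 10240 value is only an example, not taken from the log):

    [osd]
    # journal size in MB, read when the journal partition is created
    osd journal size = 10240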
[0:01] <TheSov> i know of a 25PB one done by cern
[0:01] <gleam> cern has at least 3PB, i think flickr has at least 3PB, cern has tested upwards of 30PB
[0:01] <JohnPreston78> TheSov: Oh that's cool
[0:01] <gleam> i believe flickr uses multiple independent 3PB clusters
[0:01] <JohnPreston78> CERN, those guys are crazy
[0:01] <gleam> and shard data across them at the application level
[0:01] <gleam> but i might be misremembering
[0:01] <gleam> cern is also almost entirely rbd as far as i know
[0:01] <gleam> which is what most people who ask that question seem to care about
[0:02] <TheSov> JohnPreston78, get this, they didn't use any ssd
[0:02] <TheSov> it's pure rust
[0:03] <JohnPreston78> TheSov: Well I get quite good results without SSD - so far
[0:03] <TheSov> for any serious results u need ssd for your monitors and journals
[0:03] <JohnPreston78> gleam: block is what I am interested in so that's good to know
[0:04] <TheSov> my only issue right now is the fact that if you put 1 journal ssd for 5 rust drives, if the journal fails. you lost 5 osds
[0:04] <JohnPreston78> TheSov: I read somewhere (can't remember the source) that for CEPH Block storage it would be more efficient without journal though. Heard of this ?
[0:04] <TheSov> no
[0:05] <TheSov> i heard the opposite
[0:05] <TheSov> blockstore is where you want ssd journals
[0:05] <JohnPreston78> I heard the opposite for Object Store :D
[0:05] <TheSov> object storage is gonna be fast without it
[0:05] <JohnPreston78> xD I heard the exact opposite TheSov
[0:05] <TheSov> odd
[0:05] <JohnPreston78> damned, it is a trap !
[0:05] <JohnPreston78> mouahah
[0:05] <TheSov> i just read the ceph workup from cern and they wrote that
[0:05] <TheSov> ssd improves it a lot
[0:06] <TheSov> and they use block
[0:06] <TheSov> https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf
[0:07] <TheSov> basically i want to get an NVMe card for 10 osd journals
[0:07] <TheSov> err 12
[0:08] <delaf> i am testing ceph on a small cluster (12 osds, 3 servers, 1 SSD for 4 HDDs). writes work quite well, but i get very poor performance on reading, i do not understand why
[0:08] <TheSov> 12 osd journals
[0:08] <TheSov> how are you reading?
[0:08] <TheSov> are you using the ceph client?
[0:08] <delaf> TheSov: using rbd, reading sequentially one big file
[0:09] <TheSov> yeah but your client how is it connecting to ceph?
[0:09] <TheSov> err nevermind
[0:09] <delaf> the network you mean ?
[0:09] <TheSov> u answered that RBD required the ceph client
[0:09] <Sysadmin88> define 'poor performance' (use numbers)
[0:09] <Sysadmin88> also, what networking do you have?
[0:10] <delaf> read = 30MB/s, write = 110MB/s
[0:10] * Azru (~luigiman@tor00.telenet.unc.edu) has joined #ceph
[0:10] <TheSov> delaf, increase the size of your "max_sectors_kb"
[0:10] <delaf> the network is 1Gb/s
[0:10] <TheSov> wait
[0:10] <Sysadmin88> 110MB/s is 1GbE limit
[0:10] <TheSov> Sysadmin88, 125
[0:11] <TheSov> err 120
[0:11] <Sysadmin88> 110MB is certainly near enough to it that it would be 'good enough' for most
[0:11] <delaf> yes, i got no problem with writes
[0:11] <monsted> 110 is about the maximum useful bandwidth
[0:12] <monsted> there's a bunch of overhead in the various protocols
[0:12] <delaf> TheSov: on which server do I need to change the max_sectors_kb ?
[0:12] <delaf> only on the client ?
[0:12] <TheSov> the client server
[0:12] <TheSov> yes
[0:12] <delaf> or on all the ceph node
[0:12] <delaf> ?
[0:12] <delaf> ok
[0:12] <TheSov> just the client
[0:12] <TheSov> make it big, like 16k
[0:12] <TheSov> 16384
[0:13] <Sysadmin88> how is RAM usage on your storage nodes? and CPU usage?
[0:13] <delaf> 64GB on all servers
[0:13] <delaf> oh sorry
[0:13] <delaf> Sysadmin88: CPU usage very low
[0:13] <TheSov> where are your monitors?
[0:14] <delaf> and ram too
[0:14] <TheSov> delaf, where are your monitors?
[0:14] <delaf> TheSov: for now the monitor are on the 3 servers that have OSDs
[0:14] <TheSov> they do not share a disk correct?
[0:15] <delaf> TheSov: OSDs have dedicated hdd + 1 ssd for journal
[0:15] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[0:15] <JohnPreston78> TheSov: Thanks though for the info
[0:16] <Sysadmin88> delaf, are you using raid groups or 1OSD per disk?
[0:16] <JohnPreston78> TheSov: recently my only concern with CEPH was the IO latency (which is not the same as IOPS)
[0:16] <delaf> 1 OSD per disk
[0:17] <delaf> i got 12 HDD and 3 SSD
[0:17] <Sysadmin88> are your read tests random or sequential?
[0:18] <delaf> TheSov: I do not understand, but I can't put a bigger value than 4096 for max_sectors_kb..
[0:18] <delaf> # echo 16384 > /sys/block/rbd0/queue/max_sectors_kb
[0:18] <delaf> bash: echo: write error: Invalid argument
[0:19] <delaf> if i put 2024 it works, with 4096 too, but no more
[0:19] <delaf> any idea ?
[0:20] <TheSov> that makes no sense
[0:20] <delaf> and with sh : i get something even more strange :
[0:20] <delaf> # echo 16384 > /sys/block/rbd0/queue/max_sectors_kb
[0:20] <delaf> sh: echo: I/O error
[0:22] <TheSov> wait
[0:22] <TheSov> im looking that up now
[0:23] <delaf> i'm using 3.19.0-20-generic kernel on ubuntu trusty
[0:24] <TheSov> it makes no sense it should work
[0:24] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) has joined #ceph
[0:24] <TheSov> put it in quotes?
[0:24] <TheSov> it works for me
[0:25] <delaf> no more luck with quotes (as i thought)
[0:26] <delaf> i will try with another kernel
[0:26] <delaf> i remember i managed to change it on another test cluster based on debian some weeks ago
[0:28] <TheSov> ok apparently going to 16384 is not supported
[0:28] <TheSov> the max supported is 4k
[0:28] <TheSov> 4096
[0:28] <TheSov> try that and see if it goes
[0:29] <delaf> yes 4096 is working
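For reference, a sketch of the tunable being discussed, using the /dev/rbd0 device from the log; the note about max_hw_sectors_kb is an inference, not something confirmed in the channel:

    # the driver/device ceiling; max_sectors_kb cannot be raised above this,
    # which is the likely reason echoing 16384 returned EINVAL
    cat /sys/block/rbd0/queue/max_hw_sectors_kb
    # the current per-request cap, 4096 KB here
    echo 4096 > /sys/block/rbd0/queue/max_sectors_kb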
[0:29] <TheSov> any faster
[0:29] <delaf> and is my default
[0:29] <TheSov> and you get bad read speeds with that?
[0:29] <Sysadmin88> is your read test random or sequential?
[0:29] <delaf> TheSov: yes
[0:29] <delaf> Sysadmin88: sequential
[0:30] <delaf> i tried with : sudo dd if=test.zero of=/dev/null bs=1M
[0:30] <delaf> test.zero is a 900GB file created with dd of=test.zero if=/dev/zero bs=1M (at around 110MB/s)
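One caveat with the dd test above is that it reads through the client page cache and the filesystem. A sketch of read tests that take those out of the picture; iflag=direct and the pool name rbd are assumptions, not from the log:

    # re-read the file with O_DIRECT so the client page cache is bypassed
    sudo dd if=test.zero of=/dev/null bs=4M iflag=direct
    # or benchmark reads at the RADOS level, independent of the rbd client entirely
    rados bench -p rbd 60 write --no-cleanup
    rados bench -p rbd 60 seq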
[0:31] <TheSov> u have ssd caches right
[0:31] <Sysadmin88> what file system you using on your disks? (interested)
[0:31] <delaf> TheSov: yes
[0:31] <TheSov> so thats got your writes covered
[0:31] <TheSov> so reads is all rust
[0:32] <TheSov> do you per chance have shitty sata controllers or something?
[0:32] <delaf> Sysadmin88: /dev/sdc1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,inode64)
[0:32] * dostrow (~dostrow@bunker.bloodmagic.com) has joined #ceph
[0:32] <delaf> Sysadmin88: all my OSDs are like this one
[0:33] <Sysadmin88> i've been meaning to try ZFS on my next test... be interesting to see how the ARC works alongside ceph
[0:34] <Sysadmin88> especially for re-reads
[0:35] <delaf> TheSov: the controller is a Supermicro SMC2208
[0:35] <delaf> (LSI 2208)
[0:35] <TheSov> bizarre
[0:35] <TheSov> Sysadmin88, just make sure you set a max arc size so you get at least a few gigs of ram for the osd
[0:36] <Sysadmin88> indeed :) i love my ZFS NAS, be nice to scale it out a bit
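A sketch of the ARC cap TheSov mentions, for ZFS on Linux as root; the 4 GiB value is arbitrary:

    # runtime, value in bytes
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
    # persistent across reboots
    echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf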
[0:37] <delaf> Sysadmin88: one thing i don't like at all with zfs is that performance gets poor if you use more than 80% of the space
[0:37] <Sysadmin88> i can still throw 400-500MB/s with 80-85% capacity
[0:38] <delaf> Sysadmin88: my problem appears when reading/writing many small files
[0:38] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[0:39] <delaf> rsync/rdiff-backup stops working for me if more than 80% disk space is used
[0:40] * Azru (~luigiman@9S0AAA29W.tor-irc.dnsbl.oftc.net) Quit ()
[0:44] <delaf> TheSov: if I try to read files directly from an OSD disk (with dd), I read data at 100MB/s
[0:44] <TheSov> are your systems bonded?
[0:44] <delaf> so I thought that with ceph I would have been limited by my network
[0:44] <delaf> TheSov: no
[0:45] <delaf> there is no separation between public and private network (ie there is only 1 network).
[0:46] <delaf> but I don't think it will change anything for my problem
[0:46] <delaf> (as the network is not overloaded when reading the file)
[0:47] <TheSov> i asked because certain bond types slow things down
[0:47] <delaf> ok
[0:47] <delaf> in fact nothing seems to be overloaded when i read the file on the rbd client
[0:47] <TheSov> sounds like the client's fault
[0:48] <delaf> yes it seems
[0:48] <delaf> but I cannot see what can do this :(
[0:49] * Schaap (~Frostshif@212.83.40.239) has joined #ceph
[0:50] <doppelgrau> delaf: is more than 80% of the rbd-image full or of the ceph-Partition?
[0:50] <delaf> doppelgrau: yes, it is 100% full
[0:50] <doppelgrau> delaf: which?
[0:51] <delaf> oops sorry, the rbd image :
[0:51] <delaf> $ sudo rbd showmapped
[0:51] <delaf> id pool image snap device
[0:51] <delaf> 0 rbd test1 - /dev/rbd0
[0:51] <delaf> /dev/rbd0 977G 977G 20K 100% /mnt
[0:51] <delaf> doppelgrau: what do you call "ceph-Partition" ?
[0:52] <doppelgrau> delaf: the partitions used for the osds
[0:52] <delaf> no, OSDs are around 25% full
[0:52] <doppelgrau> ok, so no blocking IO with 'nearfull'
[0:56] <delaf> if I read 2 files at the same time, i get 15MB/s of bandwidth
[0:56] <delaf> per file
[0:57] <delaf> so around 30MB/s total
[0:57] <Sysadmin88> monitoring the CPU and RAM and disk usage while your reading that?
[0:57] <delaf> if i read only one file i get 30MB/s
[0:57] <Sysadmin88> could be it's reading from the same OSD and it's causing random IO?
[0:57] <delaf> CPU is very low, RAM too
[0:58] <delaf> Sysadmin88: maybe, but still 30MB/s is very poor in any case
[0:58] <Sysadmin88> HDDs don't like random IO, performance can go down very quickly
[1:00] <delaf> i will try to create small files (< 4MB) and read them sequentially to see what happens
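A sketch of what such a small-file test could look like, assuming the rbd image is still mounted on /mnt as shown earlier in the log; the file count and the drop_caches step are assumptions:

    # create 500 files of 2 MB each on the mounted rbd image
    for i in $(seq 1 500); do sudo dd if=/dev/zero of=/mnt/small.$i bs=1M count=2 2>/dev/null; done
    # drop the client page cache so the read actually hits the cluster
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    time cat /mnt/small.* > /dev/null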
[1:03] <Sysadmin88> got iozone?
[1:03] <delaf> i can install it
[1:03] <delaf> i never used it
[1:03] <Sysadmin88> i like that for benchmarking. multiple tests as needed and configurable
[1:06] * ivotron (uid25461@id-25461.brockwell.irccloud.com) has joined #ceph
[1:08] <delaf> Sysadmin88: when reading the small files (2MB each), I read at 105MB/s
[1:08] <delaf> that is quite near the best of my network
[1:08] <Sysadmin88> good enough
[1:09] <delaf> I do not really understand why reading one big file is not the same
[1:10] <Sysadmin88> look into the IO on the OSD's disk
[1:10] <Sysadmin88> what kind of disk are you using?
[1:10] <delaf> 1TB SATA 7.2K
[1:12] <delaf> hum.. there is no IO on OSD.. I think all is in memory, I will have to make a test with more files
[1:13] <Sysadmin88> one reason i like ZFS... if you're reading the same thing repeatedly then ARC really does make a difference
[1:17] <Sysadmin88> iozone makes new files. look it up, it's nice.
[1:19] * Schaap (~Frostshif@5NZAADSBX.tor-irc.dnsbl.oftc.net) Quit ()
[1:19] <delaf> Sysadmin88: usually i use bonnie++
[1:20] <delaf> i will look at iozone, last time I looked at it, i found that the results were not really quickly readable ^^
[1:20] <Sysadmin88> there are a couple of types of test
[1:20] <Sysadmin88> the throughput ones are easier to read than the 'auto' one
[1:22] <delaf> ok i will look at it
[1:22] <delaf> Sysadmin88: do you use ceph for VM images ?
[1:24] * Kwen (~hassifa@176.10.99.202) has joined #ceph
[1:24] <Sysadmin88> not at the moment. strictly testing when i get time
[1:24] <delaf> ok
[1:26] * dopesong (~dopesong@lan126-981.elekta.lt) has joined #ceph
[1:34] <jidar> I'm trying to clean out all of my OSDs and start over, ceph-deploy purge/purgedata doesn't seem to do this
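A sketch of the usual teardown sequence with ceph-deploy of that era; the host names and the sdb device are placeholders, and the disk zap step covers what purge/purgedata leave behind on the OSD disks:

    ceph-deploy purge node1 node2 node3
    ceph-deploy purgedata node1 node2 node3
    ceph-deploy forgetkeys
    # the OSD data disks keep their GPT partitions and labels; zap them before redeploying
    ceph-deploy disk zap node1:sdb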
[1:36] <delaf> Sysadmin88: so, reading small files sequentially gives me the same result as reading the big file (around 30MB/s)
[1:44] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[1:49] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[1:53] * Kwen (~hassifa@7R2AABOZC.tor-irc.dnsbl.oftc.net) Quit ()
[1:58] * thundercloud (~Peaced@tor00.telenet.unc.edu) has joined #ceph
[1:58] * oms101 (~oms101@p20030057EA656200C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:05] * kklimonda_ (sid72883@id-72883.highgate.irccloud.com) Quit ()
[2:07] * oms101 (~oms101@p20030057EA48F700C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:08] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[2:13] * mattronix_ (~quassel@mail.mattronix.nl) has joined #ceph
[2:16] * mattronix (~quassel@mail.mattronix.nl) Quit (Ping timeout: 480 seconds)
[2:27] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[2:28] * thundercloud (~Peaced@5NZAADSE4.tor-irc.dnsbl.oftc.net) Quit ()
[2:53] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[2:58] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[3:03] * Shnaw (~ZombieL@exit2.tor-proxy.net.ua) has joined #ceph
[3:03] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[3:17] * Debesis_ (~0x@140.217.38.86.mobile.mezon.lt) Quit (Ping timeout: 480 seconds)
[3:17] * wkennington (~william@76.77.181.49) has joined #ceph
[3:32] * Shnaw (~ZombieL@9S0AAA3E2.tor-irc.dnsbl.oftc.net) Quit ()
[3:33] * Harryhy (~cooey@146.185.177.103) has joined #ceph
[3:36] * ivotron (uid25461@id-25461.brockwell.irccloud.com) Quit (Quit: Connection closed for inactivity)
[3:36] <jidar> man
[3:36] <jidar> this is a mess
[3:37] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[3:37] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit ()
[3:38] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[3:38] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[3:39] * tmh_ is now known as _tmh_
[3:44] * wkennington (~william@76.77.181.49) Quit (Remote host closed the connection)
[3:47] * wkennington (~william@76.77.181.49) has joined #ceph
[3:50] * kefu (~kefu@114.86.215.22) has joined #ceph
[3:51] * kefu (~kefu@114.86.215.22) Quit ()
[4:02] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[4:02] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit ()
[4:02] * Harryhy (~cooey@5NZAADSIR.tor-irc.dnsbl.oftc.net) Quit ()
[4:04] * wkennington (~william@76.77.181.49) Quit (Remote host closed the connection)
[4:07] * wkennington (~william@76.77.181.49) has joined #ceph
[4:25] * Doodlepieguy (~Kayla@hessel2.torservers.net) has joined #ceph
[4:39] * bobrik (~bobrik@83.243.64.45) Quit (Ping timeout: 480 seconds)
[4:39] * dlan_ (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[4:40] * dlan (~dennis@116.228.88.131) has joined #ceph
[4:42] * vbellur (~vijay@122.171.91.105) Quit (Ping timeout: 480 seconds)
[4:51] * dlan (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[4:51] * scuttlemonkey is now known as scuttle|afk
[4:51] * dlan (~dennis@116.228.88.131) has joined #ceph
[4:55] * Doodlepieguy (~Kayla@5NZAADSKY.tor-irc.dnsbl.oftc.net) Quit ()
[4:58] * dlan (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[4:58] * dlan (~dennis@116.228.88.131) has joined #ceph
[4:59] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:03] * OutOfNoWhere (~rpb@199.68.195.101) Quit (Ping timeout: 480 seconds)
[5:04] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:06] * vbellur (~vijay@122.171.91.105) has joined #ceph
[5:16] * Vacuum_ (~Vacuum@88.130.203.36) has joined #ceph
[5:23] * Vacuum__ (~Vacuum@88.130.206.82) Quit (Ping timeout: 480 seconds)
[5:23] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[5:30] * yguang11_ (~yguang11@2001:4998:effd:7801::105e) has joined #ceph
[5:32] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[5:37] * yguang11 (~yguang11@12.31.82.125) Quit (Ping timeout: 480 seconds)
[5:39] * Bwana (~basicxman@tor-exit-node.7by7.de) has joined #ceph
[5:59] * aarontc (~aarontc@2001:470:e893::1:1) Quit (Ping timeout: 480 seconds)
[5:59] * aarontc (~aarontc@2001:470:e893::1:1) has joined #ceph
[6:08] * Bwana (~basicxman@9S0AAA3JJ.tor-irc.dnsbl.oftc.net) Quit ()
[6:14] * ivotron (uid25461@id-25461.brockwell.irccloud.com) has joined #ceph
[6:22] * Fapiko (~Jyron@9S0AAA3KQ.tor-irc.dnsbl.oftc.net) has joined #ceph
[6:30] * calvinx (~calvin@101.100.172.246) has joined #ceph
[6:36] * MACscr (~Adium@2601:d:c800:de3:9957:e5a:e070:6617) Quit (Quit: Leaving.)
[6:52] * Fapiko (~Jyron@9S0AAA3KQ.tor-irc.dnsbl.oftc.net) Quit ()
[6:52] * Jones (~PcJamesy@178-175-128-50.ip.as43289.net) has joined #ceph
[7:00] * wkennington (~william@76.77.181.49) Quit (Remote host closed the connection)
[7:03] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[7:03] * wkennington (~william@76.77.181.49) has joined #ceph
[7:22] * Jones (~PcJamesy@7R2AABO2A.tor-irc.dnsbl.oftc.net) Quit ()
[7:22] * brannmar (~sese_@tor-exit3-readme.dfri.se) has joined #ceph
[7:41] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[7:41] * yguang11_ (~yguang11@2001:4998:effd:7801::105e) Quit (Ping timeout: 480 seconds)
[7:52] * brannmar (~sese_@5NZAADSSK.tor-irc.dnsbl.oftc.net) Quit ()
[7:52] * `Jin (~ulterior@178.32.53.131) has joined #ceph
[8:02] * dopesong (~dopesong@lan126-981.elekta.lt) Quit (Remote host closed the connection)
[8:16] * gaveen (~gaveen@175.157.134.18) has joined #ceph
[8:22] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[8:22] * `Jin (~ulterior@9S0AAA3NL.tor-irc.dnsbl.oftc.net) Quit ()
[8:22] * tritonx (~Ian2128@tor.nohats.ca) has joined #ceph
[8:25] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[8:32] * jclm (~jclm@ip24-253-98-109.lv.lv.cox.net) has joined #ceph
[8:49] * jclm (~jclm@ip24-253-98-109.lv.lv.cox.net) Quit (Quit: Leaving.)
[8:52] * tritonx (~Ian2128@5NZAADSUQ.tor-irc.dnsbl.oftc.net) Quit ()
[8:57] * straterra (~allenmelo@nx-01.tor-exit.network) has joined #ceph
[9:12] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[9:18] * shylesh__ (~shylesh@121.244.87.118) has joined #ceph
[9:27] * straterra (~allenmelo@9S0AAA3P7.tor-irc.dnsbl.oftc.net) Quit ()
[9:36] * biGGer (~murmur@nx-01.tor-exit.network) has joined #ceph
[9:46] * gaveen (~gaveen@175.157.134.18) Quit (Ping timeout: 480 seconds)
[9:58] * dopesong (~dopesong@lan126-981.elekta.lt) has joined #ceph
[10:06] * biGGer (~murmur@9S0AAA3RH.tor-irc.dnsbl.oftc.net) Quit ()
[10:06] * rogst1 (~ZombieTre@cloud.tor.ninja) has joined #ceph
[10:19] * dopesong_ (~dopesong@lb1.mailer.data.lt) has joined #ceph
[10:24] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[10:25] * dopesong (~dopesong@lan126-981.elekta.lt) Quit (Ping timeout: 480 seconds)
[10:26] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[10:31] * MACscr (~Adium@2601:d:c800:de3:3517:760f:d3e5:3807) has joined #ceph
[10:33] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[10:36] * rogst1 (~ZombieTre@7R2AABO4S.tor-irc.dnsbl.oftc.net) Quit ()
[10:36] * datagutt (~Pirate@89.105.194.70) has joined #ceph
[10:48] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:54] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) has joined #ceph
[11:00] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[11:03] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) Quit (Quit: Leaving)
[11:06] * datagutt (~Pirate@5NZAADS09.tor-irc.dnsbl.oftc.net) Quit ()
[11:06] * ivotron (uid25461@id-25461.brockwell.irccloud.com) Quit (Quit: Connection closed for inactivity)
[11:06] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) has joined #ceph
[11:15] * thebevans (~bevans@94.5.237.252) has joined #ceph
[11:35] * thebevans (~bevans@94.5.237.252) Quit (Quit: thebevans)
[11:50] * jschmid (~jxs@ip9234f579.dynamic.kabel-deutschland.de) has joined #ceph
[11:51] * naga1 (~oftc-webi@idp01webcache3-z.apj.hpecore.net) has joined #ceph
[11:53] * haomaiwa_ (~haomaiwan@114.111.166.250) Quit (Remote host closed the connection)
[12:01] <naga1> i configured cinder with ceph; when i restart cinder-volume i see this error in cinder-volume.log
[12:01] <naga1> File "/opt/stack/venv/cinder-20150613T122148Z/local/lib/python2.7/site-packages/rados.py"
[12:01] <naga1> raise make_ex(ret, "error calling conf_read_file")
[12:01] <naga1> Error: error calling conf_read_file: errno EACCES
[12:01] <naga1> can somebody answer me
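EACCES from conf_read_file generally means the process owner cannot read the ceph config or keyring. A sketch of what to check, assuming cinder-volume runs as the cinder user and the keyring lives at the conventional /etc/ceph/ceph.client.cinder.keyring path (both assumptions, not from the log):

    # verify the cinder user can open both files
    sudo -u cinder cat /etc/ceph/ceph.conf > /dev/null
    sudo -u cinder cat /etc/ceph/ceph.client.cinder.keyring > /dev/null
    # if either fails, adjust ownership/permissions
    sudo chgrp cinder /etc/ceph/ceph.conf /etc/ceph/ceph.client.cinder.keyring
    sudo chmod 640 /etc/ceph/ceph.conf /etc/ceph/ceph.client.cinder.keyring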
[12:01] * shylesh (~shylesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[12:01] * shylesh__ (~shylesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[12:02] * KungFuHamster (~HoboPickl@37.48.65.122) has joined #ceph
[12:03] * Debesis_ (~0x@140.217.38.86.mobile.mezon.lt) has joined #ceph
[12:07] * bobrik (~bobrik@83.243.64.45) has joined #ceph
[12:09] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[12:11] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[12:14] * shylesh__ (~shylesh@121.244.87.124) has joined #ceph
[12:18] * ledgr (~qstion@37.157.144.44) Quit (Read error: Connection reset by peer)
[12:22] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) has joined #ceph
[12:24] * MrHeavy (~MrHeavy@pool-108-54-190-117.nycmny.fios.verizon.net) Quit (Read error: Connection reset by peer)
[12:25] * MrHeavy (~MrHeavy@pool-108-54-190-117.nycmny.fios.verizon.net) has joined #ceph
[12:31] * shylesh__ (~shylesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[12:31] * shylesh (~shylesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[12:31] * KungFuHamster (~HoboPickl@7R2AABO6X.tor-irc.dnsbl.oftc.net) Quit ()
[12:32] * dopesong_ (~dopesong@lb1.mailer.data.lt) Quit (Quit: Leaving...)
[12:33] * nsoffer (~nsoffer@109.66.19.158) has joined #ceph
[12:36] * yuanz (~yzhou67@192.102.204.38) Quit (Read error: Connection reset by peer)
[12:37] * yuanz (~yzhou67@192.102.204.38) has joined #ceph
[12:39] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[12:41] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit ()
[12:42] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[12:42] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[12:45] * bkopilov (~bkopilov@bzq-79-183-58-206.red.bezeqint.net) has joined #ceph
[12:46] * Random (~Ian2128@h-213.61.149.100.host.de.colt.net) has joined #ceph
[12:49] * vbellur1 (~vijay@122.171.91.105) has joined #ceph
[12:54] * vbellur1 (~vijay@122.171.91.105) Quit (Remote host closed the connection)
[13:03] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) has joined #ceph
[13:09] * zimboboyd (~zimboboyd@ip5b43818b.dynamic.kabel-deutschland.de) has joined #ceph
[13:16] * Random (~Ian2128@9S0AAA3W8.tor-irc.dnsbl.oftc.net) Quit ()
[13:18] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[13:22] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[13:24] * kanagaraj (~kanagaraj@27.7.32.241) has joined #ceph
[13:55] * demonspork (~totalworm@37.157.195.143) has joined #ceph
[13:55] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[13:57] * nardial (~ls@dslb-178-009-182-197.178.009.pools.vodafone-ip.de) Quit (Quit: Leaving)
[13:57] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Read error: Connection reset by peer)
[14:01] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) Quit (Quit: leaving)
[14:02] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[14:06] * DLange (~DLange@dlange.user.oftc.net) Quit (Quit: updates...)
[14:11] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[14:12] * DLange (~DLange@dlange.user.oftc.net) Quit ()
[14:13] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[14:18] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[14:18] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[14:25] * demonspork (~totalworm@3DDAAA8QF.tor-irc.dnsbl.oftc.net) Quit ()
[14:31] * tuxcrafter (~jelle@ebony.powercraft.nl) has joined #ceph
[14:34] * toast (~Tralin|Sl@bakunin.gtor.org) has joined #ceph
[14:38] * naga1 (~oftc-webi@idp01webcache3-z.apj.hpecore.net) Quit (Remote host closed the connection)
[14:40] * vbellur1 (~vijay@122.171.91.105) has joined #ceph
[14:41] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[14:47] * vbellur (~vijay@122.171.91.105) Quit (Quit: Leaving.)
[14:49] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[14:55] * sankarshan (~sankarsha@183.87.39.242) has joined #ceph
[15:00] * jiyer (~chatzilla@63.229.31.161) Quit (Ping timeout: 480 seconds)
[15:04] * toast (~Tralin|Sl@9S0AAA30T.tor-irc.dnsbl.oftc.net) Quit ()
[15:04] * Bromine (~TehZomB@185.77.129.88) has joined #ceph
[15:05] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:27] * Sysadmin88 (~IceChat77@2.124.164.69) Quit (Read error: Connection reset by peer)
[15:27] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:34] * Bromine (~TehZomB@9S0AAA31L.tor-irc.dnsbl.oftc.net) Quit ()
[15:58] * sleinen1 (~Adium@2001:620:0:82::104) has joined #ceph
[16:03] * derjohn_mob (~aj@tmo-100-146.customers.d1-online.com) has joined #ceph
[16:04] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[16:09] * MJXII (~Scrin@tor-exit.csail.mit.edu) has joined #ceph
[16:19] * stj (~stj@2604:a880:800:10::2cc:b001) Quit (Quit: leaving.)
[16:23] * stj (~stj@2604:a880:800:10::2cc:b001) has joined #ceph
[16:23] * derjohn_mob (~aj@tmo-100-146.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[16:39] * MJXII (~Scrin@5NZAADTGU.tor-irc.dnsbl.oftc.net) Quit ()
[16:39] * rushworld (~Scymex@95.211.169.35) has joined #ceph
[16:58] * daviddcc (~dcasier@21.184.128.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[17:01] * haomaiwang (~haomaiwan@183.206.168.223) has joined #ceph
[17:03] * kanagaraj (~kanagaraj@27.7.32.241) Quit (Ping timeout: 480 seconds)
[17:06] * daviddcc (~dcasier@80.215.205.235) has joined #ceph
[17:09] * rushworld (~Scymex@9S0AAA33R.tor-irc.dnsbl.oftc.net) Quit ()
[17:18] * mrapple (~DougalJac@89.105.194.71) has joined #ceph
[17:28] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:31] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit ()
[17:48] * mrapple (~DougalJac@3DDAAA80N.tor-irc.dnsbl.oftc.net) Quit ()
[17:48] * [Sim] (florian@bash.tty.nu) has joined #ceph
[17:48] <[Sim]> howdy
[17:54] <[Sim]> I'm having a hell of a headache trying to build a ceph cluster on debian jessie
[17:54] * daviddcc (~dcasier@80.215.205.235) Quit (Read error: Connection reset by peer)
[17:55] <[Sim]> the inktank packages won't work because the wheezy .debs depend on the wheezy version of libboost, the debian packages lack ceph-deploy, so I'm left with manual setup, which fails to start, and systemctl is not telling me anything about why :-(
[17:55] <[Sim]> any experiences here, or a cookbook? :)
[17:57] <[Sim]> i.e. I followed http://ceph.com/docs/master/install/manual-deployment/ and /etc/init.d/ceph start mon.admin (node name = admin) reports OK, but there is no ceph process running and ceph -s fails
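A sketch for narrowing this down, using the default cluster name ceph and the mon id admin from the message above; running the daemon in the foreground usually surfaces the error the init script hides:

    # run the monitor in the foreground, logging to stderr
    sudo ceph-mon -i admin -d
    # confirm the mon data directory from the manual-deployment guide actually exists
    ls -l /var/lib/ceph/mon/ceph-admin/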
[18:06] * danieagle (~Daniel@177.138.221.97) has joined #ceph
[18:07] * haomaiwang (~haomaiwan@183.206.168.223) Quit (Quit: Leaving...)
[18:08] * haomaiwang (~haomaiwan@183.206.168.223) has joined #ceph
[18:22] * thebevans (~bevans@94.5.237.252) has joined #ceph
[18:22] * pepzi (~neobenedi@tor-exit-1.zenger.nl) has joined #ceph
[18:23] * jschmid (~jxs@ip9234f579.dynamic.kabel-deutschland.de) Quit (Remote host closed the connection)
[18:23] * jschmid (~jxs@ip9234f579.dynamic.kabel-deutschland.de) has joined #ceph
[18:23] * jschmid (~jxs@ip9234f579.dynamic.kabel-deutschland.de) Quit ()
[18:37] * thebevans (~bevans@94.5.237.252) Quit (Quit: thebevans)
[18:42] * adil452100 (~oftc-webi@105.155.146.33) has joined #ceph
[18:42] <adil452100> Hello
[18:42] <adil452100> Can you please help me resolve a problem with ceph ? all 15 of my osds show ~100% utilization in iostat when deep scrubbing is running
[18:46] <loicd> adil452100: deep scrubbing can be hard on disks. You can disable it if your cluster is too busy right now.
[18:48] <adil452100> which osd parameter can reduce this utilization ?
[18:48] <loicd> adil452100: http://dachary.org/?p=3157
[18:49] <adil452100> loicd> osd_disk_threads?
[18:50] <adil452100> I have already read a lot of your posts (nice work :))
[18:50] <adil452100> but I really suffer because even though I have a good server configuration, I have poor performance
[18:51] <adil452100> I have changed several parameters these last months with success (not only for deep scrub)
[18:52] <adil452100> sorry .. without success
[18:52] <adil452100> 15 OSD and I have max 1000 iops
[18:52] * pepzi (~neobenedi@5NZAADTNY.tor-irc.dnsbl.oftc.net) Quit ()
[18:53] <adil452100> I have a 10Gb network :(
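A sketch of the knobs usually reached for when deep scrub saturates the disks on hammer; the values are illustrative, not recommendations from loicd's linked post:

    # temporarily stop new scrubs while the cluster is busy
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # or throttle scrubbing at runtime on all OSDs
    ceph tell osd.* injectargs '--osd_max_scrubs 1 --osd_scrub_sleep 0.1'
    # re-enable once load drops
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub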
[19:02] * mlausch (~mlausch@2001:8d8:1fe:7:517f:ab4e:919:4374) Quit (Ping timeout: 480 seconds)
[19:03] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:11] * mlausch (~mlausch@2001:8d8:1fe:7:a010:aee2:62e4:3299) has joined #ceph
[19:13] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) Quit (Quit: It's just that easy)
[19:14] * adil452100 (~oftc-webi@105.155.146.33) Quit (Quit: Page closed)
[19:14] * adil452100 (~oftc-webi@105.155.146.33) has joined #ceph
[19:17] * adil452100 (~oftc-webi@105.155.146.33) Quit ()
[19:28] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[19:31] <[Sim]> hrm, ok so my issue is really with systemd. starting ceph-mon manually works nicely
[19:31] <[Sim]> but why won't it start with systemd/systemctl
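If the packaged init integration on jessie is the problem, a minimal hand-written unit along these lines can work; this is a sketch under those assumptions, not the unit file the ceph packages ship, and the mon id admin comes from the earlier messages:

    # /etc/systemd/system/ceph-mon@.service (hypothetical)
    [Unit]
    Description=Ceph monitor %i
    After=network-online.target

    [Service]
    ExecStart=/usr/bin/ceph-mon -f -i %i
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

After a systemctl daemon-reload, systemctl start ceph-mon@admin starts it and journalctl -u ceph-mon@admin shows the actual error output instead of a silent failure.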
[19:36] * flakrat (~flakrat@fttu-216-41-245-200.btes.tv) has joined #ceph
[19:39] * flakrat (~flakrat@fttu-216-41-245-200.btes.tv) Quit (Read error: Connection reset by peer)
[19:44] * danieagle (~Daniel@177.138.221.97) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[19:51] * xarses (~xarses@12.10.113.130) Quit (Ping timeout: 480 seconds)
[19:57] * Heliwr (~Xylios@9S0AAA39D.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:02] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) has joined #ceph
[20:08] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[20:13] * xarses (~xarses@12.10.113.130) has joined #ceph
[20:13] * tganguly (~tganguly@122.171.26.191) has joined #ceph
[20:14] * xarses (~xarses@12.10.113.130) Quit (Remote host closed the connection)
[20:14] * xarses (~xarses@12.10.113.130) has joined #ceph
[20:15] * thebevans (~bevans@94.5.237.252) has joined #ceph
[20:16] * scuttle|afk is now known as scuttlemonkey
[20:17] * nsoffer (~nsoffer@109.66.19.158) Quit (Ping timeout: 480 seconds)
[20:24] * daviddcc (~dcasier@ADijon-653-1-114-184.w90-33.abo.wanadoo.fr) has joined #ceph
[20:27] * Heliwr (~Xylios@9S0AAA39D.tor-irc.dnsbl.oftc.net) Quit ()
[20:29] * thebevans (~bevans@94.5.237.252) Quit (Quit: thebevans)
[20:31] * thebevans (~bevans@94.5.237.252) has joined #ceph
[20:31] * PierreW (~xENO_@89.105.194.72) has joined #ceph
[20:33] * tganguly (~tganguly@122.171.26.191) Quit (Remote host closed the connection)
[20:44] * flakrat (~flakrat@fttu-216-41-245-200.btes.tv) has joined #ceph
[20:45] * ivotron (uid25461@id-25461.brockwell.irccloud.com) has joined #ceph
[20:45] * mgolub (~Mikolaj@91.225.202.92) has joined #ceph
[20:46] * bjornar_ (~bjornar@ti0099a430-1131.bb.online.no) Quit (Ping timeout: 480 seconds)
[20:57] * shnarch (~shnarch@bzq-109-67-128-59.red.bezeqint.net) has joined #ceph
[20:57] * thebevans (~bevans@94.5.237.252) Quit (Quit: thebevans)
[21:01] * sankarshan (~sankarsha@183.87.39.242) Quit (Ping timeout: 480 seconds)
[21:01] * PierreW (~xENO_@5NZAADTTF.tor-irc.dnsbl.oftc.net) Quit ()
[21:03] * fdmanana__ (~fdmanana@bl13-129-165.dsl.telepac.pt) has joined #ceph
[21:06] * thebevans (~bevans@94.5.237.252) has joined #ceph
[21:09] * shnarch (~shnarch@bzq-109-67-128-59.red.bezeqint.net) Quit (Remote host closed the connection)
[21:10] * sankarshan (~sankarsha@183.87.39.242) has joined #ceph
[21:15] * Jourei (~Unforgive@host-176-37-40-213.la.net.ua) has joined #ceph
[21:16] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[21:19] * thebevans (~bevans@94.5.237.252) Quit (Quit: thebevans)
[21:20] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[21:28] * Meths_ (~meths@2.25.223.20) has joined #ceph
[21:30] * sankarshan (~sankarsha@183.87.39.242) Quit (Ping timeout: 480 seconds)
[21:32] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[21:34] * Meths (~meths@2.27.78.187) Quit (Ping timeout: 480 seconds)
[21:34] * flakrat (~flakrat@fttu-216-41-245-200.btes.tv) Quit (Quit: Leaving)
[21:40] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) has joined #ceph
[21:40] * linjan (~linjan@80.179.241.26) has joined #ceph
[21:42] * sankarshan (~sankarsha@183.87.39.242) has joined #ceph
[21:45] * Jourei (~Unforgive@8Q4AABI0Q.tor-irc.dnsbl.oftc.net) Quit ()
[21:50] * sankarshan (~sankarsha@183.87.39.242) Quit (Ping timeout: 480 seconds)
[21:54] * thebevans (~bevans@94.5.237.252) has joined #ceph
[21:57] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[22:00] * sankarshan (~sankarsha@183.87.39.242) has joined #ceph
[22:01] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[22:20] * JohnO (~dicko@5NZAADTX3.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:20] * ChrisNBl_ (~textual@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[22:20] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: Goodbye)
[22:23] * mgolub (~Mikolaj@91.225.202.92) Quit (Quit: away)
[22:23] * thebevans (~bevans@94.5.237.252) Quit (Quit: thebevans)
[22:25] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[22:31] * Meths_ is now known as Meths
[22:32] * ibravo (~ibravo@72.83.69.64) has joined #ceph
[22:35] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[22:48] * ibravo (~ibravo@72.83.69.64) Quit (Quit: This computer has gone to sleep)
[22:49] * JohnO (~dicko@5NZAADTX3.tor-irc.dnsbl.oftc.net) Quit ()
[22:58] <guerby> sileht, loicd I sent an email to ceph-users to warn about erasure coded pools and hammer : http://tracker.ceph.com/issues/12012
[22:59] * nsoffer (~nsoffer@bzq-79-180-80-86.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[23:05] * puffy (~puffy@c-50-131-179-74.hsd1.ca.comcast.net) has joined #ceph
[23:16] * linjan (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[23:20] * Rosenbluth (~MatthewH1@7R2AABPKJ.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:20] <loicd> frickler: are you around by any chance ? I just discovered that openstack network create does not route packets to the net. All VMs on this network talk to each other. But the default gateway does not send packets to the net. In the neutron/server.log of the host I see an error neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api [-] No DHCP agents are associated with network '582950be-f33c-4375-8f8e-14606143e6d7'. Unable to send notification . Which makes sense
[23:20] <loicd> because the /etc/neutron/policy.json has all dhcp operations "admin_only"
[23:23] <loicd> or maybe it's just a false negative https://bugs.launchpad.net/neutron/+bug/1289130
[23:24] <loicd> since I'm using a Havana cluster
[23:24] <loicd> dhcp actually works, it's routing that does not happen
[23:28] * Sysadmin88 (~IceChat77@2.124.164.69) has joined #ceph
[23:29] * Debesis_ is now known as Debesis
[23:35] <loicd> adding a router and connecting it to an external network works
[23:35] <loicd> neat
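For the record, a sketch of the Havana-era neutron CLI steps loicd describes; the router, external network, and subnet names are placeholders:

    neutron router-create demo-router
    neutron router-gateway-set demo-router ext-net
    neutron router-interface-add demo-router demo-subnet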
[23:35] * loicd realizes this is slightly off topic here...
[23:36] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[23:37] <lurbs> 0.94.2 is supposed to be out, but there are various packages missing from the repositories. There's no amd64 packages for Precise (12.04 LTS), and no i386 for Trusty (14.04 LTS) or Debian.
[23:47] * thebevans (~bevans@94.5.237.252) has joined #ceph
[23:49] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[23:49] * Rosenbluth (~MatthewH1@7R2AABPKJ.tor-irc.dnsbl.oftc.net) Quit ()
[23:51] * scuttlemonkey is now known as scuttle|afk

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.