#ceph IRC Log

IRC Log for 2015-04-17

Timestamps are in GMT/BST.

[0:00] <bstillwell> I used to use ntpdate to get the clock in sync, but now I just use ntpd for both ('ntpd -gq' does it right away).
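
For reference, a minimal sketch of the one-shot sync bstillwell describes (the service name and init commands vary by distro; a sysvinit-style ntpd is assumed here):

    service ntpd stop
    ntpd -gq     # -g permits a large initial offset, -q sets the clock once and exits
    service ntpd start
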
[0:02] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[0:04] <litwol> sooo close to having my cluster up
[0:04] <litwol> mon.0@0(leader).mds e1 warning, MDS mds.? 192.168.1.82:6800/2081 up but filesystem disabled
[0:04] <litwol> last obstacle from this tutorial https://wiki.gentoo.org/wiki/Ceph
[0:05] * tmh_ (~e@loophole.cc) has joined #ceph
[0:07] * LeaChim (~LeaChim@host86-143-18-67.range86-143.btcentralplus.com) has joined #ceph
[0:08] * tk12 (~tk12@68.140.239.132) has joined #ceph
[0:08] * tk12_ (~tk12@68.140.239.132) Quit (Read error: Connection reset by peer)
[0:08] * xcezzz (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) has joined #ceph
[0:08] * xcezzz1 (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) Quit (Read error: Connection reset by peer)
[0:09] * daniel2_ (~daniel@12.164.168.117) Quit (Remote host closed the connection)
[0:13] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[0:24] <litwol> omg
[0:24] <litwol> it is working!
[0:24] <litwol> :-D
[0:25] * Wizeon (~Jones@5NZAABRRZ.tor-irc.dnsbl.oftc.net) Quit ()
[0:26] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[0:26] <litwol> now to randomly cycle machines in this test to see how ceph handles
[0:29] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[0:30] * sleinen1 (~Adium@2001:620:0:82::102) Quit (Ping timeout: 480 seconds)
[0:31] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[0:34] * tk12 (~tk12@68.140.239.132) Quit (Remote host closed the connection)
[0:34] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[0:34] * root4 (~root@p57B2F509.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[0:35] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[0:36] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[0:36] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[0:36] <litwol> hmm
[0:37] <litwol> one of my 3 OSDs is marked as 'down', even though host osd service status is "running"
[0:37] <litwol> cluster health is "HEALTH_OK"
[0:37] <Nats_> look in /var/log/ceph/ceph-osd.n.log to see what the osd is doing
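
Concretely, a quick sketch of that check (osd id 0 is a placeholder; run the tail on the host carrying that OSD):

    ceph osd tree                          # which OSDs the cluster sees as up/down
    tail -f /var/log/ceph/ceph-osd.0.log   # follow the daemon's own log
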
[0:38] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[0:38] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[0:38] * tk12 (~tk12@68.140.239.132) has joined #ceph
[0:39] <litwol> http://dpaste.com/22CM0ZF
[0:40] <litwol> i'm too new to ceph to understand the log
[0:40] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:40] <Nats_> it looks pretty normal
[0:41] <Nats_> other than having no pgs on it
[0:41] <litwol> https://bpaste.net/show/9ff3520158af conf
[0:42] <litwol> here is full OSD log after truncating and restarting https://bpaste.net/show/e6b9d89dd0ae
[0:42] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[0:43] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[0:44] * litwol tries this http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-May/040252.html
[0:44] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[0:44] * KevinPerks1 (~Adium@2606:a000:a6c4:1f00:b85a:30c3:f79c:58c) has joined #ceph
[0:45] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit ()
[0:45] * PerlStalker (~PerlStalk@162.220.127.20) Quit (Quit: ...)
[0:46] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[0:46] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[0:48] * KevinPerks2 (~Adium@2606:a000:a6c4:1f00:b4b7:394f:e927:ca6a) has joined #ceph
[0:48] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit ()
[0:49] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[0:49] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[0:52] <litwol> yey!
[0:52] <litwol> worked
[0:52] * KevinPerks1 (~Adium@2606:a000:a6c4:1f00:b85a:30c3:f79c:58c) Quit (Ping timeout: 480 seconds)
[0:55] * tuhnis (~Lattyware@tor-exit3-readme.dfri.se) has joined #ceph
[0:56] * root4 (~root@p57B2F532.dip0.t-ipconnect.de) has joined #ceph
[0:56] <litwol> wow nice.
[0:56] <litwol> rebooted host. not a hiccup in service.
[1:06] * lovejoy (~lovejoy@cpc69388-oxfd28-2-0-cust415.4-3.cable.virginm.net) has joined #ceph
[1:08] * LeaChim (~LeaChim@host86-143-18-67.range86-143.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:09] <litwol> works wonderfully. i like it a lot. very sad zfs isn't working very well. but it could be just because i'm using version 0.87.1
[1:12] * rendar (~I@host135-119-dynamic.57-82-r.retail.telecomitalia.it) Quit ()
[1:14] * KevinPerks2 (~Adium@2606:a000:a6c4:1f00:b4b7:394f:e927:ca6a) Quit (Quit: Leaving.)
[1:14] <Tetard> zfs over ceph ?
[1:14] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[1:14] <litwol> zfs under ceph
[1:14] <litwol> disk < zfs < ceph
[1:15] * tk12_ (~tk12@68.140.239.132) has joined #ceph
[1:15] * tk12 (~tk12@68.140.239.132) Quit (Read error: Connection reset by peer)
[1:18] * imjustmatthew (~imjustmat@pool-108-4-98-95.rcmdva.fios.verizon.net) has joined #ceph
[1:19] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[1:20] <Nats_> probably not a good idea even if it did work
[1:21] <litwol> Why not?
[1:21] <Nats_> you want to give ceph physical disks
[1:21] <litwol> zfs has excellent support for snapshots and rollbacks
[1:21] <litwol> as well as data integrity via checksuming
[1:21] <Nats_> ceph has checksumming
[1:21] <litwol> Nats_: how? mind you, i am very new and didn't read everything i need to.
[1:21] <Nats_> and snapshots of the raw ceph files don't seem very useful
[1:22] <litwol> Nats_: everything i've read so far hints that ceph puts data into a /directory/. it is user's choice what FS that directory is managed by
[1:22] <litwol> Nats_: is there a command for osd that does something like "take this raw disk, format it any way you see fit, osd"?
[1:22] <Nats_> litwol, that's true but the directory structure is essentially private
[1:23] <litwol> by 'private' do you mean no one else except ceph osd uses it?
[1:23] <litwol> no other client services that is.
[1:23] <florz> Nats_: ceph stores checksums for all data?
[1:23] <Nats_> i mean private in the sense that how its structured is not how you see it when using say rgw/rbd/etc
[1:23] <Nats_> florz, yes. that is what deep-scrub does
[1:24] <Nats_> once a week it will compare checksums to what's on disk
[1:24] <Nats_> and if its wrong, tell you
[1:24] <florz> Nats_: IC ... but not on normal reads?
[1:24] <Nats_> florz, afaik for speed it does not do so on every read
[1:24] <litwol> zfs does :)
[1:25] <litwol> another good reason for it
[1:25] <Nats_> but i'm just another user
[1:25] * tuhnis (~Lattyware@425AAAMKE.tor-irc.dnsbl.oftc.net) Quit ()
[1:25] <florz> ah, then that's probably where I got the idea that it doesn't store checksums at all
[1:25] * Popz (~TomyLobo@tor-exit.server9.tvdw.eu) has joined #ceph
[1:26] * oms101 (~oms101@p20030057EA4D1200EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:27] <Nats_> have to pick your poison. btrfs will do checksum on read but is not recommended for production use
[1:29] <florz> the documentation says "Ceph checks the primary and any replica nodes, generates a catalog of all objects in the placement group and compares them to ensure that no objects are missing or mismatched, and their contents are consistent."
[1:29] <florz> doesn't sound to me like they store any checksums
[1:30] <florz> and yeah, my assumption was that the long-term goal was to have btrfs take care of that
[1:30] <florz> after all, checksumming with random-access writes is not exactly trivial to make performant
[1:31] * litwol prefers zfs
[1:31] <litwol> been using it on prod for a few years
[1:31] <litwol> couldn't be happier.
[1:31] <davidz1> florz Nats_: Replicated pools generate checksums by reading objects during deep-scrub. Erasure coded pools have checksums present (no random writes), so don't have to read the data.
[1:31] <litwol> just need to figure out why my ceph demo setup didn't use it well. maybe 0.87.1 is not zfs-friendly.
[1:32] <Nats_> litwol: my 2c as someone with a prod ceph cluster for over a year. it literally makes no difference to me what filesystem is under it
[1:32] <florz> davidz1: thx!
[1:33] * lovejoy (~lovejoy@cpc69388-oxfd28-2-0-cust415.4-3.cable.virginm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:33] <Nats_> the fact its xfs is just an implementation detail, it could be a raw partition and it would not affect how ceph is used in the slightest
[1:34] * oms101 (~oms101@p20030057EA395C00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:34] <Nats_> since you access your data via ceph, not via the underlying filesystem ceph's data is on
[1:36] <litwol> Nats_: what command creates an osd from a raw disk? i'd love to read up on it.
[1:36] <litwol> Nats_: i would prefer fewer elements in the stack. i just dont know how yet. or what doc to read.
[1:37] <Nats_> litwol, there isn't one; i was just sort of saying it's equivalent to being raw, because as the ceph user you do not access it
[1:37] <florz> Nats_: except Ceph probably relies on certain FS semantics that not all filesystems have, so you would risk data integrity by using the wrong FS
[1:38] * mattrich (~Adium@38.108.161.130) has joined #ceph
[1:38] <litwol> Nats_: but i'm not speaking from enduser perspective.. i'm talking about ceph delegating various functionality onto FS.
[1:38] <litwol> Nats_: such as zfs's snapshot and rollback ability. check-sum data integrity on reads. etc
[1:39] <litwol> Nats_: they have virtually no performance impact at the zfs level
[1:39] <florz> Nats_: and also, the underlying file system has quite a bit of impact on the performance
[1:39] <litwol> thx for chat. walking home now.
[1:39] * litwol afk
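
An aside on litwol's question above: ceph-deploy of this era could in fact be handed a whole device, in which case ceph-disk partitions it and creates the filesystem (XFS by default); Nats_'s broader point that the layout is internal to the OSD stands either way. A hedged sketch, with hostname and devices as placeholders:

    ceph-deploy osd create node1:/dev/sdb             # partition, format, and activate
    ceph-deploy osd create node1:/dev/sdb:/dev/ssd1   # same, with the journal on a separate device
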
[1:41] <mattrich> i have a small cluster running firefly that is having trouble converging during recovery. osds are often getting killed by the oom-killer (and occasionally hosts are becoming flat out unresponsive). 12 disks per host, 32gb of ram. any suggestions?
[1:42] <Nats_> <florz> doesn't sound to me like they store any checksums <-- it definitely stores checksums
[1:43] <Nats_> mattrich, whats your degraded %age
[1:43] <gregsfortytwo> unfortunately Ceph does not store checksums in the general case right now :(
[1:44] <gregsfortytwo> it will do so opportunistically on new enough code, and it uses checksums in transmission
[1:44] <gregsfortytwo> and it calculates checksums for replica comparison during deep scrub
[1:44] <Nats_> gregsfortytwo, oic
[1:44] <mjblw> When I see a slow request in the logs, I see "ack+ondisk+write". What is the ack+ondisk+write telling me? Example:
[1:44] <mjblw> 2015-04-16 22:43:01.962805 osd.124 10.30.66.11:6820/2877 189 : [WRN] slow request 480.729314 seconds old, received at 2015-04-16 22:35:01.233441: osd_op(client.49154410.0:94386325 rb.0.20bf907.238e1f29.00000000a16b [write 0~712704] 4.1f969ccb ack+ondisk+write e387297) v4 currently waiting for subops from 14,69
[1:44] <Nats_> gregsfortytwo, given a 2 replica cluster how does it know which object is 'wrong' during deep scrub?
[1:45] <gregsfortytwo> but it doesn't maintain checksums on every piece of data on every write because it's not feasible for anything which allows overwrites (every write turns into a read-modify-write)
[1:45] <mattrich> Nats_: 30% currently
[1:45] <gregsfortytwo> it doesn't know; that's why repairs of that state require admin intervention
[1:45] <Nats_> gregsfortytwo, is that not true of RBD? i have never intervened
[1:46] <gregsfortytwo> it's the same with RBD as with anything else
[1:46] <gregsfortytwo> I mean, usually you don't have a difference so this doesn't come up
[1:46] <mattrich> Nats_: I stopped 4 out of 12 OSDs per post, to limit the max memory usage. that necessitated more recovery, so i don't know if that was a good trade off
[1:46] <Nats_> i very occasionally get the active+clean+inconsistent. i just run the ceph pg repair and it does its thing
[1:46] <mattrich> s/per post/per host/
[1:47] <gregsfortytwo> Nats_: yeah, so when you do that Ceph pretty much just uses the primary's copy as the good one
[1:47] <gregsfortytwo> on replicated PGs
[1:47] <Nats_> gregsfortytwo, ah right
[1:47] <Nats_> TIL
[1:47] <gregsfortytwo> erasure-coded ones do maintain checksums because they're append-only, so there it knows which chunks are bad and rebuilds the correct data
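
For readers following along, the commands this exchange refers to, sketched with placeholder ids:

    ceph pg deep-scrub 4.1f    # compare a pg's replicas (and their checksums) across OSDs
    ceph pg repair 4.1f        # on a replicated pool, rewrites replicas from the primary's copy
    ceph osd deep-scrub 5      # kick off deep scrubs for the pgs on osd.5
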
[1:48] <Nats_> mattrich, having suffered through a similar scenario all i can say is good luck :/
[1:49] <florz> BTW, the "1 GB RAM per TB of storage" recommendation, that refers to the size of the storage the respective OSD is managing, not the total cluster size, right?
[1:50] <Nats_> yes
[1:50] <Nats_> though 1GB is too low
[1:55] * Popz (~TomyLobo@1GLAABEGZ.tor-irc.dnsbl.oftc.net) Quit ()
[1:55] * SinZ|offline (~kalmisto@72.52.91.30) has joined #ceph
[2:01] <mattrich> Would changing `osd recovery max active` or `osd max backfills` have any effect on memory consumption?
[2:02] <Nats_> a little
[2:02] <Nats_> my experience is that memory usage scales heavily with %degraded
[2:05] <Nats_> mattrich, if you haven't already i would suggest 'ceph osd set noout'
[2:05] * mattrich (~Adium@38.108.161.130) has left #ceph
[2:05] * mattrich (~Adium@38.108.161.130) has joined #ceph
[2:06] * daniel2_ (~daniel@12.164.168.117) Quit (Remote host closed the connection)
[2:06] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[2:06] <Nats_> temporarily set nobackfill as well, reduce max backfills to 1, see if you can get all osd's online and through recovery (but not backfill)
[2:06] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[2:06] <Nats_> then turn backfill on and hope for the best
[2:06] <mattrich> Nats_: thank you, trying now!
[2:06] <Nats_> starting one osd at a time is probably sensible too
[2:07] <Nats_> mattrich, there's also settings for leveldb that i think can reduce memory usage a little
[2:07] <Nats_> was mentioned in firefly release notes
[2:07] <Nats_> leveldb write buffer size = 0
[2:07] <Nats_> leveldb cache size = 0
[2:07] * bandrus (~brian@128.sub-70-211-79.myvzw.com) Quit (Ping timeout: 480 seconds)
[2:08] * p66kumar (~p66kumar@74.119.205.248) Quit (Quit: p66kumar)
[2:08] <Nats_> mattrich, and turn off all debug if you haven't already
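
Pulling Nats_'s suggestions together as one hedged sketch (the values are the ones mentioned above; injectargs changes them at runtime, the ceph.conf lines make them persistent):

    ceph osd set noout         # don't mark stopped OSDs out, avoiding extra data movement
    ceph osd set nobackfill    # let recovery finish before any backfill starts
    ceph tell osd.* injectargs '--osd-max-backfills 1'

    # ceph.conf, [osd] section -- leveldb settings from the firefly release notes
    leveldb write buffer size = 0
    leveldb cache size = 0
    debug osd = 0/0            # example of silencing one debug subsystem

    ceph osd unset nobackfill  # once recovery completes
    ceph osd unset noout       # once everything is back and stable
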
[2:10] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[2:13] * tk12_ (~tk12@68.140.239.132) Quit (Remote host closed the connection)
[2:13] * dmick (~dmick@206.169.83.146) has left #ceph
[2:16] * davidz1 (~davidz@cpe-23-242-189-171.socal.res.rr.com) Quit (Quit: Leaving.)
[2:17] * B_Rake_ (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[2:20] * B_Rake__ (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[2:20] * B_Rake_ (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Read error: Connection reset by peer)
[2:22] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:24] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[2:25] * SinZ|offline (~kalmisto@5NZAABRW0.tor-irc.dnsbl.oftc.net) Quit ()
[2:25] * puvo (~Sirrush@85.25.148.130) has joined #ceph
[2:27] * davidz (~davidz@2605:e000:1313:8003:dcab:185d:7b16:d24c) has joined #ceph
[2:35] <achieva> how can i check size of each pg?
[2:36] <achieva> it looks like 1MB ... right?
[2:40] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:41] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[2:43] * daniel2_ (~daniel@12.164.168.117) Quit (Remote host closed the connection)
[2:44] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[2:44] * daniel2__ (~daniel@12.164.168.117) has joined #ceph
[2:44] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[2:44] * daniel2_ (~daniel@12.164.168.117) Quit ()
[2:49] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) has joined #ceph
[2:51] <fxmulder> what's the best way to recover from disk failure on an osd? I've rebuilt the disk but its empty so I need to get it to pull data from the other replica
[2:55] * puvo (~Sirrush@5NZAABRX3.tor-irc.dnsbl.oftc.net) Quit ()
[2:55] * puvo (~rapedex@207.201.223.197) has joined #ceph
[2:58] * daniel2__ (~daniel@12.164.168.117) Quit (Remote host closed the connection)
[3:01] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[3:02] * Hnaf (~Hnaf@198.60.31.75) Quit (Ping timeout: 480 seconds)
[3:05] * daniel2_ (~daniel@12.164.168.117) Quit (Remote host closed the connection)
[3:05] <Nats_> fxmulder, add it as a new osd and delete the old one
[3:06] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:08] <fxmulder> Nats_: tried that, it says it created the osd but ceph status only shows 1 osd still
[3:12] <fxmulder> ceph-deploy osd create gs5:/gluster/ceph
[3:12] <fxmulder> returns [ceph_deploy.osd][DEBUG ] Host gs5 is now ready for osd use.
[3:16] <Nats_> cant offer any further advice sorry, what you pasted is all i ever do to add a disk
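
For reference, the usual full replacement sequence in this era - the old osd id has to be purged before the new disk can take its place (osd.3 is a placeholder; fxmulder's ceph-deploy line then becomes the last step):

    ceph osd out 3                # if the cluster hasn't already marked it out
    # stop the dead daemon on its host, then remove every trace of the old id:
    ceph osd crush remove osd.3
    ceph auth del osd.3
    ceph osd rm 3
    # finally prepare the replacement:
    ceph-deploy osd create gs5:/gluster/ceph
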
[3:17] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[3:17] * daniel2_ (~daniel@12.164.168.117) Quit ()
[3:19] * B_Rake__ (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[3:20] <fxmulder> would be nice if the osd was writing anything to its log files
[3:21] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:25] * puvo (~rapedex@5NZAABRZB.tor-irc.dnsbl.oftc.net) Quit ()
[3:25] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) Quit (Remote host closed the connection)
[3:28] * kefu (~kefu@114.92.111.70) has joined #ceph
[3:30] * kefu (~kefu@114.92.111.70) Quit ()
[3:34] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[3:37] * ircolle (~Adium@2601:1:a580:1735:d02c:4dd6:bb0f:2b20) Quit (Quit: Leaving.)
[3:40] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[3:40] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[3:41] * tk12 (~tk12@c-107-3-156-164.hsd1.ca.comcast.net) has joined #ceph
[3:42] * vjujjuri (~chatzilla@204.14.239.105) Quit (Ping timeout: 480 seconds)
[3:45] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[3:50] * tupper_ (~tcole@rtp-isp-nat-pool1-2.cisco.com) Quit (Ping timeout: 480 seconds)
[3:52] * tk12 (~tk12@c-107-3-156-164.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[3:52] * root4 (~root@p57B2F532.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:54] * zhaochao (~zhaochao@111.161.77.236) has joined #ceph
[3:54] * daniel2_ (~daniel@12.164.168.117) Quit (Remote host closed the connection)
[3:55] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[3:55] * kalmisto (~poller@5.79.68.161) has joined #ceph
[3:58] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) has joined #ceph
[4:04] * reed (~reed@2602:244:b653:6830:bc97:e4a5:9d73:7630) Quit (Ping timeout: 480 seconds)
[4:14] * root4 (~root@p57B2F81C.dip0.t-ipconnect.de) has joined #ceph
[4:22] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) has joined #ceph
[4:25] * kalmisto (~poller@5NZAABR1G.tor-irc.dnsbl.oftc.net) Quit ()
[4:25] * Mika_c (~quassel@122.146.93.152) has joined #ceph
[4:28] * sankarshan (~sankarsha@121.244.87.117) Quit (Ping timeout: 480 seconds)
[4:34] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) has joined #ceph
[4:34] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) Quit (Remote host closed the connection)
[4:39] * mtanski (~mtanski@65.244.82.98) Quit (Ping timeout: 480 seconds)
[4:40] * mtanski (~mtanski@65.244.82.98) has joined #ceph
[4:45] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[4:52] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) Quit (Quit: Leaving)
[4:55] * offender (~PcJamesy@hessel0.torservers.net) has joined #ceph
[5:03] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[5:05] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[5:09] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) has joined #ceph
[5:14] * zhithuang (~zhithuang@202.76.244.5) has joined #ceph
[5:24] * mattrich (~Adium@38.108.161.130) Quit (Quit: Leaving.)
[5:25] * offender (~PcJamesy@3OZAAA7VW.tor-irc.dnsbl.oftc.net) Quit ()
[5:25] * Vacuum_ (~vovo@88.130.200.73) has joined #ceph
[5:25] * CoMa (~Maariu5_@tor-exit.server9.tvdw.eu) has joined #ceph
[5:28] * imjustmatthew (~imjustmat@pool-108-4-98-95.rcmdva.fios.verizon.net) Quit (Read error: Connection reset by peer)
[5:30] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:31] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:32] * Vacuum (~vovo@88.130.204.52) Quit (Ping timeout: 480 seconds)
[5:34] * kefu (~kefu@114.92.111.70) has joined #ceph
[5:41] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) has joined #ceph
[5:47] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:51] * vata (~vata@cable-21.246.173-197.electronicbox.net) Quit (Quit: Leaving.)
[5:55] * CoMa (~Maariu5_@5NZAABR5I.tor-irc.dnsbl.oftc.net) Quit ()
[5:55] * Sketchfile (~aleksag@india012.server4you.net) has joined #ceph
[6:02] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:02] * zhithuang is now known as winston-d_
[6:04] * Kupo1 (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[6:06] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:10] * lalatenduM (~lalatendu@122.171.77.204) has joined #ceph
[6:16] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:16] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[6:17] * sherlocked (~watson@14.139.82.6) has joined #ceph
[6:25] * Sketchfile (~aleksag@98EAABBYT.tor-irc.dnsbl.oftc.net) Quit ()
[6:25] * DougalJacobs (~arsenaali@98EAABBZC.tor-irc.dnsbl.oftc.net) has joined #ceph
[6:35] * amote (~amote@121.244.87.116) has joined #ceph
[6:42] * haomaiwang (~haomaiwan@114.111.166.249) has joined #ceph
[6:43] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[6:46] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[6:47] * lalatenduM (~lalatendu@122.171.77.204) Quit (Ping timeout: 480 seconds)
[6:55] * DougalJacobs (~arsenaali@98EAABBZC.tor-irc.dnsbl.oftc.net) Quit ()
[6:55] * ricin (~utugi____@176.10.99.208) has joined #ceph
[7:01] * pavera (~tomc@192.41.52.12) has joined #ceph
[7:08] * mykola (~Mikolaj@91.225.202.153) has joined #ceph
[7:08] * rdas (~rdas@110.227.47.22) has joined #ceph
[7:10] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[7:13] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[7:16] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[7:18] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:25] * ricin (~utugi____@98EAABB0A.tor-irc.dnsbl.oftc.net) Quit ()
[7:25] * AluAlu (~Jyron@marcuse-1.nos-oignons.net) has joined #ceph
[7:25] * B_Rake (~B_Rake@2605:a601:5b9:dd01:ed1c:865f:b3:5205) has joined #ceph
[7:30] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[7:35] * daniel2_ (~daniel@12.0.207.18) has joined #ceph
[7:39] * B_Rake (~B_Rake@2605:a601:5b9:dd01:ed1c:865f:b3:5205) Quit (Remote host closed the connection)
[7:49] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[7:51] * karnan (~karnan@106.51.135.5) has joined #ceph
[7:53] * kefu (~kefu@114.92.111.70) has joined #ceph
[7:53] * oro (~oro@209.249.118.57) has joined #ceph
[7:54] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[7:55] * AluAlu (~Jyron@2WVAABO7Z.tor-irc.dnsbl.oftc.net) Quit ()
[7:56] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:56] * Arcturus (~N3X15@edwardsnowden2.torservers.net) has joined #ceph
[8:04] * rulo (~Thunderbi@183.38.151.45) has joined #ceph
[8:05] * winston-d_ (~zhithuang@202.76.244.5) Quit (Ping timeout: 480 seconds)
[8:08] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[8:08] * overclk (~overclk@121.244.87.117) has joined #ceph
[8:09] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[8:09] * kefu (~kefu@114.92.111.70) has joined #ceph
[8:12] * cok (~chk@2a02:2350:18:1010:cd63:9b30:3967:545f) has joined #ceph
[8:14] * shang (~ShangWu@175.41.48.77) has joined #ceph
[8:15] * oro (~oro@209.249.118.57) Quit (Ping timeout: 480 seconds)
[8:16] * oro (~oro@209.249.118.68) has joined #ceph
[8:17] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[8:19] * ajazdzewski (~ajazdzews@2001:4dd0:ae29:1:bc92:a888:1bb7:5ff9) has joined #ceph
[8:21] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[8:22] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:22] * shohn (~shohn@dslb-178-008-023-209.178.008.pools.vodafone-ip.de) has joined #ceph
[8:23] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) Quit (Quit: p66kumar)
[8:23] <pavera> anyone around?
[8:25] * Arcturus (~N3X15@5NZAABSCC.tor-irc.dnsbl.oftc.net) Quit ()
[8:25] * oro (~oro@209.249.118.68) Quit (Ping timeout: 480 seconds)
[8:27] * pavera (~tomc@192.41.52.12) Quit (Quit: pavera)
[8:31] * derjohn_mob (~aj@tmo-112-62.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[8:35] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[8:37] * cok (~chk@2a02:2350:18:1010:cd63:9b30:3967:545f) Quit (Quit: Leaving.)
[8:40] * cok (~chk@2a02:2350:18:1010:a02b:a396:36c1:6428) has joined #ceph
[8:43] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit (Quit: He who laughs last, thinks slowest)
[8:44] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:47] <anorak> hi
[8:48] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[8:49] <stalob> hi
[8:50] * daniel2_ (~daniel@12.0.207.18) Quit (Remote host closed the connection)
[8:50] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) has joined #ceph
[8:55] * Kottizen (~Jamana@tor-exit0-readme.dfri.se) has joined #ceph
[8:57] * trociny (~mgolub@93.183.239.2) Quit (Ping timeout: 480 seconds)
[8:59] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Read error: Connection reset by peer)
[9:00] * shohn (~shohn@dslb-178-008-023-209.178.008.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[9:03] * winston-d_ (~zhithuang@202.76.244.5) has joined #ceph
[9:05] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:12] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[9:15] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:16] * dopesong_ (~dopesong@lb0.mailer.data.lt) has joined #ceph
[9:18] * analbeard (~shw@support.memset.com) has joined #ceph
[9:22] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[9:25] * Kottizen (~Jamana@2WVAABPB0.tor-irc.dnsbl.oftc.net) Quit ()
[9:25] * Kalado (~WedTM@exit2.failwhale.org) has joined #ceph
[9:26] * lovejoy (~lovejoy@cpc69388-oxfd28-2-0-cust415.4-3.cable.virginm.net) has joined #ceph
[9:27] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[9:27] * dgurtner (~dgurtner@178.197.231.76) has joined #ceph
[9:34] * ajazdzewski (~ajazdzews@2001:4dd0:ae29:1:bc92:a888:1bb7:5ff9) Quit (Ping timeout: 480 seconds)
[9:38] * fsimonce (~simon@host178-188-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[9:39] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[9:41] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[9:42] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[9:44] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[9:46] * lovejoy (~lovejoy@cpc69388-oxfd28-2-0-cust415.4-3.cable.virginm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[9:46] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[9:49] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[9:51] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[9:55] * Kalado (~WedTM@98EAABB5J.tor-irc.dnsbl.oftc.net) Quit ()
[9:55] * Xylios (~DJComet@tor-exit.xshells.net) has joined #ceph
[9:56] * tmh_ (~e@loophole.cc) Quit (Ping timeout: 480 seconds)
[10:00] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[10:01] * kefu (~kefu@114.92.111.70) has joined #ceph
[10:04] * lovejoy (~lovejoy@57519dc8.skybroadband.com) has joined #ceph
[10:13] * shohn (~shohn@dslb-178-008-023-209.178.008.pools.vodafone-ip.de) has joined #ceph
[10:16] * rendar (~I@host13-180-dynamic.23-79-r.retail.telecomitalia.it) has joined #ceph
[10:19] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:20] * thomnico (~thomnico@AToulouse-654-1-289-55.w86-199.abo.wanadoo.fr) has joined #ceph
[10:25] * Xylios (~DJComet@98EAABB6W.tor-irc.dnsbl.oftc.net) Quit ()
[10:25] * Aramande_ (~Catsceo@195.169.125.226) has joined #ceph
[10:26] * shohn (~shohn@dslb-178-008-023-209.178.008.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[10:27] * vbellur (~vijay@122.172.194.57) Quit (Ping timeout: 480 seconds)
[10:27] * ajazdzewski (~ajazdzews@lpz-66.sprd.net) has joined #ceph
[10:28] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[10:30] * karnan (~karnan@106.51.135.5) Quit (Ping timeout: 480 seconds)
[10:33] * Miouge_ (~Miouge@94.136.92.20) has joined #ceph
[10:36] * Miouge (~Miouge@94.136.92.20) Quit (Ping timeout: 480 seconds)
[10:36] * Miouge_ is now known as Miouge
[10:40] * karnan (~karnan@106.51.235.143) has joined #ceph
[10:41] * vbellur (~vijay@122.167.118.14) has joined #ceph
[10:42] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[10:47] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[10:49] * kefu (~kefu@114.92.111.70) has joined #ceph
[10:51] * ajazdzewski (~ajazdzews@lpz-66.sprd.net) Quit (Ping timeout: 480 seconds)
[10:51] * Pablo (~pcaruana@nat-pool-brq-t.redhat.com) has joined #ceph
[10:51] * Pablo is now known as pcaruana
[10:53] * ajazdzewski (~ajazdzews@lpz-66.sprd.net) has joined #ceph
[10:53] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:55] * Aramande_ (~Catsceo@2WVAABPGM.tor-irc.dnsbl.oftc.net) Quit ()
[10:56] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[10:56] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[10:57] * kefu (~kefu@114.92.111.70) has joined #ceph
[10:57] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit ()
[10:58] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[11:02] * vbellur (~vijay@122.167.118.14) Quit (Ping timeout: 480 seconds)
[11:04] * rulo (~Thunderbi@183.38.151.45) Quit (Quit: rulo)
[11:05] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:09] * kefu is now known as kefu|afk
[11:13] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[11:16] * winston-d_ (~zhithuang@202.76.244.5) Quit (Ping timeout: 480 seconds)
[11:16] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:18] * vbellur (~vijay@122.166.95.248) has joined #ceph
[11:19] * winston-d_ (~zhithuang@58.33.47.14) has joined #ceph
[11:21] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[11:24] * kefu|afk (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[11:25] * kefu (~kefu@114.92.111.70) has joined #ceph
[11:29] <qstion> Will it be possible to have multiple cephfs pools in the future?
[11:30] * winston-d_ (~zhithuang@58.33.47.14) Quit (Ping timeout: 480 seconds)
[11:39] * cok (~chk@2a02:2350:18:1010:a02b:a396:36c1:6428) Quit (Quit: Leaving.)
[11:39] <joelm> qstion: I'd imagine so, the tool has been altered to function in that way
[11:39] <joelm> only one pool may exist atm though
[11:40] <qstion> cool
[11:41] <T1w> what about a caching pool and an erasure encoded pool?
[11:41] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[11:42] * kefu (~kefu@114.92.111.70) has joined #ceph
[11:42] <joelm> T1w: yea, think that's possible now, but only for the given pools attached to cephfs
[11:42] <T1w> ok
[11:42] <joelm> you can mount inside cephfs though, so you can partition in that way
[11:43] <joelm> but if you want finer grained control of caps, I assume it's not there yet
[11:44] <T1w> yeah, but it'll come at some point
[11:44] <joelm> yea, think so, tools been made to reflect that at least :)
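
As a hedged sketch of the cache-over-erasure-coded arrangement T1w asks about, using the firefly/hammer-era tiering commands (pool names and pg counts are placeholders):

    ceph osd pool create ecpool 128 128 erasure
    ceph osd pool create cachepool 128 128
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool
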
[11:51] * sherlocked (~watson@14.139.82.6) Quit (Quit: Leaving)
[11:52] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[11:52] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[11:58] * winston-d_ (~zhithuang@58.33.47.14) has joined #ceph
[12:00] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) has joined #ceph
[12:08] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) has joined #ceph
[12:08] <hoo> hi
[12:08] <hoo> have r715 dells
[12:09] <hoo> with hw raid controller h700 only
[12:09] <hoo> no jbod mode possible
[12:09] * mykola (~Mikolaj@91.225.202.153) Quit (Remote host closed the connection)
[12:09] <hoo> is it ok to configure each disk as single drive raid0?
[12:11] <JarekO> yes it is ok
[12:13] <hoo> JarekO: thx, then i will do it that way
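
A hedged sketch of how that is commonly done on LSI-based controllers like the H700, via MegaCli (enclosure:slot addresses are placeholders; list them first, then create one single-drive RAID0 per physical disk):

    MegaCli -PDList -aALL | grep -E 'Enclosure Device ID|Slot Number'
    MegaCli -CfgLdAdd -r0 [32:0] -a0
    MegaCli -CfgLdAdd -r0 [32:1] -a0
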
[12:13] * winston-1_ (~zhithuang@202.76.244.5) has joined #ceph
[12:14] * winston-d_ (~zhithuang@58.33.47.14) Quit (Ping timeout: 480 seconds)
[12:22] * winston-1_ (~zhithuang@202.76.244.5) Quit (Read error: Connection reset by peer)
[12:23] * winston-d_ (~zhithuang@202.76.244.5) has joined #ceph
[12:25] * Curt` (~LRWerewol@95.128.43.164) has joined #ceph
[12:42] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[12:45] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[12:49] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[12:49] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[12:51] * vbellur (~vijay@122.166.95.248) Quit (Ping timeout: 480 seconds)
[12:55] * Curt` (~LRWerewol@2WVAABPLL.tor-irc.dnsbl.oftc.net) Quit ()
[13:00] <Kvisle> scratching my head when trying to get ceph object gateway up and running ... I have a ceph cluster that to me seem to work fine, and I'm able to create users with radosgw-admin ... but when a user wants to create a bucket, it throws 405 method not allowed on the PUT-operation
[13:00] <Kvisle> verified that the traffic goes through to the radosgw
[13:03] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:03] <JarekO> Kvisle: DNS issue ?
[13:04] * hellertime (~Adium@72.246.0.14) has joined #ceph
[13:05] <Kvisle> JarekO: the dns entries both for the s3-endpoint, and the wildcard-entry is there and resolves correctly
[13:06] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[13:08] <JarekO> Kvisle: which client you are using?
[13:08] <Kvisle> dragondisk
[13:10] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[13:11] * vbellur (~vijay@122.167.124.153) has joined #ceph
[13:11] <Kvisle> JarekO: same behaviour with s3cmd
[13:12] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[13:13] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:14] * zhaochao (~zhaochao@111.161.77.236) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 31.6.0/20150331233809])
[13:16] <JarekO> Kvisle: can you paste ceph.conf related to [client.radosgw.name]?
[13:17] * kawa2014 (~kawa@212.110.41.244) Quit (Ping timeout: 480 seconds)
[13:17] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[13:18] * winston-d_ (~zhithuang@202.76.244.5) Quit (Ping timeout: 480 seconds)
[13:19] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:21] * lovejoy (~lovejoy@57519dc8.skybroadband.com) Quit (Quit: Textual IRC Client: www.textualapp.com)
[13:23] * lovejoy (~lovejoy@213.83.69.6) has joined #ceph
[13:23] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[13:25] * rogst (~Azru@tor-exit.server11.tvdw.eu) has joined #ceph
[13:27] <Kvisle> JarekO: https://gist.github.com/kvisle/5542b589b8e12aa1aade
[13:35] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[13:37] * analbeard (~shw@support.memset.com) has joined #ceph
[13:37] * capri_on (~capri@212.218.127.222) has joined #ceph
[13:39] * kefu (~kefu@114.92.111.70) has joined #ceph
[13:41] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[13:43] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[13:43] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:45] <Kvisle> I managed to create a bucket with boto, but I get access denied when I attempt to access it
[13:46] <Kvisle> maybe it's the headers
[13:46] <anorak> Hi everyone. Just a small sanity check. I have a rbd pool and just allocated a block device of 2 TB. Although the client is not writing anything, I am seeing via "ceph -w" that some operations are going on and the used space is increasing. If I had to take an educated guess, I would say that ceph is allocating the 2TB block device in the backend regardless of whether the client is writing anything to it or not. Is that the case?
[13:47] * zack_dolby (~textual@e0109-114-22-11-74.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[13:50] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[13:50] * kefu (~kefu@114.92.111.70) has joined #ceph
[13:55] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[13:55] * rogst (~Azru@98EAABCD7.tor-irc.dnsbl.oftc.net) Quit ()
[13:55] * mollstam (~jacoo@85.25.9.11) has joined #ceph
[13:57] * karnan (~karnan@106.51.235.143) Quit (Ping timeout: 480 seconds)
[14:01] <anorak> anyone? :)
[14:07] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:12] * kefu (~kefu@114.92.111.70) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:12] <stalob> no
[14:14] * i_m (~ivan.miro@deibp9eh1--blueice3n2.emea.ibm.com) has joined #ceph
[14:20] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[14:23] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[14:24] <m0zes> rbd volumes are sparse.
[14:25] * mollstam (~jacoo@98EAABCE7.tor-irc.dnsbl.oftc.net) Quit ()
[14:29] <anorak> m0zes: are you referring to me?
[14:29] <m0zes> yes.
[14:29] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[14:29] <anorak> m0zes: oh ok. Thanks! Can you please elaborate a bit?
[14:31] <m0zes> rbd volumes should only take up as much space as the data that is written to them. data that is removed from the volume can be reclaimed, but generally only if the filesystem supports TRIM (like for SSDs)
[14:32] <anorak> but then the question becomes why the data usage is increasing, with some operations going on in the backend, when no data is being written to the rbd volumes....
[14:35] <m0zes> 'ceph df detail' shows the pool space used increasing?
[14:35] * shohn (~shohn@dslb-178-008-023-209.178.008.pools.vodafone-ip.de) has joined #ceph
[14:37] <m0zes> are your journals on raw partions (separate from osd filesystem)?
[14:37] * tupper_ (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[14:37] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) Quit (Quit: Connection closed for inactivity)
[14:38] <anorak> yes. It has stopped now. But the data usage overall is bugging me. So far I have two rbd volumes. One is allocated 100 GB and the second 2 TB. RAW USED is 268G and rbd pool USED is 85752M
[14:38] <anorak> no....journals are not separate
[14:39] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[14:39] <anorak> replication factor is 2
[14:40] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[14:40] <m0zes> I think that journals that are on the same filesystem as the osds contribute to an increased "RAW USED"
[14:41] <m0zes> you simply created the two rbd disks, you have mapped them and put a filesystem on them?
[14:41] <anorak> yes
[14:41] <anorak> for argument's sake...if data usage is approx. 83 GB....with replication 2...it becomes 166GB....so the RAW storage is reporting 256GB which means 90GB is being used by the journals?
[14:46] * tupper_ (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[14:47] <m0zes> the other thing that *might* be contributing to it, is default filesystem initial usage. I am trying to get an estimation on that now.
[14:48] <anorak> that is 33M per OSD
[14:48] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[14:49] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:50] <m0zes> not sure what contributes to that extra usage, then.
[14:50] <anorak> I have also checked the actual data usage in both volumes....it is not more than 52 GB right now
[14:51] <anorak> perhaps it is the file system itself underneath (.i.e. on the OSD) which is xfs
[14:51] * sjm (~sjm@pool-173-70-76-86.nwrknj.fios.verizon.net) has joined #ceph
[14:54] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[14:55] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[14:56] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) has joined #ceph
[14:57] * shohn (~shohn@dslb-178-008-023-209.178.008.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[14:58] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) has joined #ceph
[14:59] <m0zes> the overhead for just creating an xfs filesystem on a 2TB rbd image (at least in my pool) was about 1.5G. I am willing to bet some of that space was simply zeroed out and could be properly used by the rbd volume, though.
[15:00] <anorak> hmm...ok....thanks. I will dig deeper though
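
For anyone digging into the same question, a commonly used sketch for measuring what an image actually occupies - rbd diff lists the allocated extents and awk sums their lengths (pool/image names are placeholders):

    rbd diff rbd/myimage | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
    ceph df detail    # pool-level view of the same question
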
[15:00] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[15:00] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) has joined #ceph
[15:04] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[15:05] <Kvisle> JarekO: I figured it out, I had to add rgw dns name
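
For anyone hitting the same 405 on bucket creation: the setting Kvisle means is rgw dns name, which has to match the hostname clients embed in bucket-style requests. A sketch with a placeholder domain (the matching wildcard DNS record is needed too, so bucketname.s3.example.com resolves to the gateway):

    [client.radosgw.gateway]
        rgw dns name = s3.example.com
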
[15:05] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[15:05] * cok (~chk@nat-cph1-sys.net.one.com) has joined #ceph
[15:05] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[15:06] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[15:06] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[15:09] * vbellur (~vijay@122.167.124.153) Quit (Ping timeout: 480 seconds)
[15:09] * rdas (~rdas@110.227.47.22) Quit (Quit: Leaving)
[15:09] * kawa2014 (~kawa@212.110.41.244) Quit ()
[15:10] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[15:12] * kefu (~kefu@114.92.111.70) has joined #ceph
[15:13] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[15:13] * dgurtner_ (~dgurtner@178.197.231.90) has joined #ceph
[15:14] * dgurtner (~dgurtner@178.197.231.76) Quit (Ping timeout: 480 seconds)
[15:15] * cok (~chk@nat-cph1-sys.net.one.com) Quit (Quit: Leaving.)
[15:17] * wicope_ (~wicope@76.Red-83-60-55.dynamicIP.rima-tde.net) has joined #ceph
[15:17] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[15:20] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[15:20] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:23] * vbellur (~vijay@122.178.242.128) has joined #ceph
[15:25] * Spessu (~smf68@chulak.enn.lu) has joined #ceph
[15:25] * MACscr (~Adium@2601:d:c800:de3:100c:9116:fdbc:1f5d) Quit (Quit: Leaving.)
[15:28] <Kvisle> are there any reasons to not have multiple rados gateways working actively together against the same cluster?
[15:28] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:29] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) has joined #ceph
[15:30] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[15:31] * danieagle (~Daniel@177.138.223.106) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[15:36] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[15:37] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[15:37] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:37] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[15:37] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[15:38] * evilrob0_ (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) Quit (Remote host closed the connection)
[15:42] * linuxkidd (~linuxkidd@vpngac.ccur.com) has joined #ceph
[15:51] * thomnico (~thomnico@AToulouse-654-1-289-55.w86-199.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[15:52] * trociny (~mgolub@93.183.239.2) has joined #ceph
[15:54] * ajazdzewski (~ajazdzews@lpz-66.sprd.net) Quit (Quit: Konversation terminated!)
[15:55] * Spessu (~smf68@5NZAABSVM.tor-irc.dnsbl.oftc.net) Quit ()
[15:55] * winston-d_ (~zhithuang@58.33.47.14) has joined #ceph
[15:57] * PerlStalker (~PerlStalk@162.220.127.20) has joined #ceph
[15:57] * debian112 (~bcolbert@12.44.85.98) has joined #ceph
[15:58] * nathharp (~nathharp@host-94-175-243-34.not-set-yet.virginmedia.net) has joined #ceph
[16:02] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[16:04] * winston-d_ (~zhithuang@58.33.47.14) Quit (Ping timeout: 480 seconds)
[16:05] * winston-d_ (~zhithuang@202.76.244.5) has joined #ceph
[16:06] <cetex> ceph should offer "reduced" fault-tolerance mode as an option. :)
[16:07] <cetex> write one copy to disk synchronously, the other two could be stored in ramdisk and flushed later.. :)
[16:08] <cetex> primary osd for a pg would write it to journal, the other osd's for the pg would write it while allowing page cache.
[16:08] <cetex> *let the kernel manage when to write*
[16:08] * dopesong_ (~dopesong@lb0.mailer.data.lt) Quit (Ping timeout: 480 seconds)
[16:08] <debian112> I went to upgrade ceph and I get this: http://paste.debian.net/167316/
[16:09] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[16:09] <debian112> any idea?
[16:10] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[16:10] <debian112> I am sure it's more a centos package dependency issue than ceph
[16:12] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[16:17] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[16:20] * vata (~vata@208.88.110.46) has joined #ceph
[16:24] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[16:27] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:28] * tk12 (~tk12@68.140.239.132) has joined #ceph
[16:29] * tk12 (~tk12@68.140.239.132) Quit (Remote host closed the connection)
[16:29] * tk12 (~tk12@68.140.239.132) has joined #ceph
[16:31] <nathharp> I've seen that problem - which version of centos are you on, and have you got EPEL enabled?
[16:32] * kefu (~kefu@114.92.111.70) Quit (Max SendQ exceeded)
[16:33] * kefu (~kefu@114.92.111.70) has joined #ceph
[16:33] * lalatenduM (~lalatendu@122.172.162.182) has joined #ceph
[16:33] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[16:36] <JarekO> Kvisle: there are no reasons IMO
[16:37] <nathharp> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg18747.html - covers why, and how it can be resolved
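
The gist of that fix, hedged as the usual CentOS 7 approach (on CentOS 6 the epel-release rpm is installed from the EPEL mirror instead): some of ceph's dependencies live in EPEL, so enable it before updating.

    yum install epel-release
    yum update ceph
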
[16:37] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Read error: Connection reset by peer)
[16:38] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[16:47] <nathharp> Any suggestions as to why a brand new cluster (single monitor, 4 OSDs on two hosts) won't replicate
[16:47] <nathharp> default size set to 1, and all healthy
[16:48] <nathharp> set pool size to 2, and the PGs aren't replicating
[16:48] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[16:49] * ircolle (~Adium@2601:1:a580:1735:d02c:4dd6:bb0f:2b20) has joined #ceph
[16:55] * darks (~MonkeyJam@politkovskaja.torservers.net) has joined #ceph
[16:56] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[17:07] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) has joined #ceph
[17:08] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:11] * ksperis (~ksperis@46.218.42.103) Quit (Quit: Quitte)
[17:12] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[17:14] * p66kumar (~p66kumar@c-67-188-232-183.hsd1.ca.comcast.net) has joined #ceph
[17:19] <joelm> nathharp: you want a size larger than 1
[17:19] <joelm> and min_size even
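
Concretely, a sketch of the pool settings joelm refers to ('rbd' stands in for the pool name):

    ceph osd pool set rbd size 2        # keep two copies of each object
    ceph osd pool set rbd min_size 1    # still serve I/O with one copy during recovery
    ceph osd pool get rbd size         # verify
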
[17:21] <Vivek> loicd: ping.
[17:22] * evilrob00 (~evilrob00@128.107.241.183) has joined #ceph
[17:23] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[17:25] * darks (~MonkeyJam@98EAABCLU.tor-irc.dnsbl.oftc.net) Quit ()
[17:26] * DoDzy (~Pieman@tor-exit.server9.tvdw.eu) has joined #ceph
[17:28] * nathharp (~nathharp@host-94-175-243-34.not-set-yet.virginmedia.net) Quit (Quit: nathharp)
[17:30] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:30] * ajazdzewski (~ajazdzews@2001:4dd0:ae29:1:c4ce:dd34:bc4b:9a93) has joined #ceph
[17:31] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:34] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:36] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:36] * lalatenduM (~lalatendu@122.172.162.182) Quit (Read error: Connection reset by peer)
[17:37] * shaunm (~shaunm@74.215.76.114) Quit (Ping timeout: 480 seconds)
[17:38] * lalatenduM (~lalatendu@122.172.98.5) has joined #ceph
[17:45] * MACscr (~Adium@2601:d:c800:de3:f172:697:7fc7:ab8b) has joined #ceph
[17:46] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) has joined #ceph
[17:47] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[17:48] * kefu (~kefu@114.92.111.70) Quit (Quit: Textual IRC Client: www.textualapp.com)
[17:49] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[17:51] <cholcombe> ceph: anyone know which tag i should pull on calamari as stable?
[17:51] * dgurtner_ (~dgurtner@178.197.231.90) Quit (Ping timeout: 480 seconds)
[17:53] * alram (~alram@206.169.83.146) has joined #ceph
[17:54] * ircolle (~Adium@2601:1:a580:1735:d02c:4dd6:bb0f:2b20) Quit (Quit: Leaving.)
[17:55] * DoDzy (~Pieman@98EAABCMN.tor-irc.dnsbl.oftc.net) Quit ()
[18:01] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[18:04] * sleinen (~Adium@ext-nat-254.eduroam.unibe.ch) has joined #ceph
[18:06] * sleinen1 (~Adium@2001:620:0:82::103) has joined #ceph
[18:10] * pcaruana (~pcaruana@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving)
[18:13] * sleinen (~Adium@ext-nat-254.eduroam.unibe.ch) Quit (Ping timeout: 480 seconds)
[18:16] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[18:16] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:17] * evilrob00 (~evilrob00@128.107.241.183) Quit (Remote host closed the connection)
[18:17] * RayTracer (~RayTracer@153.19.7.39) has joined #ceph
[18:20] * subscope (~subscope@92-249-244-167.pool.digikabel.hu) has joined #ceph
[18:22] * bandrus (~brian@128.sub-70-211-79.myvzw.com) has joined #ceph
[18:24] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:26] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:28] * ajazdzewski_ (~ajazdzews@2001:4dd0:ae29:1:c4ce:dd34:bc4b:9a93) has joined #ceph
[18:28] * ajazdzewski (~ajazdzews@2001:4dd0:ae29:1:c4ce:dd34:bc4b:9a93) Quit (Read error: Connection reset by peer)
[18:30] * Mousey (~TheDoudou@tor.piratenpartei-nrw.de) has joined #ceph
[18:30] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[18:30] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) has joined #ceph
[18:35] <litwol> food for thought: Digital storage as a public utility.
[18:35] * lalatenduM (~lalatendu@122.172.98.5) Quit (Ping timeout: 480 seconds)
[18:35] <litwol> imagine if there was a central namespace for a pool. anyone can connect to
[18:35] <litwol> add drives etc etc
[18:35] <litwol> all over the globe
[18:35] <litwol> :-D
[18:36] <litwol> would be nice if personal access could be encrypted and quality enforced with quotas
[18:36] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[18:36] <litwol> etc etc
[18:36] <litwol> one can dream.
[18:36] * ajazdzewski__ (~ajazdzews@p200300406E249400C4CEDD34BC4B9A93.dip0.t-ipconnect.de) has joined #ceph
[18:36] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) has joined #ceph
[18:38] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[18:38] * rotbeard (~redbeard@aftr-95-222-27-149.unity-media.net) has joined #ceph
[18:39] * RayTracer (~RayTracer@153.19.7.39) Quit (Quit: Leaving...)
[18:39] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[18:40] * ChrisHolcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) has joined #ceph
[18:40] * ajazdzewski_ (~ajazdzews@2001:4dd0:ae29:1:c4ce:dd34:bc4b:9a93) Quit (Ping timeout: 480 seconds)
[18:43] * jeff-YF (~jeffyf@65.242.59.114) has joined #ceph
[18:45] * lalatenduM (~lalatendu@122.171.71.146) has joined #ceph
[18:46] * daniel2_ (~daniel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[18:46] * cholcombe (~chris@pool-108-42-125-114.snfcca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[18:47] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:51] * jeff-YF (~jeffyf@65.242.59.114) Quit (Ping timeout: 480 seconds)
[18:51] * alram (~alram@206.169.83.146) Quit (Quit: Lost terminal)
[18:56] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: Leaving)
[18:57] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[18:58] * i_m (~ivan.miro@deibp9eh1--blueice3n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[18:59] * Mousey (~TheDoudou@5NZAABS41.tor-irc.dnsbl.oftc.net) Quit ()
[19:00] * mollstam (~Eman@tor-exit.server7.tvdw.eu) has joined #ceph
[19:01] * nathharp (~nathharp@90.200.89.199) has joined #ceph
[19:02] * winston-d_ (~zhithuang@202.76.244.5) has left #ceph
[19:04] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:07] <loicd> winston-d: I don't think the cluster name character range is a hard requirement. But deviating from it will probably trigger some parsing problems where scripts assume this restriction.
[19:08] <loicd> the easy way is to not use - in the name of the cluster
[19:08] <loicd> the harder way is to relax this restriction by modifying the tools (like calamari it seems) that won't tolerate a - in the cluster name
[19:10] * ajazdzewski__ (~ajazdzews@p200300406E249400C4CEDD34BC4B9A93.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[19:11] * winston-d_ (~zhithuang@202.76.244.5) has joined #ceph
[19:11] * nathharp (~nathharp@90.200.89.199) Quit (Quit: nathharp)
[19:12] * alram (~alram@206.169.83.146) has joined #ceph
[19:13] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:15] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:16] <winston-d_> loicd: for calamari, I've already filed a bug: http://tracker.ceph.com/issues/11366, but if '-' isn't supposed to be in cluster name, the bug is invalid.
[19:18] <winston-d_> loicd: we want to name the cluster with a convention that encodes Region, AZ and cluster in it, so ideally there should be a delimiter in between, and we choose '-'
[19:19] <winston-d_> loicd: comments?
[19:20] <jcsp1> winston-d_: ignoring the general question of whether we should lock down these names more, you will probably have better luck with underscores with today's code.
[19:21] <loicd> winston-d: relaxing this constraint would require development. But I don't think it's on the roadmap.
[19:22] <winston-d_> jcsp1: thx for tip!
[19:23] <winston-d_> loicd: may i get some background why such constraint is there in 1st place?
[19:24] <winston-d_> i'm trying to understand the reason behind the limitation
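
For context, a sketch of where the cluster name surfaces - it selects the config file and is baked into data paths, which is why unexpected characters trip up tooling (us_east_1 is an illustrative name following jcsp1's underscore suggestion):

    ceph --cluster us_east_1 status
    # corresponding files/paths:
    #   /etc/ceph/us_east_1.conf
    #   /var/lib/ceph/osd/us_east_1-0/
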
[19:29] * mollstam (~Eman@425AAAMWA.tor-irc.dnsbl.oftc.net) Quit ()
[19:30] * Frymaster (~MonkeyJam@162.243.253.118) has joined #ceph
[19:35] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) Quit (Quit: Leaving)
[19:36] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) has joined #ceph
[19:36] * LeaChim (~LeaChim@host86-143-18-67.range86-143.btcentralplus.com) has joined #ceph
[19:43] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[19:45] * debian112 (~bcolbert@12.44.85.98) has left #ceph
[19:46] * sleinen1 (~Adium@2001:620:0:82::103) Quit (Ping timeout: 480 seconds)
[19:47] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:52] * evilrob00 (~evilrob00@cpe-72-179-3-209.austin.res.rr.com) has joined #ceph
[19:58] * puffy (~puffy@216.207.42.129) has joined #ceph
[19:59] * Frymaster (~MonkeyJam@5NZAABS7K.tor-irc.dnsbl.oftc.net) Quit ()
[20:00] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[20:00] * Wizeon (~Hidendra@chomsky.torservers.net) has joined #ceph
[20:10] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[20:23] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[20:29] * linuxkidd (~linuxkidd@vpngac.ccur.com) Quit (Quit: Leaving)
[20:30] * Wizeon (~Hidendra@5NZAABS8N.tor-irc.dnsbl.oftc.net) Quit ()
[20:32] * ajazdzewski__ (~ajazdzews@p200300406E249400C4CEDD34BC4B9A93.dip0.t-ipconnect.de) has joined #ceph
[20:34] * DougalJacobs (~AG_Clinto@exit1.ipredator.se) has joined #ceph
[20:42] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) has joined #ceph
[20:43] <loicd> winston-d: I don't think there is a reason to exclude -
[20:44] * lovejoy (~lovejoy@213.83.69.6) Quit (Ping timeout: 480 seconds)
[20:45] <Vivek> loicd: hey
[20:46] <Vivek> loicd: ignoring me ;)
[20:46] <loicd> Vivek: hey, how are you?
[20:46] <Vivek> I am great.
[20:46] <loicd> Vivek: not at all. I was wondering if you actually work for a company?
[20:46] <Vivek> lol
[20:47] <loicd> or is this Ceph thing a volunteer-based adventure of yours? ;-)
[20:47] * wschulze1 (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) has joined #ceph
[20:47] * winston-d_ (~zhithuang@202.76.244.5) Quit (Ping timeout: 480 seconds)
[20:48] <Vivek> It is a paid job with one of the world's largest networking companies :)
[20:49] <loicd> ah cool
[20:49] <loicd> exclusively working on Ceph?
[20:49] <Vivek> I work with OpenStack and Ceph.
[20:50] * fghaas (~florian@91-119-140-224.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[20:50] <Vivek> loicd: I am also the co-author of the OpenStack Essex Beginner's Guide.
[20:51] <Vivek> loicd: https://cssoss.files.wordpress.com/2012/05/openstackbookv3-0_csscorp2.pdf
[20:52] <Vivek> In fact, I am now integrating ceph with openstack block/swift storage, vcenter, hyperv, xenserver.
[20:52] * vjujjuri (~chatzilla@204.14.239.106) has joined #ceph
[20:52] <loicd> Vivek: I'm also using OpenStack but still using Dumpling. How big is your cluster?
[20:52] <Vivek> VMs launched with the above list should be running on ceph.
[20:53] <Vivek> loicd: I have two 4 node clusters
[20:53] <loicd> same as mine, all my mail on it :-)
[20:54] <loicd> it's late here in France, ttyl, have a nice day/evening !
[20:54] <Vivek> The first cluster has 12 osds, 3 mons and around 48 TB of storage
[20:54] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:54] <Vivek> The second cluster has around 20 osds, 3 mons with 78 TB of storage.
[20:54] <Vivek> ok loicd, thanks for your support :)
[20:55] <Tetard> Vivek: great, looking forward to a guide on CEPH :D
[20:55] <Vivek> Sure, some time I will
[20:55] <loicd> \o/
[20:55] <Vivek> If only I could get some spare time after fulfilling my client's expectations :)
[20:55] <Tetard> I am still working on grasping all the concepts. Started with Hammer and followed the tutorial, but ran into some issues
[20:56] <Vivek> loicd: It's 12.26 AM here and I am still at work :)
[20:56] <Tetard> Got a couple of medium-sized Ganeti clusters, and CEPH is clearly the way to go - but man there seems to be a lot of things to consider before going ahead
[20:57] <Vivek> loicd: did you reply to my mailing list post? hmm, I don't see a reply :)
[20:57] * ircolle (~ircolle@198.202.203.177) has joined #ceph
[20:57] <Vivek> loicd> same as mine, all my mail on it :-)
[20:58] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:58] <Vivek> lol misunderstood that post :)
[20:58] <Vivek> np, bye.
[21:04] * DougalJacobs (~AG_Clinto@2WVAABP8I.tor-irc.dnsbl.oftc.net) Quit ()
[21:04] * galaxyAbstractor (~ZombieTre@72.ip-198-50-145.net) has joined #ceph
[21:10] * georgem (~Adium@fwnat.oicr.on.ca) has left #ceph
[21:15] * lovejoy (~lovejoy@cpc69388-oxfd28-2-0-cust415.4-3.cable.virginm.net) has joined #ceph
[21:18] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[21:20] * lalatenduM (~lalatendu@122.171.71.146) Quit (Quit: Leaving)
[21:26] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) Quit (Quit: Leaving.)
[21:30] * capri_on (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[21:30] * capri_on (~capri@212.218.127.222) has joined #ceph
[21:34] * galaxyAbstractor (~ZombieTre@2WVAABP9X.tor-irc.dnsbl.oftc.net) Quit ()
[21:37] * lovejoy (~lovejoy@cpc69388-oxfd28-2-0-cust415.4-3.cable.virginm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:37] * rotbeard (~redbeard@aftr-95-222-27-149.unity-media.net) Quit (Quit: Leaving)
[21:37] * lovejoy (~lovejoy@cpc69388-oxfd28-2-0-cust415.4-3.cable.virginm.net) has joined #ceph
[21:39] <ktdreyer> scuttlemonkey: what was the new CNAME that you had set up today in relation to the mirrors?
[21:40] <scuttlemonkey> au.ceph.com
[21:40] * subscope (~subscope@92-249-244-167.pool.digikabel.hu) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:40] <scuttlemonkey> ktdreyer: ^
[21:41] <scuttlemonkey> but I'd like something more solid that we can expand on for eu.ceph, au.ceph, and anyone else that volunteers
[21:42] <ktdreyer> cool, ok, thanks
[21:42] <ktdreyer> good to know
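A quick way to verify the new mirror alias from any host, assuming standard dig and curl; the hostname is the one scuttlemonkey quoted above:

    # confirm the CNAME resolves
    dig +short au.ceph.com CNAME
    # and that the mirror answers over HTTP
    curl -I http://au.ceph.com/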
[21:44] * rendar (~I@host13-180-dynamic.23-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:46] * rastro (~rastro@68.140.239.132) has joined #ceph
[21:46] <ChrisHolcombe> ceph: are there any deb packages in the works for calamari?
[21:46] <ChrisHolcombe> if not i might have to take a stab at it
[21:46] * rendar (~I@host13-180-dynamic.23-79-r.retail.telecomitalia.it) has joined #ceph
[21:47] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[21:50] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[21:50] * sherlocked (~watson@14.139.82.6) has joined #ceph
[21:50] * kevinc (~kevinc__@client65-162.sdsc.edu) has joined #ceph
[21:50] <ChrisHolcombe> ceph: nvm i found the deb builder script
[21:51] * sherlocked (~watson@14.139.82.6) Quit ()
[21:52] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) has joined #ceph
[21:54] * Kioob (~Kioob@ALyon-651-1-58-69.w2-3.abo.wanadoo.fr) has joined #ceph
[21:59] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[22:01] * tk12_ (~tk12@68.140.239.132) has joined #ceph
[22:04] * ircolle (~ircolle@198.202.203.177) Quit (Remote host closed the connection)
[22:04] * Frostshifter (~N3X15@marylou.nos-oignons.net) has joined #ceph
[22:06] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[22:08] * tk12 (~tk12@68.140.239.132) Quit (Ping timeout: 480 seconds)
[22:09] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[22:13] * daniel2_ (~daniel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[22:15] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[22:19] * hoo (~hoo@firewall.netconomy.net) Quit (Quit: Leaving)
[22:21] * nathharp (~nathharp@90.200.89.199) has joined #ceph
[22:21] * derjohn_mob (~aj@ip-95-223-126-17.hsi16.unitymediagroup.de) has joined #ceph
[22:22] <nathharp> hi all - can anyone help me troubleshoot a new cluster? 1 mon, 2 nodes each with two OSDs
[22:23] <nathharp> mon on first node
[22:23] <nathharp> however, it doesn't seem to replicate
[22:23] <nathharp> if I set min size to 1 before starting the cluster, the cluster is healthy
[22:23] <nathharp> but if I set the size to 2, I just end up with a degraded cluster
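A hedged sketch of the usual first checks for this symptom, assuming the stock rbd pool and the standard ceph CLI; the pool name and settings are illustrative:

    # verify what the pool is actually set to
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size
    # with size 2, the default CRUSH rule must place replicas on two
    # distinct hosts; confirm both hosts (and all four OSDs) are up and in
    ceph osd tree
    # list the PGs that never reached active+clean and where they map
    ceph pg dump_stuck unclean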
[22:24] <darkfaded> scuttlemonkey: i can put up a mirror too i think, i have one isp customer
[22:24] <darkfaded> query me when it gets to a point where one can do things :)
[22:25] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[22:29] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:30] * harold (~hamiller@71-94-227-43.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[22:33] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[22:34] * Frostshifter (~N3X15@98EAABCVR.tor-irc.dnsbl.oftc.net) Quit ()
[22:35] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[22:38] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:39] * toast (~cryptk@azura.nullbyte.me) has joined #ceph
[22:41] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:50] <scuttlemonkey> darkfaded: will do
[22:51] <scuttlemonkey> can you send an email to pmcgarry@redhat.com so I remember?
[22:53] * hellertime (~Adium@72.246.0.14) Quit (Quit: Leaving.)
[22:57] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:58] <darkfaded> k
[23:04] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:05] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[23:07] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[23:08] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[23:09] * toast (~cryptk@2WVAABQD8.tor-irc.dnsbl.oftc.net) Quit ()
[23:09] * Keiya (~n0x1d@tor-exit.server9.tvdw.eu) has joined #ceph
[23:11] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[23:12] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[23:13] * nathharp (~nathharp@90.200.89.199) Quit (Quit: nathharp)
[23:21] * ajazdzewski__ (~ajazdzews@p200300406E249400C4CEDD34BC4B9A93.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[23:22] * BManojlovic (~steki@cable-89-216-238-140.dynamic.sbb.rs) has joined #ceph
[23:24] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[23:24] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[23:25] * tk12_ (~tk12@68.140.239.132) Quit (Remote host closed the connection)
[23:36] * daniel2_ (~daniel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[23:38] * tk12 (~tk12@68.140.239.132) has joined #ceph
[23:39] * Keiya (~n0x1d@98EAABCXR.tor-irc.dnsbl.oftc.net) Quit ()
[23:39] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[23:39] * clarjon1 (~Pulec@edwardsnowden2.torservers.net) has joined #ceph
[23:42] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[23:42] * Kioob (~Kioob@ALyon-651-1-58-69.w2-3.abo.wanadoo.fr) Quit (Quit: Leaving.)
[23:45] * sleinen1 (~Adium@2001:620:0:82::102) Quit (Ping timeout: 480 seconds)
[23:46] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[23:46] * daniel2_ (~daniel@12.164.168.117) has joined #ceph
[23:49] * kevinc (~kevinc__@client65-162.sdsc.edu) Quit (Quit: Leaving)
[23:50] * tk12_ (~tk12@68.140.239.132) has joined #ceph
[23:50] * tk12 (~tk12@68.140.239.132) Quit (Read error: Connection reset by peer)
[23:52] * jwilkins (~jwilkins@c-50-131-97-162.hsd1.ca.comcast.net) has joined #ceph
[23:55] * alram (~alram@206.169.83.146) Quit (Quit: leaving)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.